Infra Starter Kit

by Laurent Durnez

From the first idea to the current platform, we have created an MVP (Minimum Viable Product) that is running and fits our needs perfectly. At the beginning we used it for our own usage and internal tests, then we started to build the infrastructure to host our first version.

Finally, we assembled the different blocks needed to launch and open the service to everyone. We now have what we could call the minimal infrastructure kit.

Current architecture

Saucs infrastructure

To render the website, the architecture is split into 3 different groups: frontend, backend and services.

All our servers are OVH Virtual Private Servers. They are based on OpenStack, and we can use the OpenStack API to scale easily.

Frontend servers

The frontend is composed of 2 layers: one load balancer and several web servers. The load balancer is a key element: it splits the traffic between our web servers and provides high availability by checking their health status and rebalancing the traffic if necessary.

On our side, it is particularly useful when we want to upgrade the core code: the node is removed from the load balancer backend and the traffic is rebalanced, allowing us to update the code freely with no impact.

Our load balancer is based on OVH IPLB, where we have declared 2 frontends:

  • port 80: we use the IPLB routes to redirect to 443
  • port 443: Let’s Encrypt certificate, generated and renewed automatically
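Functionally, the port-80 route amounts to a permanent redirect to HTTPS. As a toy sketch (not IPLB code, just the equivalent behavior expressed as a function):

```python
def http_to_https(host, path="/"):
    """What the port-80 route does, expressed as a function:
    answer every plain-HTTP request with a redirect to HTTPS."""
    return "301 Moved Permanently", f"https://{host}{path}"

# Example: a request for http://example.com/cve is redirected
status, location = http_to_https("example.com", "/cve")
```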

Our web servers compose the IPLB backend. They are installed with the following stack:

  • Nginx to serve the static files
  • Gunicorn to run our Python code
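For the second layer, Gunicorn only needs a WSGI callable to serve. A minimal, hypothetical one (not our actual application code) looks like this:

```python
# Minimal WSGI application that Gunicorn can serve, e.g. with
# `gunicorn wsgi:app`. Hypothetical example, not the real Saucs code.

def app(environ, start_response):
    # `environ` holds the request data passed in by Gunicorn;
    # Nginx sits in front and serves the static files directly.
    body = b"Hello from behind Nginx and the IPLB\n"
    status = "200 OK"
    headers = [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ]
    start_response(status, headers)
    return [body]
```

Nginx proxies the dynamic requests to Gunicorn and serves everything else itself.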

When you subscribe, you receive a confirmation email. Any of our web servers can send these emails, and they should not be flagged as spam. For this, we use:

  • DKIM signature
  • Public IP declaration in our DNS zone
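One common way to declare the sending IPs in a DNS zone is an SPF TXT record listing every web server's public IP. Assuming that is the mechanism meant here, generating such a record is straightforward (the addresses below are placeholders, not our real servers):

```python
def spf_record(server_ips):
    """Build an SPF TXT record value authorizing the given IPv4
    addresses to send mail; any other host is rejected ("-all")."""
    parts = ["v=spf1"]
    parts += [f"ip4:{ip}" for ip in server_ips]
    parts.append("-all")
    return " ".join(parts)

# Placeholder addresses from the documentation range 192.0.2.0/24
print(spf_record(["192.0.2.10", "192.0.2.11"]))
# v=spf1 ip4:192.0.2.10 ip4:192.0.2.11 -all
```

When a server is added or removed, the record has to be regenerated, which fits well with the automated scaling described below.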

The important part of our frontend layer is horizontal scaling: at any time we can add web servers to handle more traffic. With the public API, we can scale automatically: add a VPS with OpenStack, add it to the IPLB backend, and update the DNS zone with the VPS public IP. Lastly, we can also scale vertically by upgrading our VPS if needed.
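The scale-out sequence boils down to three API calls chained together. The helpers below are stand-ins for the real OpenStack, IPLB and DNS API clients, not actual OVH client code; the point is only the orchestration:

```python
# Sketch of the horizontal scale-out workflow. Each helper is a
# placeholder for the corresponding OVH / OpenStack API call.

def create_vps(name):
    # Stand-in for booting a VPS via the OpenStack API;
    # returns the new server's public IP (placeholder address).
    return {"name": name, "ip": "192.0.2.42"}

def add_to_iplb_backend(ip):
    # Stand-in for declaring the server in the IPLB backend
    return f"backend += {ip}"

def add_dns_record(name, ip):
    # Stand-in for adding an A record to the DNS zone
    return f"{name} A {ip}"

def scale_out(name):
    """Boot a VPS, register it in the load balancer, publish its IP."""
    server = create_vps(name)
    return [
        add_to_iplb_backend(server["ip"]),
        add_dns_record(server["name"], server["ip"]),
    ]
```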

Backend servers

Our backend servers are of 2 kinds: the database and the Celery worker.

At Saucs, we use PostgreSQL for our database, based on our personal experience: it works well under load and has extensive functionality.

We have 2 database servers, in master / slave replication. We set up the replication for 4 reasons:

  • to take a transparent daily backup on the slave
  • to run “select” queries when we need stats
  • to be able to switch to the slave in case the master fails
  • to upgrade our server size with no interruption

The main problem with this setup is that it is impossible to scale horizontally. We could adopt another technology to build a SQL cluster, but that is out of scope for our MVP; master / slave is our KISS solution for now.

Aware of this, we know that we can still scale vertically because we are on VPS: with master / slave, we can upgrade the slave, promote it as our new master, then upgrade our old master. The advantage: little to no interruption of service. Before thinking about clustering, we have 2 other KISS scaling options: use Redis on our servers as a cache to reduce the load on the database, and scale the slave servers to handle all the “select” queries.
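The second option, offloading the “select” queries, amounts to routing statements by type. A toy router could look like this (the "master" and "slave" arguments stand in for real database connections; this is a sketch, not our production code):

```python
def pick_server(query, master, slave):
    """Route read-only SELECT statements to the replica and
    everything else (INSERT, UPDATE, DDL, ...) to the master."""
    if query.lstrip().lower().startswith("select"):
        return slave
    return master

# Reads hit the slave, writes hit the master
reads = pick_server("SELECT * FROM cve", "master", "slave")
writes = pick_server("UPDATE cve SET seen = true", "master", "slave")
```

A real setup also has to account for replication lag: a freshly written row may not be visible on the slave yet.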

Another backend server hosts the Celery worker, which collects the CVE data and creates the reports and alerts. For now we have 1 server for it; the service is stateless and can be stopped if we need to upgrade the VPS.
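Conceptually, the worker's job can be summarized in two plain functions; here the NVD fetch and the subscription data are stubbed out, and the real code runs these steps as Celery tasks:

```python
# Sketch of what the Celery worker does. The fetch is a stub; the
# real worker pulls the actual NVD feed on a schedule.

def fetch_new_cves():
    # Stand-in for downloading the latest entries from the NVD feed
    return [{"id": "CVE-2017-0001", "product": "example"}]

def build_alerts(cves, subscriptions):
    """Match new CVEs against user subscriptions and return
    (user, cve_id) pairs to email out."""
    return [
        (user, cve["id"])
        for cve in cves
        for user, product in subscriptions
        if product == cve["product"]
    ]

subs = [("alice@example.com", "example")]
alerts = build_alerts(fetch_new_cves(), subs)
```

Because the task holds no local state between runs, the server can be stopped and restarted freely, which is what makes the VPS upgrade painless.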


Service servers

These servers handle the services around our core code; they help keep the infrastructure running :)

With our VPS, we use the vRack solution to create an internal network between our servers. All our internal / management traffic goes through this private network. The objective is to clearly separate the customer traffic from our management traffic.

Our service stack is composed of the following blocks:


Bastion

SSH entry point to our internal network.

Configuration and deployment manager

Being familiar with Puppet, I wanted to test another approach with SaltStack. The other advantage is its integration with cloud solutions like OpenStack to scale our VPS. We keep in mind to automate everything.


Monitoring

We use Zabbix, mainly for KISS reasons: it is an agent / server monitoring solution with easy configuration, and it handles graphs natively. The template system is interesting: we can add ready-made community templates to monitor our services.

Since we are very dependent on the database, we also use pgBadger to get statistics on our PostgreSQL DB. Easy to use and implement, it very quickly pointed out several SQL query optimizations.

As a rule of thumb, there is never enough monitoring.

Internal DNS zone

Our admin server serves the internal zone that maps our internal network.


Logs

We use the Thot service from OVH to keep some of our logs. Our syslog-ng configuration forwards our web and Celery worker logs to their platform.

With syslog-ng, we can parse our logs and create indexes on them before sending them to Thot. This allows us to build dashboards from our access logs, such as graphs of ‘200’ HTTP response codes and ‘4XX’ & ‘5XX’ HTTP errors, and to query them for the top referers of the last 30 days, top source IPs, top user agents, etc.
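The kind of aggregation those dashboards perform is easy to illustrate over already-parsed log entries. The sample data below is made up; the real entries come from syslog-ng:

```python
from collections import Counter

# Toy version of the dashboard queries: count HTTP status classes
# and find the top referers from parsed access-log entries.
entries = [
    {"status": 200, "referer": "google.com"},
    {"status": 200, "referer": "google.com"},
    {"status": 404, "referer": "-"},
    {"status": 502, "referer": "news.ycombinator.com"},
]

# "2XX", "4XX", "5XX" buckets, like the dashboard graphs
status_classes = Counter(f"{e['status'] // 100}XX" for e in entries)

# Top referers, like the 30-day top-referer query
top_referers = Counter(e["referer"] for e in entries).most_common(2)
```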

And next?

The next step for our infrastructure is to create a pre-production environment, to be able to test our next code updates in real conditions before pushing them to production. Unit tests prevent most problems, but we prefer to run the new code for several days and wait for several updates from the NVD, as functional tests, to be sure that everything behaves the way we expect.

It will allow us to qualify the new changes more efficiently and to test them manually, to be sure that all our updates keep our service intuitive and easy to use. Always KISS :)