How Gentlent Rapidly Glues Together Websites

Sharing how Gentlent rapidly glues together websites and custom infrastructure.

By Tom Klein · ~ 12 min read

Here at Gentlent, we've optimised our internal processes to rapidly design, build, and publish websites as well as code changes to the underlying infrastructure.

These websites range from small one-pagers for project-specific information to full-blown company websites.

So — how did we do it?

Part 1 - Servers

Every project usually starts with hosting, or servers to be more specific, and Gentlent is no exception. Some computer connected to the internet needs to serve our rendered content, preferably on port 443.

This is where our first advantage lies. We have one codebase for the majority of our sites and services. Every single server is able to handle our HTTP, HTTPS, SMTP, and DNS traffic, amongst other protocols we have implemented. When deploying new code, we don't have to worry about spinning up new servers and scaling them individually; we simply roll the code out onto our existing distributed network of servers. This has some benefits that are super useful for our use case:

  • Automated Scaling - Because all our servers run the same code and sit behind a container-based orchestration service, we can very quickly reset machines, spin up and integrate new ones through automated distribution of the pre-built container images, and scale dynamically as needed, both vertically and horizontally. This works because our code is optimised to be stateless and to utilise multi-threading (see the sketch after this list).
  • Anycast Network - As discussed in a previous blog post, the Gentlent core infrastructure utilises a mix of Anycast, Geo-DNS, and load balancing to route incoming requests to the closest available data centre. This gives us a solid fail-over strategy in case of network outages, lets us re-route traffic away from specific machines during maintenance, and gives us a rough idea of which region is busiest so we can spin up more servers there to reduce latency.
  • Distributed Everything - All our servers are distributed to some extent. Our codebase is deployed to all servers globally; our database only to a few select, more powerful servers located in a hardened, secure environment. Combined with our routing setup, edge servers always connect to the closest available database replica and both read from and write to it over a secured SSL/TLS connection.
  • Centralised Maintenance - One of the main benefits of this single, distributed setup is that it's easy for us to push (approved and reviewed) changes and maintain the infrastructure as needed. If we push a change that improves latency or security, all our projects benefit from it immediately.
  • Third-Party CDN - We also utilise one of the largest CDNs available to us: Cloudflare. Cloudflare further improves our security and, most importantly, our web performance by aggressively caching critical static assets in its colos all around the globe (and hopefully in space once needed).
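
To make the "stateless and multi-threaded" point from the scaling bullet above concrete, here is a minimal sketch of how a Node.js service can fork one worker per CPU core while keeping no state in the process itself, so any worker on any server can handle any request. The module layout and port are illustrative assumptions, not Gentlent's actual code.

```ts
// server.ts - minimal sketch: one stateless HTTP worker per CPU core.
// All shared state lives in an external store, so workers are interchangeable.
import cluster from "node:cluster";
import { createServer } from "node:http";
import { cpus } from "node:os";

if (cluster.isPrimary) {
  // Fork one worker per core; replace crashed workers to stay available.
  for (let i = 0; i < cpus().length; i++) cluster.fork();
  cluster.on("exit", () => cluster.fork());
} else {
  createServer((req, res) => {
    // No in-process session state: everything is looked up per request.
    res.writeHead(200, { "content-type": "text/plain" });
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(8080); // in practice this sits behind TLS termination on port 443
}
```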

Now that we have our servers and basic infrastructure figured out, how do we reduce the work needed to actually serve our sites?

Part 2 - vHosts & SSL/TLS

Let's jump into the next critical component: virtual hosts and SSL/TLS encryption. We use our own custom web server that dynamically routes incoming HTTP/HTTPS requests to a custom code path, based on the values stored in our distributed database. SSL/TLS certificates are also distributed to the servers through our database (with additional encryption in place). Having centralised storage distribute the virtual hosts, including which protocol versions should be served, and the necessary certificates is critical to avoid manually pushing configuration changes to each server individually. But how do we issue the certificates we need in the first place?
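
As a rough illustration of database-driven virtual hosts, the sketch below resolves the incoming Host header against records cached from a hypothetical vhosts table and dispatches to the matching code path. The record shape, column names, and handler registry are assumptions for illustration, not Gentlent's actual schema or server.

```ts
// vhost-router.ts - sketch of host-based routing driven by database records.
import { createServer, IncomingMessage, ServerResponse } from "node:http";

// Hypothetical shape of a vhost record as it might be cached from the database.
interface VHost {
  hostname: string;      // e.g. "www.example.com"
  handler: string;       // name of the code path to dispatch to
  tlsVersions: string[]; // e.g. ["TLSv1.2", "TLSv1.3"]
}

// In practice this cache would be refreshed from the distributed database.
const vhosts = new Map<string, VHost>();
const handlers: Record<string, (req: IncomingMessage, res: ServerResponse) => void> = {
  notFound: (_req, res) => { res.writeHead(404); res.end("unknown host\n"); },
};

createServer((req, res) => {
  const host = (req.headers.host ?? "").split(":")[0].toLowerCase();
  const vhost = vhosts.get(host);
  const handle = handlers[vhost?.handler ?? "notFound"] ?? handlers.notFound;
  handle(req, res);
}).listen(8080);
```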

Yet another custom piece of software comes into play for SSL/TLS certificate issuance. We implemented our very own, custom ACME client that generates private keys and certificate signing requests (CSRs) on the fly using either RSA or ECDSA, and sends the CSR to an ACME provider of our choice, which is Let's Encrypt by default, with some backup CAs configured in case of downtime.
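
For comparison, the same flow can be sketched with the open-source acme-client npm package: generate a key and CSR (RSA or ECDSA) on the fly and ask a CA such as Let's Encrypt to sign it. This is a generic ACME sketch under that assumption, not our client; the challenge handlers are left as stubs and the email address is a placeholder.

```ts
// acme-issue.ts - generic ACME issuance sketch using the acme-client package.
import * as acme from "acme-client";

async function issueCertificate(domain: string): Promise<{ key: string; cert: string }> {
  const client = new acme.Client({
    directoryUrl: acme.directory.letsencrypt.staging, // swap to production (or a backup CA) for real use
    accountKey: await acme.crypto.createPrivateKey(),
  });

  // ECDSA key + CSR generated on the fly; createPrivateRsaKey() would give RSA instead.
  const [key, csr] = await acme.crypto.createCsr(
    { commonName: domain },
    await acme.crypto.createPrivateEcdsaKey()
  );

  const cert = await client.auto({
    csr,
    email: "hostmaster@example.com",
    termsOfServiceAgreed: true,
    // These stubs must publish and clean up the challenge (e.g. a DNS TXT record or HTTP token).
    challengeCreateFn: async () => { /* publish challenge */ },
    challengeRemoveFn: async () => { /* remove challenge */ },
  });

  return { key: key.toString(), cert };
}
```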

Fully automated SSL/TLS issuance obviously has the advantage that, again, any change we make is rolled out to all our servers. For example, when we introduced OCSP Stapling, we wanted to incorporate the OCSP Must-Staple flag into our certificates to force major browsers to actually use OCSP instead of bypassing revocation checks in case our certificates ever get compromised. But there is also a major disadvantage:

If the server on which the fully automated SSL/TLS issuance runs ever shuts down and goes unnoticed for long enough, certificates won't get renewed and downtime will hit (sometimes unexpected) services ranging from HTTP(S) to SMTP. Sounds like an easy fix? Just run it on multiple servers? But did you think about the strict rate limits and tight error tolerances that CAs have in place to prevent abuse? What if those instances accidentally run at the same time and cause collisions or duplicates? For these cases, we've implemented yet another core component:

A distributed, fault-tolerant, self-orchestrating queue. A variety of use cases need this type of queue: invoice generation, health checks (which also alert us to issues with any of our components), and SSL/TLS issuance. This led us to implement a custom queue (think of it as a distributed yet organised cron job) that runs at a predefined interval without a single point of failure and redistributes jobs to other servers in case of failure, whilst making sure a job never runs twice. It's a complex topic in its own right and will likely be covered in its own future blog post.
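
While the full queue deserves its own post, the core "run exactly once across many servers" trick can be sketched with a database lock: whichever server grabs the lock runs the job, everyone else skips that interval. The sketch below assumes a PostgreSQL-compatible database and the node-postgres (pg) driver purely for illustration; it is not the actual queue implementation.

```ts
// run-once.ts - sketch: a cron-like job that runs on exactly one server at a time.
import { Pool } from "pg";

const pool = new Pool(); // connection details come from the usual PG* environment variables

async function runExclusive(jobId: number, job: () => Promise<void>): Promise<void> {
  const client = await pool.connect();
  try {
    // pg_try_advisory_lock returns true only for the first server that asks;
    // every other server sees false and simply skips this interval.
    const { rows } = await client.query("SELECT pg_try_advisory_lock($1) AS locked", [jobId]);
    if (!rows[0].locked) return;
    try {
      await job(); // e.g. certificate renewal, invoice generation, health checks
    } finally {
      await client.query("SELECT pg_advisory_unlock($1)", [jobId]);
    }
  } finally {
    client.release();
  }
}

// Example: attempt the SSL/TLS renewal job every hour; only one server actually runs it.
setInterval(() => runExclusive(42, async () => { /* renew certificates */ }), 60 * 60 * 1000);
```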

Part 3 - Modular Functions & APIs

After all the trouble of automating servers, vhosts, certificates, and everything related to them, one still has to write the logic that actually handles incoming requests beyond the HTTP connection and virtual host routing.

As many of you can already guess from the title of this section, we reuse a lot of code by keeping parts (especially functions) modular and reusable. There are public REST APIs that we publish to make building frontends easier and more independent of the underlying backend, and there are specific helper functions that are independent of the rest of the code and can be used across all code paths. Need examples? Audit logs, billing, caching, cookies, crawlers, crypto, blocklists, entitlements, exchange rates, geo IP lookups, centralised ID generation, translations, database connections, email sending, sanitizers, password handling, OAuth, web requests, and much more. All of these can be, and are, reused across all projects and code paths, which makes maintenance and pushing non-breaking changes much easier.
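
To give a flavour of what such a modular helper looks like, here is a hypothetical centralised ID generation module that any code path could import. The name and ID format are made up for illustration and are not Gentlent's actual helper.

```ts
// lib/id.ts - hypothetical shared helper: time-sortable, collision-resistant IDs.
import { randomBytes } from "node:crypto";

// Prefix with a millisecond timestamp so IDs sort roughly by creation time,
// then append random bytes so concurrent servers never collide in practice.
export function generateId(prefix = "gid"): string {
  const time = Date.now().toString(36);
  const rand = randomBytes(8).toString("hex");
  return `${prefix}_${time}${rand}`;
}

// Any code path (billing, audit logs, ...) reuses the same helper:
// import { generateId } from "./lib/id";
// const invoiceId = generateId("inv");
```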

We even have an API to manage our own public key infrastructure (PKI), which is used for our intra-server communications, and a custom internal reverse proxy that re-routes incoming traffic to separate services running on different ports.
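
An internal reverse proxy of that kind can likewise be reduced to a small sketch: pick a backend port based on a path prefix and pipe the request through. The port map and prefixes below are purely illustrative.

```ts
// internal-proxy.ts - sketch: re-routing incoming traffic to services on other ports.
import { createServer, request } from "node:http";

// Hypothetical mapping of path prefixes to internal service ports.
const routes: Record<string, number> = { "/api": 4001, "/auth": 4002 };

createServer((req, res) => {
  const prefix = Object.keys(routes).find((p) => req.url?.startsWith(p));
  const port = prefix ? routes[prefix] : 4000; // default backend service
  const upstream = request(
    { host: "127.0.0.1", port, path: req.url, method: req.method, headers: req.headers },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  upstream.on("error", () => { res.writeHead(502); res.end(); });
  req.pipe(upstream);
}).listen(8080);
```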

Part 4 - Frontends

Our journey continues to the frontend side of things. Rapidly gluing together the underlying infrastructure is one thing; designing and maintaining a usable, user-friendly frontend is another. But we have found a very similar approach for it.

Our websites and projects are usually split into reusable components, except for the main content of a site, which usually differs and is not reusable. We've got components for navigations, sidebars, footers, alerts, calls to action, and more, created as needed. All of this is supported by our own, yet again custom, CSS (Cascading Style Sheets) and JS (JavaScript) framework, which is used for all our projects as well as select customer sites. It includes button designs, navigations, containers, form inputs, and everything else you can think of. In fact, the site you're currently viewing relies entirely on this framework. We then use JavaScript to enhance these designs with functions like moving title attributes into a tooltip that pops up when you hover over the respective element, or preloading linked pages to speed up page switches.
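
The link-preloading behaviour mentioned above can be approximated in a few lines of client-side TypeScript: when a visitor hovers an internal link, a prefetch hint is added so the next page is likely already cached when they click. This is a generic sketch, not the framework's actual code.

```ts
// prefetch.ts - sketch: warm the browser cache for internal links on hover.
document.addEventListener("mouseover", (event) => {
  const link = (event.target as Element | null)?.closest?.("a[href^='/']");
  if (!(link instanceof HTMLAnchorElement) || link.dataset.prefetched) return;
  link.dataset.prefetched = "true";

  // Ask the browser to fetch the page in the background, ahead of the click.
  const hint = document.createElement("link");
  hint.rel = "prefetch";
  hint.href = link.href;
  document.head.appendChild(hint);
});
```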

Some projects require the frontend to be completely separated from the backend. Our frameworks support that use case as well. For example, we designed a whole website for one of our customers that used a WordPress-based backend. We were able to create the entire frontend, including connecting it to the backend, in less than two working days, and it worked like a charm.
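
A decoupled frontend like that typically just consumes the WordPress REST API. The sketch below fetches posts from the standard /wp-json/wp/v2/posts endpoint; the site URL and the rendering step are placeholders, and this is not the customer project's actual code.

```ts
// wp-posts.ts - sketch: a decoupled frontend pulling content from a WordPress backend.
interface WpPost {
  id: number;
  title: { rendered: string };
  content: { rendered: string };
}

async function loadPosts(siteUrl: string): Promise<WpPost[]> {
  // WordPress exposes content via its built-in REST API under /wp-json/wp/v2/.
  const res = await fetch(`${siteUrl}/wp-json/wp/v2/posts?per_page=10`);
  if (!res.ok) throw new Error(`WordPress API returned ${res.status}`);
  return res.json();
}

// Usage (placeholder URL): loadPosts("https://example.com").then((posts) => { /* render posts */ });
```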

Part 5 - Deployments

Let's assume we've finished coding a new feature and the infrastructure is already in place. How do we roll it out? Easy! Just push it to our Git repository and you're done. At least, that's the workflow on a developer's side. A bunch of next steps are then kicked off automatically:

  1. Linting & Testing - First of all, a pipeline runs automatically that performs linting checks as well as our test suite. As everyone has a different coding style, this helps us maintain a consistent syntax and enhance readability for anyone working on the code later.
  2. Merge Requests & Review Process - If the previous step passes, a merge request is created automatically, which then needs to be filled out and completed by the software engineer responsible for the change. Once they have done that, a notification is emailed to potential reviewers, who review the code, test it, and either approve the change or reject it with useful feedback.
  3. The Merge (and Deployment) - Once a merge request is approved, it is merged into the upstream branches, which kicks off another linting and testing pipeline and then runs our deployment code. In this stage, the new codebase is built into a container image, dependencies are installed and set up, the commit messages are analysed and appended to a changelog, a version number is generated, an email about the change is sent to the team, required secrets are loaded into the environment, and the final image is uploaded to our container registry. It is then deployed to all servers, one after another, whilst being monitored closely with health checks, until the rollout either completes successfully or a health check fails and the version is rolled back immediately to restore availability (roughly as sketched below). These steps are fully automated and do not require any human intervention.
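
The "deploy one server at a time, watch the health checks, roll back on failure" part of step 3 can be sketched roughly as follows. The server list, health endpoint, and deployImage stub are assumptions for illustration, not our actual deployment tooling.

```ts
// rolling-deploy.ts - sketch of a rolling deployment with health checks and rollback.

async function deployImage(server: string, image: string): Promise<void> {
  // Stub: in practice this would instruct the orchestrator to swap the running container.
  console.log(`deploying ${image} to ${server}`);
}

async function healthy(server: string): Promise<boolean> {
  try {
    const res = await fetch(`https://${server}/healthz`); // hypothetical health endpoint
    return res.ok;
  } catch {
    return false;
  }
}

async function rollingDeploy(servers: string[], newImage: string, previousImage: string) {
  for (const server of servers) {
    await deployImage(server, newImage);
    if (!(await healthy(server))) {
      // One failed health check aborts the rollout and restores the previous version everywhere.
      for (const s of servers) await deployImage(s, previousImage);
      throw new Error(`health check failed on ${server}, rolled back to ${previousImage}`);
    }
  }
}
```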

So, all that for saving some hours?

Yes, but not only that. It allows us to quickly prototype our ideas and carry them to production in an efficient, optimised way. This might not work for everyone, and it certainly took us a couple of years to achieve the independence we have, but it was worth it: we learned a lot along the way and can share these achievements with our partners and colleagues to help them shape their workflows to be more efficient and useful. For us, it gives us the power to finally focus on ideas again, leaving the complex world of implementation behind, to an extent.

There are parts I haven't mentioned yet, for example domain management, A/B testing, and issue tracking. Maybe we'll cover them in future posts, but our goal stays the same: minimising required maintenance whilst allowing flexible use of all our modular components.

Also, if you are interested in a specific topic or want to collaborate on a blog post, then it's your time to shine! Let us know and we'll figure something out. :)


Tom Klein
Founder & CEO
Gentlent UG (haftungsbeschränkt)
