Terraforming Ghost: Secure origin connection - PART 2

Automate Ghost blog with Terraform, Docker and Cloudflare. Part 2: Set up end to end encryption and secure your origin.

In my last article, we bootstrapped a Ghost and Commento blog configuration, combining Cloudflare and Digitalocean as our infrastructure, all automated with Docker and Terraform. In this article, we will review our initial configuration and add new features and improvements.

(Photo by Sergei Akulich)

If you have not yet read the first part of this tutorial, I strongly encourage you to do so, as in this article we are going to expand and refine that setup. Here is an overall architectural diagram of our baseline.

High-Level Diagram - from Part 1

Our goal is to change this setup to introduce further components and to improve security, in particular on the connection between Cloudflare and our Digitalocean environment.

As a  reminder, the full code for this tutorial is available on Github:

  • Part 1 - our baseline
  • Part 2 - the completed version that is being described here, to be used as reference.
  • Part 3 - Adding data persistence with Digitalocean Block Storage.

The commit with all the changes between Part 1 and Part 2 is here. Should be easier to follow 😀

SSL and Certificates

To begin with, we will achieve the following:

  1. Obtain SSL certificates from Let's Encrypt and set up auto-renewals
  2. Install certificates on Nginx
  3. Upgrade the Cloudflare <--> Digitalocean connection to HTTPS

Lastly, once we have verified this setup, we will go one step further and configure Cloudflare's Authenticated Origin Pulls to implement mutual TLS between Cloudflare and Nginx, for maximum security. Here's a recap with the new elements (SSL Certificate icon courtesy of Freepik):

High-Level Diagram - secure connectivity between Cloudflare and Digitalocean

Let's Encrypt certificate with Certbot

The first step is to set up appropriate certificates for our origin. This topic has been discussed several times on this blog in the past, so I will skip the general considerations and focus on what we want to achieve:

  • We want to use valid, Let's Encrypt certificates.
  • We want to install these certificates on our Nginx server.
  • We want to periodically check that the certificates are still valid, and if not renew them and then gracefully reload Nginx to pick up the updated ones.

💡 A different approach here could be to set up a Cloudflare Origin CA certificate, something that can be easily automated with Terraform. Feel free to try that, and let me know if it worked well for you. 👍
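
For the curious, here is a minimal sketch of what that alternative could look like in Terraform, assuming the cloudflare and hashicorp/tls (v4 syntax) providers are configured. The variable names are illustrative, not from the repository:

# Sketch only: issue a Cloudflare Origin CA certificate instead of Let's Encrypt.
resource "tls_private_key" "origin" {
  algorithm = "RSA"
}

resource "tls_cert_request" "origin" {
  private_key_pem = tls_private_key.origin.private_key_pem

  subject {
    common_name = var.cloudflare_domain # hypothetical variable
  }
}

resource "cloudflare_origin_ca_certificate" "origin" {
  csr                = tls_cert_request.origin.cert_request_pem
  hostnames          = [var.cloudflare_domain, "*.${var.cloudflare_domain}"]
  request_type       = "origin-rsa"
  requested_validity = 5475 # days (~15 years), the maximum Cloudflare allows
}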

We are going to modify our web-cloud-init.yaml file to add our certbot service:

certbot:
  image: certbot/dns-cloudflare:latest
  volumes:
    - ${PWD}/cloudflare.ini:/opt/certbot/conf/cloudflare.ini:ro
    - certificates_data:/etc/letsencrypt
  command: "certonly
    --non-interactive
    --agree-tos
    --no-eff-email
    --preferred-challenges dns-01
    --dns-cloudflare
    --dns-cloudflare-credentials /opt/certbot/conf/cloudflare.ini
    -d '*.${cloudflare_domain}'
    --email ${certbot_email}"
web-cloud-init.yaml (full source)

Here, we map a configuration file containing the required credentials, letting Certbot use the Cloudflare API to write the DNS verification records needed for validation and issuance by Let's Encrypt.
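
For reference, the credentials file used by the dns-cloudflare plugin is a one-liner. With a scoped API token (which needs Zone / DNS / Edit permission on the zone), it looks like this - the token value is of course a placeholder:

# cloudflare.ini - keep this file out of version control
dns_cloudflare_api_token = your-cloudflare-api-token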

You may also have noticed that I am requesting a wildcard certificate for the domain, meaning we will be able to use just one certificate for all our subdomains (www, commento and static). This is convenient, but you could easily use separate certificates for each subdomain if needed.

Finally, note that we added a volume (certificates_data) which needs to be declared in the volumes section of our Docker Compose file alongside the other ones defined:

volumes:
  postgres_data:
    
  mysql_data:
      
  www_data:

  certificates_data:
    name: certbot-certificates      
web-cloud-init.yaml (full source)

This volume will contain the certificates obtained by certbot and we will use it to share them with our Nginx image. We also give it a name so that we can reference it in other parts of our tutorial.
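
If you want to double-check the volume once the stack is up, the explicit name makes that easy (run these on the droplet; output will vary):

# List all volumes, then inspect the named one created by docker-compose
docker volume ls
docker volume inspect certbot-certificates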

There is another detail to keep in mind. We have a configuration composed of multiple containers, and there is a dependency between them. For example, we need to make sure that the certificates have been obtained from Let's Encrypt before we can start up our Nginx container, which needs to load them to run properly based on our configuration.

Docker Compose allows specifying a depends_on property where we can declare, for example, that container A depends on containers B and C. However, this simply checks that the dependencies (B and C in the above example) have been started. It will not check that the certbot container has successfully obtained certificates - there is no way of telling this to Docker Compose, as this quote from the Docker Compose docs explains:

📝 depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
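
To make the limitation concrete, here is a minimal sketch of what depends_on expresses in our case (service names as used in this tutorial):

nginx:
  image: nginx:stable-alpine
  depends_on:
    - certbot   # guarantees certbot has *started* first - not that certs exist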

To accomplish this, I have modified the nginx section of the Docker Compose configuration: it will use a custom entry point and startup command - see the snippet below (I removed most of the configuration for clarity):

nginx:
  image: nginx:stable-alpine
  container_name: nginx-container
  volumes:
    [...]
  entrypoint: /nginx-entrypoint.sh
  command: ["nginx", "-g", "daemon off;"]
web-cloud-init.yaml (full source)

We can then inspect the custom nginx-entrypoint.sh script that I have included in my web-cloud-init.yaml (so that it is deployed to the droplet during its startup, and can be used by Docker Compose). In my case, I copied the entry point script defined in the official repo for the nginx:stable-alpine image (see here) and added a code snippet at the top of it:

# We wait for certbot to have issued the certificates before starting up
while (! test -f "/etc/letsencrypt/live/${cloudflare_domain}/fullchain.pem") || (! test -f "/etc/letsencrypt/live/${cloudflare_domain}/privkey.pem"); do
  sleep 5
  echo "Waiting for certs..."
done
nginx-entrypoint.sh addition (full source)
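
The rest of the script is the official entrypoint, unchanged. For context, it ends by handing control over to the configured command - this is, to the best of my knowledge, how the official image's entrypoint behaves:

# ...rest of the official docker-entrypoint.sh follows, ending with:
exec "$@"   # runs the command from our compose file: nginx -g 'daemon off;'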

The above code tests for the existence of the certificate and key files on the shared volume that is used by both nginx and certbot. The sequence will be the following:

  • The certbot container starts first (due to the depends_on directive).
  • The Nginx container is also started, and our customised script checks for the certificate and private key files. If they are not found, it sleeps for 5 seconds and tries again.
  • In the meantime, the certbot container begins the certificate creation and DCV (Domain Control Validation) process.
  • Let's Encrypt finally issues the certificate, which is saved on the shared volume. The certbot container terminates.
  • Finally, the entrypoint script of the Nginx container observes the presence of the files, and resumes the normal startup of Nginx.

For full clarity, the reference ${cloudflare_domain} is in fact a template variable in our web-cloud-init.yaml Terraform template file. Terraform will substitute the actual value when rendering web-cloud-init.yaml during terraform apply. See here for the full code explaining how we pass it to the template and where the value comes from.
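
As a hedged sketch of that mechanism (the repository may wire this up slightly differently, for example via the template provider; variable names are illustrative):

# In digitalocean.tf: render the cloud-init template with its variables
user_data = templatefile("${path.module}/web-cloud-init.yaml", {
  cloudflare_domain = var.cloudflare_domain
  certbot_email     = var.certbot_email
})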

Certificate renewal

Once we have obtained the certificates and started our infrastructure, we need to make sure that these certificates are kept valid and renewed when required. Remember, the certbot container we defined in Docker Compose creates the certificates once, and does nothing else.

To achieve this, I will modify my web-cloud-init.yaml to deploy another script, which will be installed in the crontab of the Docker host machine.

Here is the script:

#!/bin/sh
docker run --rm \
  -v "/var/log/certbot-renew:/var/log/letsencrypt" \
  -v "certbot-certificates:/etc/letsencrypt" \
  -v "/opt/scripts/cloudflare.ini:/opt/certbot/conf/cloudflare.ini:ro" \
  certbot/dns-cloudflare:latest \
  renew \
  --agree-tos \
  --keep-until-expiring \
  --non-interactive \
&& docker exec nginx-container nginx -s reload
web-cloud-init.yaml (full source)

And here are the two commands to add to our Cloud Init file:

- chmod +x /opt/scripts/certbot-renew.sh
- (crontab -l ; echo "0 17 * * * bash /opt/scripts/certbot-renew.sh") | crontab -
web-cloud-init.yaml (full source)

In other words, we run the script every day at 17:00. The script launches a standalone Docker instance of Certbot, which attempts renewal of the certificates stored on our certificates_data Docker volume. Once that is done, it bounces Nginx gracefully so that it reloads the certificates (existing or new ones).

Do note that we refer to the named volume we created via docker-compose, using the certbot-certificates name (so that the renewal looks up the same folder). We also refer to the container_name we assigned earlier (nginx-container) to find the correct Nginx container to bounce.
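
Before trusting the cron job, it is worth rehearsing the same flow against Let's Encrypt's staging environment. Something along these lines should work, using the same mounts as the renewal script:

# Dry-run the renewal: talks to the staging CA and saves nothing
docker run --rm \
  -v "certbot-certificates:/etc/letsencrypt" \
  -v "/opt/scripts/cloudflare.ini:/opt/certbot/conf/cloudflare.ini:ro" \
  certbot/dns-cloudflare:latest \
  renew --dry-run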

Upgrade to secure connection

Once we have sorted out the certificate management and its renewal, the rest is fairly simple.

We first need to update our Nginx configuration to use the certificates we have created. For that, we create another file containing all the common TLS configuration we want to share across our web servers. Here's mine:

ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ecdh_curve  X25519:P-256:P-384:P-224:P-521;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256;
ssl_session_cache shared:SSL:10m;
ssl_buffer_size 4k;
ssl_session_timeout 120m;
ssl_session_tickets off; # Requires nginx >= 1.5.9
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx >= 1.3.7
resolver 1.1.1.1 1.0.0.1  valid=300s;
resolver_timeout 5s;
web-cloud-init.yaml (full source)

Then, we change the Nginx configuration files for our webservers to use the certificates and listen on port 443. We also enable HTTP/2. Here is a snippet with the important bits for one webserver:

listen 443 ssl http2;
listen [::]:443 ssl http2;

ssl_certificate /etc/letsencrypt/live/${cloudflare_domain}/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/${cloudflare_domain}/privkey.pem;

include /etc/nginx/snippets/ssl-params.conf;
web-cloud-init.yaml (full source)

Again, ${cloudflare_domain} here is a Terraform template variable as we discussed above.

We then change the Docker Compose template to make our nginx container listen on port 443 instead of port 80.
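
In Compose terms, that change boils down to the port mapping (a sketch - the actual service definition has more settings around it):

nginx:
  ports:
    - "443:443"  # was "80:80" in Part 1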

Last step, we update our Terraform configuration:

  • We change cloudflare.tf to add ssl = "full" in the cloudflare_zone_settings_override resource.
  • We change digitalocean.tf to include all the required new template variables in the digitalocean_droplet declaration. We also update the digitalocean_firewall so that our inbound rule accepts incoming traffic from Cloudflare IP ranges on port 443 instead of 80 (see the sketch below).
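
Here is a hedged sketch of that firewall rule, assuming the Cloudflare ranges come from the provider's cloudflare_ip_ranges data source (the repository may source them differently):

data "cloudflare_ip_ranges" "cloudflare" {}

resource "digitalocean_firewall" "web" {
  # ...name, droplet_ids and other rules omitted...
  inbound_rule {
    protocol         = "tcp"
    port_range       = "443"
    source_addresses = data.cloudflare_ip_ranges.cloudflare.ipv4_cidr_blocks
  }
}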

That's pretty much it. One last step remains to enhance our security even further.

Authenticated Origin Pulls

With the above setup, we are accepting traffic on our origin as long as it comes from the Cloudflare IP range. What if someone sends traffic with spoofed IPs? We can crank up our security level even further with Authenticated Origin Pulls.

In short, we can configure Cloudflare to send a client certificate to Nginx, which we can validate, so that we can discard all traffic without a valid certificate. I discussed this mechanism in the past as well. Can we now do it easily with Terraform? Of course, we can!

We modify cloudflare.tf to add the resource below:

resource "cloudflare_authenticated_origin_pulls" "auth_origin_pull" {
  zone_id     = lookup(data.cloudflare_zones.ghost_domain_zones.zones[0], "id")
  enabled     = true
}
cloudflare.tf (full source)

This will enable Authenticated Origin Pulls on Cloudflare. On our side, we then need to configure Nginx to validate the client certificate against Cloudflare's origin pull CA certificate, which can be downloaded from Cloudflare's documentation:

ssl_client_certificate /etc/nginx/certs/origin-pull-ca.pem;
ssl_verify_client on;
web-cloud-init.yaml (full source)
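
Assuming the CA certificate is still published at the URL given in Cloudflare's documentation (worth double-checking, as it may change), fetching it looks like this:

# Download Cloudflare's origin pull CA certificate next to our Terraform files
curl -o origin-pull-ca.pem https://developers.cloudflare.com/ssl/static/authenticated_origin_pull_ca.pem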

The certificate then needs to be mounted into the nginx container so it can be found:

volumes:
  [...]
  - ${PWD}/origin-pull-ca.pem:/etc/nginx/certs/origin-pull-ca.pem
web-cloud-init.yaml (full source)

We can now test our deployment with terraform apply. The end result should look the same as in the first part of our tutorial, but all communication between the browser and the origin is now fully encrypted and secured.
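
To convince yourself that the origin now rejects unauthenticated traffic, you can hit the droplet directly, bypassing Cloudflare (the IP and hostname below are placeholders). With the Cloudflare-only firewall rule in place the connection will simply time out - itself a good sign; if you temporarily allow your own IP, Nginx should instead refuse the request because no client certificate was sent:

# Hit the origin directly, without Cloudflare's client certificate
curl -vk https://203.0.113.10/ -H "Host: www.example.com"
# Expected: 400 Bad Request - "No required SSL certificate was sent"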

Photo by Aman Dhakal

Conclusion

In this chapter of our tutorial, we have upgraded the initial setup so that security is fully enforced and automated from the end user to our origin server. In future episodes, we will work on templatizing the configuration for other features of our blog. We are definitely getting closer to a fully automated setup, which is quite exciting!

Let me know how it goes for you and feel free to contribute to the repository if you have suggestions or comments!