Welcome to my first public experiment with cattle computing, using DigitalOcean droplets as hosts. Given that every program is derived from Hello World, and a blog is the Django equivalent, here we are. While developing a Django blog is a trivial exercise, I am interested in learning the tools and techniques required to maintain and update a Django application over the long term.
Objectives
- Develop practical experience with the DevOps 'cattle computing' approach
- Use docker-compose for packaging and deploying a multi-container system
- Use Docker to containerize individual application components
- Develop practical experience with secrets management
- Manage DigitalOcean droplets via API
- Enable trivial rollback of a failed deployment
Assert
- A single-host, low-traffic site; not Amazon or Google
- Single docker-compose deployment per host
- Secrets management
  - Not placed in SCM
  - Maintained in a file on the dev host, copied to the droplet for the container build, and then deleted
  - Baked into the containers, because the containers are never uploaded to a registry
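The secrets lifecycle above can be sketched as follows. This is a minimal local illustration (the file name secrets.env and the deploy/ build-context directory are hypothetical); the real deployment copies the file to the droplet over SSH and runs the container build in between:

```python
import shutil
from pathlib import Path

def build_and_scrub(secrets: Path, build_ctx: Path) -> None:
    """Stage the secrets file for the build, then make sure it is deleted."""
    build_ctx.mkdir(parents=True, exist_ok=True)
    staged = build_ctx / secrets.name
    shutil.copy(secrets, staged)   # in production: copy to the droplet
    try:
        pass                       # docker-compose build would run here,
                                   # baking the secrets into the images
    finally:
        staged.unlink()            # the copy never persists after the build

# Demo with a throwaway file:
src = Path("secrets.env")
src.write_text("EMAIL_PASSWORD=example\n")
build_and_scrub(src, Path("deploy"))
print((Path("deploy") / "secrets.env").exists())  # prints: False
```

The try/finally guarantees the staged copy is removed even if the build step fails.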
Design
The application consists of five Docker containers: 1) app, 2) Nginx, 3) Postfix relay, 4) Letsencrypt, and 5) utility.
The app container contains the Django application, the Gunicorn WSGI server, the SQLite database, and the backup and restore scripts. The restore script runs the Django collectstatic and migrate commands.
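A sketch of what that restore step might look like; the script itself isn't shown in this post, so the manage.py location and the dry-run wrapper are assumptions, while collectstatic --noinput and migrate --noinput are the standard non-interactive forms of the Django commands:

```python
import subprocess

def post_restore(manage_py="manage.py", dry_run=True):
    """After restoring the SQLite file, rebuild static assets and apply migrations."""
    cmds = [
        ["python", manage_py, "collectstatic", "--noinput"],
        ["python", manage_py, "migrate", "--noinput"],
    ]
    for cmd in cmds:
        if dry_run:
            print(" ".join(cmd))           # show the command instead of running it
        else:
            subprocess.run(cmd, check=True)
    return cmds

cmds = post_restore()
```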
The Nginx container provides the Nginx reverse-proxy and static file serving functions. When deployed in production, mandatory HTTPS is provided using certificates maintained by the Letsencrypt container and exchanged via a shared volume.
The Postfix relay relays all VPS email via a Gmail account.
The Letsencrypt container contains the certbot utility and four scripts: 1) backup, 2) restore, 3) new, and 4) renew. Certificates are shared with Nginx via a shared volume.
The utility container provides everything that doesn't fit in another container: for example, the master backup and restore scripts, and the cronjob scripts for the daily backup and the weekly certbot renew.
Three docker-compose files are used to separate development and production settings:
- docker-compose-common.yaml
- docker-compose-develop.yaml
- docker-compose-production.yaml
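At run time the common file is stacked with one of the environment files via docker-compose's -f flag, which merges later files over earlier ones. A dry-run sketch of how the invocation is assembled (the helper below is illustrative, not part of the project):

```python
import subprocess

def compose_cmd(env, action=("up", "-d", "--build")):
    """Stack the common compose file with the per-environment override."""
    assert env in ("develop", "production")
    return [
        "docker-compose",
        "-f", "docker-compose-common.yaml",
        "-f", f"docker-compose-{env}.yaml",
        *action,
    ]

cmd = compose_cmd("develop")
print(" ".join(cmd))
# To execute for real: subprocess.run(cmd, check=True)
```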
Feature/Release Plan
The current release plan moves from a minimal release to progressively more complex releases that modify the database, add new components, and modify existing components.
V 1.0.0
- Minimal viable product
V 1.1.0
- Add tag-based navigation of posts, snippets, etc.
- The tag application has already been added, and tag fields are added to the relevant models.
- No db migration is required to deploy
V 1.2.0
- Add file and image management. Insert images into blog posts and attach files to blog posts.
- Requires a db migration to add new tables to db
- Requires a modification of Nginx configuration to serve images and files
V 1.3.0
- Add comments with anti-spam protection to blog posts
- Requires adding a background anti-spam checking task
- Requires adding a third-party service for comment checking
Results
This exercise proved to be a valuable and rewarding learning experience. While no single step was tough, integrating everything into a cohesive development toolchain took significant effort.
The Good
Cattle Computing: DigitalOcean provides a Python library for interacting with their API, which makes creating, querying, and deleting droplets trivial. Furthermore, the desired droplet configuration is documented in an SCM-versioned YAML file, so a consistent configuration is guaranteed.
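The provisioning code isn't shown in this post, so as an illustration here is an equivalent droplet-create call made directly against the DigitalOcean v2 REST API using only the standard library. The spec values are hypothetical stand-ins for the versioned YAML file, and with no DO_TOKEN set the function just returns the payload it would send:

```python
import json
import os
import urllib.request

# Hypothetical droplet spec; in the real setup this comes from the
# SCM-versioned YAML configuration file.
DROPLET_SPEC = {
    "name": "blog-droplet",
    "region": "nyc3",
    "size": "s-1vcpu-1gb",
    "image": "ubuntu-20-04-x64",
    "user_data": "#cloud-config\npackages: [docker.io]\n",
}

def create_droplet(spec, token):
    payload = json.dumps(spec).encode()
    if token is None:
        return payload                 # dry run: show what would be sent
    req = urllib.request.Request(
        "https://api.digitalocean.com/v2/droplets",
        data=payload,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # POST, since data is set
        return json.load(resp)

body = create_droplet(DROPLET_SPEC, os.environ.get("DO_TOKEN"))
```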
Docker: Docker provides an easy way to containerize individual applications while maintaining the build instructions in an SCM-versioned Dockerfile. Some containers, like the Postfix relay, are available off-the-shelf and require minimal effort on our part to use in our application.
Docker Compose: Docker-compose provides an easy way to coordinate and document a multi-container application.
Trivial Rollback: If the deployment of a new version fails, we simply leave the original droplet intact and destroy the new droplet after troubleshooting.
The Bad
Host Postfix: A running Postfix server is required on the host to relay host emails (e.g. cron emails) to the SMTP relay Docker container. I was unable to configure the cloud-config file to perform a non-interactive installation of Postfix, and currently have to install it manually on each new droplet.
The Ugly
Docker Compose: We have to copy the entire docker-compose source tree to the remote host so that docker-compose can run commands (e.g. backup) against the deployed containers.
Secrets: We currently have to copy our secrets file to the remote host to build the containers, and then delete it once the build is complete.
Future Work
- Figure out how to perform a non-interactive install of Postfix via the cloud-init.yaml file
- Reconfigure the SSH client and server to permit sending secrets over the session instead of copying the file
- Figure out how to create cronjobs that can run commands against containers (e.g. backup) without requiring docker-compose and its dependent files
- Figure out how to build containers on the development host and then deploy the built containers to the remote host