Modern DevOps with Django
Source: https://peakwinter.net/blog/modern-devops-django/
The primary objectives for our app's DevOps stack:
Allow for CI. The chief benefit here is clarity: the information a CI pipeline can provide when properly integrated with external tools for measuring test coverage and code quality.
Allow for CD. This means far less stress around major releases and fewer bugs making it to production.
Simplify our hosting system. Servers can be a pain to manage. Using containers or VMs for our deployments can make a major impact here, because the operating environment is treated as if it were the same as the application itself, undergoing the same tests and idempotent build processes.
We will be deploying our Django application using a Docker container.
Docker allows us to easily create clean, pre-installed images of our application in an isolated state, like a binary application build, rather than having to worry about virtual environments and system packages of whatever server we are deploying to.
This build can then be tested and deployed as if it were an isolated artifact in and of itself.
Our container can be grouped with other dependent services (databases, memory caches, etc.) in a docker-compose.yml file.
Using an advanced hosting mechanism like Docker Swarm, we can then deploy our entire application as a “stack” with the push of a button.
For the pipeline itself, we use Gitlab CI: an integrated, job-based testing and deployment system.
In our specific setup, on each push (commit or merge):
to the develop branch: build the app and run the test suite
to the master branch: build the app, run the test suite, and deploy to staging if successful. Tagged commits pushed to master will be deployed to production instead (sketched below).
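Roughly, these rules translate into Gitlab CI only/except clauses along these lines; the job names are placeholders, not taken from the original configuration:

```yaml
# Sketch of how the branch rules above map onto Gitlab CI jobs;
# job names are placeholders and script sections are omitted for brevity.
test:
  only:
    - develop
    - master

deploy_staging:
  only:
    - master
  except:
    - tags        # tagged commits skip staging and go straight to production

deploy_production:
  only:
    - tags
```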
In my particular setup, containers are stored in the Gitlab Container Registry, tagged with their branch name. That way, I can keep each version of my application and each branch separate, for re-downloading and testing later if need be.

Our Dockerfile does the following:
Take a base image (Python 3.6 installed on a thin copy of Alpine Linux)
Install everything our application needs to run (requirements.txt)
Set a default command to use - this is the command that will be executed each time our container starts up in production
Check for any pending migrations, run them
Start up our uWSGI server to make our app available to the Internet. It's safe to run migrations automatically here, because if any of them fail after our automatic deployment to staging, we can recover and make the necessary changes before we tag a release and deploy to production.
Example: build a container with the necessary dependencies for things like image uploads, as well as connections to a PostgreSQL database.
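A minimal sketch of such a Dockerfile might look like the following; the base image tag, package list, and paths are assumptions rather than the original file:

```dockerfile
# Sketch of a Dockerfile along the lines described above; package names,
# paths, and the uWSGI module name are assumptions.
FROM python:3.6-alpine

# System libraries needed to build Pillow (image uploads), psycopg2 (PostgreSQL)
# and uWSGI on Alpine
RUN apk add --no-cache build-base linux-headers jpeg-dev zlib-dev postgresql-dev

# Install Python dependencies first so Docker can cache this layer
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Copy in the application source
COPY . /app
WORKDIR /app

# Default command: apply any pending migrations, then start uWSGI
CMD ["sh", "-c", "python manage.py migrate && uwsgi --http :8000 --module myapp.wsgi"]
```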
We can now build our application with docker build -t myapp . and run it with docker run -it myapp.
In practice, though, we are going to use Docker Compose for our development environment. The Docker Compose configuration below is sufficient for development, and will serve as a base for our configurations in staging and production, which can include things like Celery workers and monitoring services.
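Something along these lines, for example (service names, image tags, paths, and credentials are assumptions):

```yaml
# docker-compose.yml -- a development sketch; image tags, paths, and
# environment values are assumptions.
version: "3"

services:
  app:
    build: .
    # Run Django's dev server instead of uWSGI so we get auto-reloading
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app            # bind the local source tree into the container
    ports:
      - "8000:8000"
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:9.6
    environment:
      - POSTGRES_PASSWORD=postgres   # development-only credentials

  redis:
    image: redis:3.2
```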
This is a pretty basic configuration:
we set a startup command for our app (similar to the entrypoint in our Docker container, except this time we run Django's internal dev server instead)
we initialize PostgreSQL and Redis containers that will be linked with it
Note that the volumes line in our app service binds the current source directory on our host machine to the installation folder inside the container. That way we can make changes to the code locally and still use the automatic reloading feature of the Django dev server.
Now, run docker-compose up, and our Django application will be listening on port 8000, just as if we were running it from a virtualenv locally.
This config is perfectly suitable for developer environments: all anyone needs to do to get started with the exact same environment as you is clone the Git repository and run docker-compose up.
For testing your application, whether that's on your local machine or via Gitlab CI, I've found it's helpful to create a clone of this docker-compose.yml configuration and customize the command directive to instead run whatever starts your test suite.
In my case, I use the Python coverage library, so I have a second file called docker-compose.test.yml which is exactly the same as the first, save that the command directive has been changed to:
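Something like this, for example (the exact coverage invocation is an assumption):

```yaml
# Excerpt from docker-compose.test.yml -- the command directive under the
# app service; the exact coverage invocation is an assumption.
command: sh -c "coverage run manage.py test && coverage report"
```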
Then, I run my test suite locally with docker-compose -p test -f docker-compose.test.yml up.
For production and staging environments, I do the same thing: duplicate the file with the few changes I need for that particular environment. In this case, for production, I don't want to provide a build path; I want to tell Docker to pull my application from the container registry each time it starts up. To do so, remove the build directive and add an image one like so:
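For example (the registry path and tag are placeholders):

```yaml
# Excerpt from docker-compose.prod.yml -- a sketch; the registry path is
# a placeholder.
services:
  app:
    # Pull the pre-built image from the registry instead of building locally
    image: registry.gitlab.example.com/mygroup/myapp:prod
```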
Next, create a .gitlab-ci.yml file, which contains all of the instructions that Gitlab needs to properly set up our testing and deployment pipeline.
I’m going to walk through this configuration step-by-step, so we can get a better grasp of what’s going on.
First, we declare the stages of our pipeline. Each job we set up in our Gitlab CI pipeline corresponds to one of these stages, so we can control which jobs get executed concurrently and at which point the pipeline stops if it encounters a problem.
We also set up a few handy variables here that we will reference later (the stages and variables sections are sketched after this list):
CONTAINER_IMAGE: the path to our repository on the Gitlab Container Registry
CONTAINER_TEST_IMAGE: the name of the image and tag for the branch we are running the pipeline on
DEPLOY_SERVER_URL: the name of one of our Docker Swarm master nodes that we will connect to via SSH
DEPLOY_PATH: the path on the server to deploy our Docker configurations to
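A sketch of the top of such a .gitlab-ci.yml; the stage names and values are assumptions based on the jobs described below:

```yaml
# Top of .gitlab-ci.yml -- a sketch; stage names and variable values are
# assumptions.
stages:
  - build
  - test
  - release
  - deploy

variables:
  CONTAINER_IMAGE: registry.gitlab.example.com/mygroup/myapp
  CONTAINER_TEST_IMAGE: registry.gitlab.example.com/mygroup/myapp:$CI_COMMIT_REF_NAME
  DEPLOY_SERVER_URL: swarm-master.example.com
  DEPLOY_PATH: /srv/myapp
```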
We choose a Shell executor for Gitlab CI because that is the quickest and easiest way to build and work with Docker containers from the outside. This may not be a suitable option for everyone.
Here we execute three commands (the build job is sketched below):
log in to the Gitlab Container Registry with a custom token (you'll see this command a lot)
build our container and tag it with the current branch name
push this tagged container to our registry
At this point, we can now download a copy of our application at the state of this branch using this tag.
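A sketch of what the build job might look like; the registry host and the use of the built-in CI job token are assumptions:

```yaml
# Build job -- a sketch; the registry host is a placeholder.
build:
  stage: build
  script:
    # Log in to the Gitlab Container Registry using the CI job token
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.example.com
    # Build the image and tag it with the current branch name
    - docker build -t $CONTAINER_TEST_IMAGE .
    # Push the tagged image to the registry
    - docker push $CONTAINER_TEST_IMAGE
```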
Here we have two jobs for testing (both are sketched below):
Analyzing code smells and style, using the Code Climate analyzer. This runs a variety of checks, including for cyclomatic complexity and PEP 8 compliance.
Running the unit test suite that comes with our Django application. If any tests fail, the job will fail and we will be able to see a readout of what went wrong on our Gitlab site.
Each time we perform an operation with our container, we pull the container at the state it was during our build step, using the tag name assigned to it. This way we are always testing the container we have already built, rather than the application code alone.
Since these two jobs are in the same stage, they run concurrently, and the pipeline will not advance to the next stage unless they both pass.
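Sketches of the two test-stage jobs; the coverage command and the Code Climate invocation are assumptions:

```yaml
# Test-stage jobs -- sketches; job names and commands are assumptions.
test:
  stage: test
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.example.com
    # Pull the image built earlier in this pipeline and run the test suite inside it
    - docker pull $CONTAINER_TEST_IMAGE
    - docker run $CONTAINER_TEST_IMAGE sh -c "coverage run manage.py test && coverage report"

codequality:
  stage: test
  script:
    # Run the Code Climate CLI against the checked-out source tree
    - docker run --rm --env CODECLIMATE_CODE="$PWD" -v "$PWD":/code -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/cc:/tmp/cc codeclimate/codeclimate analyze
```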
If we made it past our testing stage, we release the build to production (or staging, as the case may be). This job merely takes the passing container, tags it with either “prod” or “staging” so that we can match the state of our production or staging services at any given time, then pushes that tag to our container registry.
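A sketch of the release job for the tagged (production) case; the staging variant would tag the image as “staging” and run for non-tagged pushes to master instead:

```yaml
# Release job -- a sketch; the staging variant would use :staging.
release:
  stage: release
  only:
    - tags
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.example.com
    # Re-tag the already-tested branch image as the production image
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_IMAGE:prod
    - docker push $CONTAINER_IMAGE:prod
```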
Deploying can be done in a variety of ways.
The same idea can also be used for a deployment to any Docker server: just replace the docker stack deploy with a docker-compose up and the same basic concept holds true.
In order to properly authenticate with our server, Gitlab CI needs to know where to find an SSH private key. You can set this up as a secret variable within your Gitlab CI repository itself. Then, as we see in the before_script section, we do some magic that tells Gitlab to take the value of that secret variable and insert it into our container as an SSH private key file.
Our script section is very minimal, since all we are doing is copying over our Docker Compose configuration file, then telling the Docker daemon on our server to run it. If you are using Docker Swarm and the docker stack deploy command, this one command will intelligently restart different components of the stack if their configurations have changed or if there are newer versions of their images available on our container registry (which is always the case here, since we just submitted a new release to it!).
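A sketch of the deploy job; the secret variable name (DEPLOY_KEY), the SSH user, file names, and the stack name are assumptions:

```yaml
# Deploy job -- a sketch; DEPLOY_KEY, the SSH user, file names, and the
# stack name are assumptions.
deploy_production:
  stage: deploy
  only:
    - tags
  before_script:
    # Write the SSH private key from the DEPLOY_KEY secret variable to disk
    - mkdir -p ~/.ssh
    - echo "$DEPLOY_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - ssh-keyscan $DEPLOY_SERVER_URL >> ~/.ssh/known_hosts
  script:
    # Copy the production Compose configuration to the Swarm master...
    - scp docker-compose.prod.yml root@$DEPLOY_SERVER_URL:$DEPLOY_PATH/
    # ...and deploy (or update) the stack from it
    - ssh root@$DEPLOY_SERVER_URL "docker stack deploy -c $DEPLOY_PATH/docker-compose.prod.yml myapp"
```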
The app builds as a Docker container, and these containers are stored using the Gitlab Container Registry, which turns your Gitlab instance into a full-featured Docker registry.
This guide assumes you have CI enabled on your Gitlab instance of choice, and have set up Shell and Docker runners on an external server. Doing so is beyond the scope of this guide, but there are guides online if you need help!
In this case, it is done by SSHing to a master node in my Docker Swarm cluster, copying over the Compose configurations, then deploying them as a stack.