Modern DevOps with Django

Source: https://peakwinter.net/blog/modern-devops-django/

The Idea

The primary objectives for our app's DevOps stack:

  • Allow for CI. Clarity: a CI pipeline, properly integrated with external tools for measuring test coverage and code quality, tells us exactly how healthy each change to the codebase is.

  • Allow for CD. Far less stress around major releases, and fewer bugs that make it to production.

  • Simplify our hosting system. Servers can be a pain to manage. Using containers or VMs for our deployments makes a major impact here, because the operating environment is treated as part of the application itself - undergoing the same tests and the same idempotent build process.

The Stack

Docker

We will be deploying our Django application using a Docker container.

Docker allows us to easily create clean, pre-installed images of our application in an isolated state, like a binary application build, rather than having to worry about virtual environments and system packages of whatever server we are deploying to.

This build can then be tested and deployed as an isolated artifact in its own right.

Our container can then be grouped with its dependent services (databases, memory caches, etc.) in a docker-compose.yml file.

Using an advanced hosting mechanism like Docker Swarm or Kubernetes, we can then deploy our entire application as a “stack” with the push of a button.
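
For example, once a Swarm is initialized, the entire stack described by a Compose file can be brought up (or updated in place) with a single command, where myapp is just a stack name of our choosing:

# Deploy every service in the Compose file as one named stack.
docker stack deploy -c docker-compose.yml myapp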

Gitlab CI

Gitlab CI is an integrated, job-based pipeline system for testing and deployment.

In our specific setup, on each push (commit or merge):

  • to the develop branch: build the app and run the test suite

  • to the master branch: build the app, run the test suite, and deploy to staging if successful. Tagged commits pushed to master will be deployed to production instead.

The app is built as a Docker container, and these containers are stored using the Gitlab Container Registry, which turns your Gitlab instance into a full-featured Docker registry (like hub.docker.com).

In my particular setup, containers are pushed to the registry tagged with their branch name. That way, I can keep each version of my application and each branch separate, ready to re-download and test later if need be.
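
Roughly, that per-branch tagging looks like this, with develop as an example branch:

# Build the image, tag it with the branch name and push it to the registry.
docker build -t registry.gitlab.com/pathto/myapp:develop .
docker push registry.gitlab.com/pathto/myapp:develop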

The Configuration

Dockerfile

  1. Take a base image (Python 3.6 installed on a thin copy of Alpine Linux)

  2. Install everything our application needs to run (requirements.txt)

  3. Set a default command - this is the command that will be executed each time our container starts up in production, and it covers the next two steps:

  4. Check for any pending migrations and run them

  5. Start up our uWSGI server to make our app available to the Internet. Running migrations automatically is safe here because a failing migration would surface first in our automatic deployment to staging, giving us a chance to recover and make the necessary changes before we tag a release and deploy to production.

Example: build a container with the dependencies needed for things like image uploads, as well as connections to a PostgreSQL database.

Dockerfile
FROM python:3-alpine3.6

ENV PYTHONUNBUFFERED=1

RUN apk add --no-cache linux-headers bash gcc \
    musl-dev libjpeg-turbo-dev libpng libpq \
    postgresql-dev uwsgi uwsgi-python3 git \
    zlib-dev libmagic

WORKDIR /site
COPY ./ /site
RUN pip install -U -r /site/requirements.txt
CMD python manage.py migrate && uwsgi --ini=/site/uwsgi.ini
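
The CMD above expects a uwsgi.ini alongside the code; the exact contents depend on your project, but a minimal sketch might look something like this (the module path and worker count are placeholders to adjust):

uwsgi.ini
[uwsgi]
; Load the Python 3 plugin (uWSGI is installed from Alpine's packages here).
plugins = python3
; Speak HTTP directly on port 8000; put a proper proxy in front for real traffic.
http-socket = :8000
; Where the code lives inside the container, per the Dockerfile above.
chdir = /site
; Placeholder module path - adjust to your project's wsgi.py.
module = myapp.wsgi:application
master = true
processes = 4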

Docker Compose configuration

We can now build our application with docker build -t myapp . and run it with docker run -it myapp.

In practice, though, we are going to use Docker Compose for our development environment. The Docker Compose configuration below is sufficient for development, and will serve as a base for our staging and production configurations, which can add things like Celery workers and monitoring services.

docker-compose.yml
version: '3'

services:
  app:
    build: ./
    command: bash -c "python3 manage.py migrate && python3 manage.py runserver 0.0.0.0:8000"
    volumes:
      - ./:/site:rw
    depends_on:
      - postgresql
      - redis
    environment:
      DJANGO_SETTINGS_MODULE: myapp.settings.dev
    ports:
      - "8000:8000"

  postgresql:
    restart: always
    image: postgres:10-alpine
    volumes:
      - ./.dbdata:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: myapp
      POSTGRES_DB: myapp

  redis:
    restart: always
    image: redis:latest

This is a pretty basic configuration:

  1. Set a startup command for our app (similar to the CMD in our Dockerfile, except this time we run Django's internal dev server instead)

  2. Initialize PostgreSQL and Redis containers that will be linked with it.

Note that the volumes line in our app service binds the current source directory on our host machine to the installation folder inside the container. That way we can make changes to the code locally and still use the automatic reloading feature of the Django dev server.

Now, run docker-compose up, and our Django application will be listening on port 8000, just as if we were running it from a virtualenv locally.

This config is perfectly suitable for developer environments: all anyone needs to do to get started with the exact same environment as you is clone the Git repository and run docker-compose up.
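
Concretely, onboarding looks something like this (the repository URL is just a placeholder):

# Clone the project and bring up the entire development stack.
git clone https://gitlab.com/pathto/myapp.git
cd myapp
docker-compose up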

Testing and Production

Testing

For testing your application, whether that’s on your local machine or via Gitlab CI, I’ve found it’s helpful to create a clone of this docker-compose.yml configuration and customize the command directive to instead run whatever starts your test suite.

In my case, I use the Python coverage library, so I have a second file called docker-compose.test.yml which is exactly the same as the first, except that the command directive has been changed to:

command: bash -c "coverage run --source='.' manage.py test myapp && coverage report"

Then, I run my test suite locally with docker-compose -p test -f docker-compose.test.yml up (the -p test flag gives it a separate project name, so its containers don't collide with my development ones).
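
For reference, the full docker-compose.test.yml then looks like this:

docker-compose.test.yml
version: '3'

services:
  app:
    build: ./
    # The only change from docker-compose.yml: run the test suite under coverage.
    command: bash -c "coverage run --source='.' manage.py test myapp && coverage report"
    volumes:
      - ./:/site:rw
    depends_on:
      - postgresql
      - redis
    environment:
      DJANGO_SETTINGS_MODULE: myapp.settings.dev
    ports:
      - "8000:8000"

  postgresql:
    restart: always
    image: postgres:10-alpine
    volumes:
      - ./.dbdata:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: myapp
      POSTGRES_DB: myapp

  redis:
    restart: always
    image: redis:latest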

Production and Staging

For production and staging environments, I do the same thing: duplicate the file with the few changes I need for that particular environment. In this case, for production, I don't want to provide a build path; instead I want to tell Docker to pull my application from the container registry each time it starts up. To do so, remove the build directive and add an image one like so:

image: registry.gitlab.com/pathto/myapp:prod
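
So the app service in, say, a docker-compose.prod.yml starts out like this (the production settings module name is just a placeholder):

docker-compose.prod.yml (excerpt)
version: '3'

services:
  app:
    # No build directive: always pull the released image from the registry.
    image: registry.gitlab.com/pathto/myapp:prod
    environment:
      # Placeholder - point this at your production settings module.
      DJANGO_SETTINGS_MODULE: myapp.settings.production

  # postgresql, redis and any extras (Celery workers, monitoring services)
  # follow as in the other Compose files.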

Continuous Integration and Delivery

Create a .gitlab-ci.yml file, which contains all of the instructions Gitlab needs to set up our testing and deployment pipeline.

This guide assumes you have CI enabled on your Gitlab instance of choice, and have set up Shell and Docker runners on an external server. Doing so is beyond the scope of this guide, but there are plenty of walkthroughs online if you need help!

I’m going to walk through this configuration step-by-step, so we can get a better grasp of what’s going on.

.gitlab-ci.yml
stages:
  - build
  - test
  - release
  - deploy

variables:
  CONTAINER_IMAGE: registry.gitlab.com/pathto/myapp
  CONTAINER_TEST_IMAGE: $CONTAINER_IMAGE:$CI_BUILD_REF_NAME
  DEPLOY_SERVER_URL: myserver.example.com
  DEPLOY_PATH: /var/data/myapp

Each job we set up in our Gitlab CI pipeline will correspond to one of these stages, so we can control what jobs get executed concurrently and at which point the pipeline stops if it encounters a problem.

We also set up a few handy variables here that we will reference later:

  • CONTAINER_IMAGE: the path to our repository on the Gitlab Container Registry

  • CONTAINER_TEST_IMAGE: the name of the image and tag for the branch we are running the pipeline on

  • DEPLOY_SERVER_URL: the name of one of our Docker Swarm master nodes that we will connect to via SSH.

  • DEPLOY_PATH: the path on the server to deploy our Docker configurations to.

Now, we start our pipeline with the build step:

.gitlab-ci.yml
build:
  stage: build
  tags:
    - shell
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

We choose a Shell executor for Gitlab CI because that is the quickest and easiest way to build and work with Docker containers from the outside. This may not be a suitable option for everyone.

Here we execute 3 commands:

  1. log in to the Gitlab Container Registry using the CI job token (you'll see this command a lot)

  2. build our container and tag it with the current branch name

  3. push this tagged container to our registry

At this point, anyone with access can download a copy of our application at the state of this branch by pulling this tag.
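
For example, the image built from the develop branch can be pulled again later with:

# Authenticate against the registry, then pull the image for a given branch.
docker login registry.gitlab.com
docker pull registry.gitlab.com/pathto/myapp:develop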

Now we get into the testing jobs:

.gitlab-ci.yml
codequality:
  stage: test
  tags:
    - shell
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker pull codeclimate/codeclimate
    - docker run --env CODECLIMATE_CODE="$PWD" --volume "$PWD":/code --volume /var/run/docker.sock:/var/run/docker.sock --volume /tmp/cc:/tmp/cc codeclimate/codeclimate analyze -f json > codeclimate.json
  artifacts:
    paths: [codeclimate.json]

test:
  stage: test
  tags:
    - shell
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker pull $CONTAINER_TEST_IMAGE
    - docker-compose -f docker/compose.ci.yml -p ci up --abort-on-container-exit
  coverage: '/TOTAL.*?(\d{1,2}.\d+%)/'

Here we have two jobs for testing:

  1. Analyzing code smells and style, using the Code Climate analyzer. This runs a variety of checks, including cyclomatic complexity and PEP 8 compliance.

  2. Running the unit test suite that comes with our Django application. If any tests fail, the job will fail and we will be able to see a readout of what went wrong on our Gitlab site.

Each time we perform an operation with our container, we pull it in the state it was in at our build step, using the tag name assigned to it. This way we are always testing the container we have already built, rather than the application code alone.

Since these two jobs are in the same stage, they run concurrently, and the pipeline will not advance to the next stage unless they both pass.
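
I haven't reproduced docker/compose.ci.yml in full here; a rough sketch, running the test suite against the image pulled in the previous step, looks something like this:

docker/compose.ci.yml (sketch)
version: '3'

services:
  app:
    # Test the image built in the build stage rather than rebuilding from source.
    image: registry.gitlab.com/pathto/myapp:${CI_BUILD_REF_NAME}
    command: bash -c "coverage run --source='.' manage.py test myapp && coverage report"
    depends_on:
      - postgresql
    environment:
      DJANGO_SETTINGS_MODULE: myapp.settings.dev

  postgresql:
    image: postgres:10-alpine
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: myapp
      POSTGRES_DB: myapp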

.gitlab-ci.yml
release_stg:
  stage: release
  tags:
    - shell
  only:
    - master
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_IMAGE:staging
    - docker push $CONTAINER_IMAGE:staging

If the build made it past our testing stage, we release it to staging (or production, as the case may be). This job merely takes the passing container, tags it with either “prod” or “staging” so that we can match the state of our production or staging services at any given time, then pushes that tag to our container registry.
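
A production counterpart, triggered only by tagged commits, looks much the same; something like:

.gitlab-ci.yml
release_prod:
  stage: release
  tags:
    - shell
  only:
    - tags
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_IMAGE:prod
    - docker push $CONTAINER_IMAGE:prod

A matching deploy_prod job mirrors the deploy_stg job below, pointing at the production environment and Compose file.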

.gitlab-ci.yml
deploy_stg:
  stage: deploy
  tags:
    - docker
  only:
    - master
  environment:
    name: staging
    url: https://staging.example.com
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "$DEPLOY_KEY" | ssh-add -
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - scp docker-compose.staging.yml deploy@$DEPLOY_SERVER_URL:$DEPLOY_PATH/staging/docker-compose.yml
    - ssh deploy@$DEPLOY_SERVER_URL "cd $DEPLOY_PATH/staging && docker stack deploy -c docker-compose.yml --with-registry-auth myapp_staging"

Deploying

Deploying can be done in a variety of ways.

In this case, it is done by SSHing to a master in my Docker Swarm, copying over the Compose configurations, then deploying them as a Stack.

The same idea can also be used for a deployment to any Docker server — just replace the docker stack deploy with a docker-compose up and the same basic concept holds true.
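
In that case, the last line of the deploy script becomes something like the following (pulling explicitly first, since plain docker-compose will not fetch a newer image on its own):

ssh deploy@$DEPLOY_SERVER_URL "cd $DEPLOY_PATH/staging && docker-compose pull && docker-compose up -d"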

In order to properly authenticate with our server, Gitlab CI needs to know where to find an SSH private key. You can set this up as a secret variable within your Gitlab repository settings (DEPLOY_KEY in the script above). Then, as we see in the before_script section, we take the value of that secret variable and load it into an SSH agent inside the job's container.

Our script section is very minimal, since all we are doing is copying over our Docker Compose configuration file, then telling the Docker daemon on our server to run it. If you are using Docker Swarm and the docker stack deploy command, this one command will intelligently restart different components of the stack if their configurations have changed or if there are newer versions of their images available on our container registry (which is always the case here, since we just submitted a new release to it!).
