Set Up Automatic Deployment, Updates and Backups of Multiple Web Applications with Docker on a Scaleway Server

The purpose of this setup is:

  • Set up multiple web apps with different dependencies on the same server
  • Link all apps to the same MySQL server
  • Manage uploaded files for the web apps in a single place (so they are easy to back up)
  • Automatically deploy and update apps on the remote server
  • Run the same setup locally, so the development environment is very close to production
  • Set up backups for MySQL databases and for uploaded files

In this case I deploy to Scaleway, but the same approach can be used with almost any cloud provider.

Cloud Server

First, you need a cloud server with SSH access, running one of the operating systems supported by Docker Machine. I used Scaleway’s VC1S server with Ubuntu 14.04. You also need Docker Engine and Docker Machine installed locally.
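
If Docker Machine is not installed yet, it ships as a single binary; an install could look like this (the version below is just an example, check the releases page for the current one):

# download the docker-machine binary (the version is an example)
curl -L https://github.com/docker/machine/releases/download/v0.16.2/docker-machine-$(uname -s)-$(uname -m) \
  -o /usr/local/bin/docker-machine
chmod +x /usr/local/bin/docker-machine
docker-machine version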

Docker Machine - set up Docker remotely

The next step is to set up Docker on the server; this is done with the docker-machine create command:

SSH_HOST=111.222.333.44
SSH_USER=root
SSH_KEY=~/.ssh/id_rsa_scaleway

docker-machine create --driver generic \
 --generic-ip-address $SSH_HOST \
 --generic-ssh-user $SSH_USER \
 --generic-ssh-key $SSH_KEY \
 scaleway

To make it work, it is necessary to open port 2375 on the server. On Scaleway this is done via the security group configuration. After changing the security group, the server must be rebooted (stop / run via the Archive option, or Hard reboot).
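
Once the machine is created, it is easy to check that the remote engine is reachable:

docker-machine ls                      # "scaleway" should be listed as Running
eval "$(docker-machine env scaleway)"  # point the local docker client at the server
docker info                            # should print details of the remote engine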

Here I have used the generic Docker Machine driver; there are also specific drivers for popular cloud providers - AWS, DigitalOcean, etc. Note: Scaleway has a Docker Machine plugin; with it you can do even more and automatically launch a new instance during the setup.

Check the full setup script here. On Scaleway I also had to create loopback devices, because the Docker setup failed with [graphdriver] prior storage driver "devicemapper" failed: loopback attach failed.
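
For reference, creating the missing loopback devices can look like this (a sketch to run on the server; the number of devices may need to be adjusted):

# create loopback devices for the devicemapper storage driver (a sketch)
for i in $(seq 0 7); do
  [ -b /dev/loop$i ] || mknod -m 660 /dev/loop$i b 7 $i
done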

If something goes wrong during the setup, run docker-machine rm scaleway, fix the problem and run the setup again.

Deployment setup

This project is responsible for deploying the web applications to the remote host.

Projects layout

There are several web applications which I want to deploy, each packaged into a Docker container.

On the filesystem they reside in the same ~/web folder, along with the web-deploy project, which acts as glue and sets everything up together, including the MySQL container (used by all projects) and HAProxy (which listens on port 80 and forwards requests to the individual containers):

in ~/web $
projecta.com/
   Dockerfile
   config/
   db/
   www/
projectb.com/
   Dockerfile
   config/
   db/
   www/
projectc.com/
   Dockerfile
   app/
   config/
   db/
   ...
web-deploy/

A typical Dockerfile for a PHP application looks like this:

FROM php:5.6-apache

RUN apt-get update && \
  apt-get install -y msmtp wget && \
  apt-get clean

RUN apt-get update && apt-get install -y \
        libfreetype6-dev \
        libjpeg62-turbo-dev \
        libmcrypt-dev \
        libpng12-dev \
    && docker-php-ext-install -j$(nproc) iconv mcrypt \
    && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install -j$(nproc) gd pdo pdo_mysql mysql mysqli

COPY config/msmtprc /etc/msmtprc

RUN echo 'sendmail_path = "/usr/bin/msmtp -t -i"' > /usr/local/etc/php/conf.d/mail.ini
RUN a2enmod expires && a2enmod rewrite

COPY www/ /var/www/html/

VOLUME /var/www/html/files

This container is based on the official php image. To make the PHP mail function work, I also set up msmtp and configure PHP to use it. An example of the msmtp configuration file is here.
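
In case that link is not available, a minimal msmtprc might look like this (the host, user and password are placeholders):

# /etc/msmtprc - a minimal example, all values are placeholders
defaults
auth on
tls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile /var/log/msmtp.log

account default
host smtp.example.com
port 587
from noreply@example.com
user smtp-user
password secret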

On Scaleway, SMTP ports are disabled by default. To make emails work, it is necessary to change the security group (switch “Block SMTP” from On to Off). After changing the security group, the server should be rebooted (stop / run via the Archive option, or Hard reboot).

Here is also an example of a Ruby on Rails application (Redmine) Dockerfile:

# re-use the image which we already use for the PHP projects
FROM php:5.6-apache
MAINTAINER serebrov@gmail.com

RUN DEBIAN_FRONTEND=noninteractive apt-get update \
    && apt-get install -y ruby libruby \
    libapache2-mod-passenger ruby-dev \
    libmysqlclient-dev libmagickcore-dev libmagickwand-dev \
    build-essential \
    apache2 ruby-passenger \
    && gem install bundler

WORKDIR /var/www/redmine/
COPY ./ /var/www/redmine/
COPY config_prod/*.yml /var/www/redmine/config/

RUN bundle install

COPY config_prod/redmine.conf /etc/apache2/sites-available
RUN chown -R www-data:www-data /var/www/redmine
RUN a2enmod passenger
RUN a2ensite redmine

VOLUME /var/www/redmine/files

This container is based on the same official php base image as the PHP applications, just to reuse the already downloaded layers. The Ruby application runs under Apache with mod_passenger.
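
The redmine.conf Apache site copied in the Dockerfile is not shown above; a minimal Passenger vhost might look like this (a sketch, the real file may differ):

# config_prod/redmine.conf - a sketch of the Passenger vhost
<VirtualHost *:80>
    ServerName projectd.com
    # Passenger detects the Rack app from the public/ document root
    DocumentRoot /var/www/redmine/public
    <Directory /var/www/redmine/public>
        Options -MultiViews
        Require all granted
    </Directory>
</VirtualHost>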

The web deploy project

The web-deploy project is the glue that builds and starts the containers for all web projects, links them to MySQL where necessary, and sets up HAProxy to forward requests to each sub-project.

in ~/web $
projecta.com/
projectb.com/
projectc.com/
   ...
web-deploy/
  deploy.sh
  init-docker.sh
  init-db-files.sh
  mysql-cli.sh
  utils.sh
  haproxy/
    Dockerfile
    build.sh
    haproxy.cfg
    mycompany.pem
  mysql/
    dumps/
      load-dumps.sh
      projecta.sql
      projectb.sql
      projectc.sql
    Dockerfile
    build.sh
    init-db.sh
  build/
    projecta.com.sh
    projectb.com.sh
    projectc.com.sh

The top-level scripts are deploy.sh, init-docker.sh, init-db-files.sh, mysql-cli.sh and utils.sh.

The deploy.sh script uses docker-machine to build and run the containers on the remote server. There are several modes it can be used in:

  • ./deploy.sh - deploy (or update) all projects locally
  • ./deploy.sh local projecta.com - deploy only projecta.com locally
  • ./deploy.sh production - deploy (or update) all projects on the remote server
  • ./deploy.sh production projecta.com - deploy only projecta.com on the remote server

The script looks like this:

#!/usr/bin/env bash
set -e

SCRIPT_PATH=`dirname $0`
ENVIRONMENT="local"
if [[ "$1" != "" ]]; then
    ENVIRONMENT=$1
else
    echo "Environment name is not specified, using 'local'."
fi

if [[ "$2" != "" ]]; then
    # deploy only the specified project
    # for example ./deploy.sh production projecta.com
    # will run ./build/projecta.com.sh
    PROJECTS=$SCRIPT_PATH/build/$2.sh
else
    # deploy all projects
    PROJECTS=$SCRIPT_PATH/build/*.sh
fi

echo "Deploy $PROJECTS"
read -p "Do you want to continue? " -n 1 -r
echo    # (optional) move to a new line
if [[ ! $REPLY =~ ^[Yy]$ ]]
then
    exit 1
fi

# add mysql and haproxy, always update them
PROJECTS="$SCRIPT_PATH/mysql/build.sh $PROJECTS $SCRIPT_PATH/haproxy/build.sh"

if [[ "$ENVIRONMENT" == "production" ]]; then
  eval "$(docker-machine env scaleway)"
  docker-machine ip scaleway
  set +a
  export | grep DOCKER
fi

for project in $PROJECTS
do
    echo 'Project: ' $project
    $project $ENVIRONMENT
done

if [[ "$ENVIRONMENT" == "production" ]]; then
  set +a
  export | grep DOCKER
fi

docker ps

Internally, the deploy.sh script goes over the scripts in the build/ sub-folder and executes them to create or update the Docker containers.

The build script looks like this:

#!/usr/bin/env bash
set -e

SCRIPT_PATH=`dirname $0`
APP_NAME=projecta
CONTAINER_NAME=$APP_NAME-docker
PORT=8091
source $SCRIPT_PATH/../utils.sh
SRC=$SCRIPT_PATH/../../projecta.com

# if we are running locally, the container will use the sources from the
# host filesystem (we mount the host folder over /var/www/html);
# for production, the sources are copied into the image during docker build
ENVIRONMENT="local"
if [[ "$1" != "" ]]; then
    ENVIRONMENT=$1
fi
ENV_DOCKER_OPTS=""
if [[ "$ENVIRONMENT" == "local" ]]; then
  ENV_DOCKER_OPTS="-v $(realpath $SRC)/html:/var/www/html"
fi
echo "Environment: $ENVIRONMENT"

# go to the source folder, where Dockerfile is
cd $SRC
# remove the image if it already exists
docker_rm_app_image $APP_NAME
# rebuild the image
docker build -t $CONTAINER_NAME .
# start the container
docker run -d -p $PORT:80 \
    -v /var/web/projecta.com/files:/var/www/html/data/upload \
    --name $APP_NAME --link web-mysql:mysql --restart always \
    $ENV_DOCKER_OPTS $CONTAINER_NAME
# initially the files belong to the root user of the host OS,
# make them available to the container's www-data user
docker exec $APP_NAME sh -c 'chown -R www-data:www-data /var/www/html/files'

A few interesting things happen here:

  • This project uses port 8091 on the host machine (it can be accessed as localhost:8091); this port is also referenced in the HAProxy config to route requests to this app based on the requested domain name (see the local testing example after this list)
  • For local deployment the container uses source files from the host machine folder, so we can edit them directly without having to log in to the container; this is very convenient for local development
  • The web application files are stored in a volume, which is available on the host machine at /var/web/projecta.com/files
  • The problem with permissions on the shared volume is solved by running chown from within the container (Apache runs as www-data, while after creation the files folder belongs to the root user)
  • This build script is used both for local and remote deployment; the remote part is handled by Docker Machine, which allows running docker commands against the remote host (this is handled in the deploy.sh script)
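
For local testing you can either hit the container port directly, or add the test domain to /etc/hosts so the request goes through HAProxy:

# bypass HAProxy and hit the app container directly
curl -I http://localhost:8091
# or route the test domain through HAProxy
echo '127.0.0.1 www.projecta.local' | sudo tee -a /etc/hosts
curl -I http://www.projecta.local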

Note: the docker_rm_app_image function from the build script is defined in the utils.sh script.
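
A possible implementation of this function could look like this (a sketch following the naming conventions above; the real utils.sh may differ):

# a sketch of docker_rm_app_image - the real utils.sh may differ;
# stop and remove the container and its image, if they exist,
# so the following docker build / docker run start from a clean state
docker_rm_app_image() {
    local app_name=$1
    docker stop "$app_name" 2>/dev/null || true
    docker rm "$app_name" 2>/dev/null || true
    docker rmi "$app_name-docker" 2>/dev/null || true
}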

The web-deploy project also includes setup for HAProxy and MySQL containers.

HAProxy setup

This container (which resides in web-deploy/haproxy) runs HAProxy and serves as the main entry point on the server. Requests to port 80 are reverse-proxied to the other containers. The Dockerfile is simple:

FROM haproxy:1.5
COPY ./haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
# this is needed to set up SSL for one of the projects
COPY ./mycompany.pem /etc/ssl/certs/

The configuration file (web-deploy/haproxy/haproxy.cfg) looks like this:

global
    log /dev/log local2
    maxconn     2048
    tune.ssl.default-dh-param 2048
    #debug

defaults
    log     global
    option  dontlognull
    mode http
    option forwardfor
    option http-server-close
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    stats enable
    stats auth haadimn:ZyXuuXYZ
    stats uri /haproxyStats

frontend http-in
    bind *:80

    # Define hosts based on domain names
    acl host_projecta hdr(host) -i projecta.com
    acl host_projecta hdr(host) -i www.projecta.com
    # this name is used for local testing
    acl host_projecta hdr(host) -i www.projecta.local
    acl host_projectb hdr(host) -i projectb.com
    acl host_projectb hdr(host) -i www.projectb.com
    # handle sub-domains (use hdr_end here, not hdr)
    acl host_projectc hdr_end(host) -i projectc.com
    # projectd is also defined here, the acl must exist in this
    # frontend for the redirect rule below to work
    acl host_projectd hdr(host) -i projectd.com

    # redirect projectd to https
    redirect scheme https if host_projectd !{ ssl_fc }

    ## figure out backend to use based on domainname
    use_backend projecta if host_projecta
    use_backend projectb if host_projectb
    use_backend projectc if host_projectc

frontend https-in
    bind *:443 ssl crt /etc/ssl/certs/mycompany.pem

    # Define hosts based on domain names
    acl host_projectd hdr(host) -i projectd.com

    use_backend projectd if host_projectd

backend projecta
    balance roundrobin
    server srv_projecta 127.0.0.1:8091

backend projectb
    balance roundrobin
    server srv_projectb 127.0.0.1:8092

backend projectc
    balance roundrobin
    server srv_projectc 127.0.0.1:8093

backend projectd
    balance roundrobin
    server srv_projectd 127.0.0.1:8100

And the build script (web-deploy/haproxy/build.sh) is this:

#!/usr/bin/env bash

# debug output: show which docker engine we are talking to
export | grep DOCKER
docker-machine ip scaleway

SCRIPT_PATH=`dirname $0`
APP_NAME=web-haproxy
CONTAINER_NAME=web-haproxy-docker
source $SCRIPT_PATH/../utils.sh

pushd $SCRIPT_PATH
docker_rm_app_image $APP_NAME
docker build -t $CONTAINER_NAME .
# note: with --net=host the -p options are ignored, HAProxy binds
# directly to the host's ports 80 and 443
docker run -d --restart always --net=host -p 80:80 -p 443:443 -v /dev/log:/dev/log --name $APP_NAME $CONTAINER_NAME
popd

The HAProxy container is launched with the --net=host option, so it can directly access all the ports exposed by the other containers.
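
Since each application container publishes its port on the host, it is easy to check a backend the way HAProxy sees it (assuming curl is installed on the server):

# backends are published on the host, so HAProxy (with --net=host)
# reaches them at 127.0.0.1; verify from the host:
docker-machine ssh scaleway curl -sI http://127.0.0.1:8091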

HAProxy also handles SSL (HTTPS) connections; for one of the projects it redirects HTTP to HTTPS. See the config file and this post for more information.

The HAProxy stats are available via http://hostname.com/haproxyStats. The login and password are set in the config file (haproxy.cfg, see the stats auth line in the defaults section).

Some hints for debugging problems with the HAProxy setup:

  • Uncomment ‘debug’ in the config file
  • Check docker logs web-haproxy
  • Check the system log (/var/log/syslog); with the configuration above, the HAProxy container logs to the host’s system log (see https://github.com/dockerfile/haproxy/issues/3)
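
It also helps to validate the configuration before redeploying; haproxy has a check-only mode:

# check the config syntax without starting the proxy
docker run --rm -v $(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
  haproxy:1.5 haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg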

MySQL setup

The MySQL container (it is under the web-deploy/mysql folder) also has a simple Dockerfile:

FROM mysql:5.6
ENV MYSQL_ROOT_PASSWORD=myrootpassword

VOLUME /var/lib/mysql

The build script looks like this:

#!/usr/bin/env bash

SCRIPT_PATH=`dirname $0`
APP_NAME=web-mysql
CONTAINER_NAME=$APP_NAME-docker
source $SCRIPT_PATH/../utils.sh

pushd $SCRIPT_PATH
docker_rm_app_image $APP_NAME
docker build -t $CONTAINER_NAME .

docker run -d -v /var/web/mysql-data:/var/lib/mysql --restart always --name $APP_NAME $CONTAINER_NAME
popd

The data folder is mapped to /var/web/mysql-data on the host machine. The app containers which need the database are linked to this container.
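
The mysql-cli.sh script from the layout above most likely just opens a MySQL shell inside this container; a sketch (it reuses the root password from the container's environment):

#!/usr/bin/env bash
# a sketch of mysql-cli.sh: open a mysql shell inside the web-mysql container
docker exec -it web-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"'

Inside a linked app container, the database is reachable via the mysql host alias which --link web-mysql:mysql creates.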

Useful Docker Commands

Some useful commands to view and manage Docker containers:

  • Information about Docker: docker info
  • List running containers: docker ps
  • Information about a specific container: docker inspect {appname}
  • View application logs: docker logs {appname}
  • Restart an application: docker restart {appname}
  • Run a shell inside the container: docker exec -it {appname} bash # replace {appname} with a name from docker ps
  • Run a command inside the container (in this case, back up the MySQL databases): docker exec web-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' | gzip -9 > all-databases.sql.gz

Cron in Docker and Backups on Scaleway

There are a few options to run cron with Docker: run cron on the host machine and use docker exec to execute jobs inside the containers, run cron inside each application container, or run a separate dedicated cron container.

Here I have chosen the first option - cron on the host machine. First, the host machine is Ubuntu 14.04, so it already has cron. Second, everything runs on the same machine and I have no plans to scale out this setup, so it was the easiest option.

The web-deploy/init-db-files.sh script contains the code to set up cron; here is the relevant part:

  docker-machine ssh scaleway apt-get install -y postfix mutt s3cmd
  docker-machine scp -r ./config/web-cron scaleway:/etc/cron.d/
  docker-machine scp -r ./config/.s3cfg scaleway:/root

Postfix and mutt are needed for cron to send local emails about the jobs. You can check these mails by SSH’ing to the server and running mutt. s3cmd is used to back up the databases and the files from the shared storage to S3. The .s3cfg file contains credentials for s3cmd; it can be generated by running s3cmd --configure.

Backups are very important, because at the moment Scaleway does not provide highly reliable hardware (no RAID disks, see this thread). And to make a snapshot, you need to archive the instance (it takes a few minutes), make the snapshot and launch the instance again, so there is a noticeable downtime period. So I used cron to back up the files and databases to Amazon’s S3.

Note: Scaleway has S3-compatible storage, called SIS, but at the moment it is not available (at least in my account). When I try to create a bucket from the command line, it returns S3 error: 400 (TooManyBuckets): You have attempted to create more buckets than allowed. And the note in the Scaleway UI states: “We’re currently scaling up our SIS infrastructure to handle the load generated by all our new users. All new bucket creation are currently forbidden to preserve the service quality for current SIS users.”

The cron file that performs the backups looks like this:

# In the case of problems, cron sends local email, can be checked with mutt
# (it requires apt-get install postfix mutt)
# Also check cron messages in /var/log/syslog
# update geolite database
# Note: docker exec should not have -t (tty) option, there is no tty, -i is also not needed
0 1 * * Sun root docker exec projecta /var/www/service/cron-update-geoip-db.sh
# backup all databases to s3
0 5 * * *   root docker exec web-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' | gzip -9 > all-databases.sql.gz && s3cmd put all-databases.sql.gz s3://myweb-backup/$(date +\%Y-\%m-\%d-\%H-\%M-\%S)-all-databases.sql.gz && rm all-databases.sql.gz
0 6 * * *   root s3cmd sync /var/web s3://myweb-backup/storage/

The command to back up the MySQL databases does a few things:

  • Runs mysqldump inside the MySQL container via docker exec
  • Pipes the result to gzip to compress it
  • Uploads the archive to S3, adding a timestamp to the file name

And since all application containers map their files folders under the host’s /var/web, I simply sync this folder to my S3 bucket.
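
Restoring from such a backup is the reverse path; a sketch (the dump name is just an example):

# fetch a dump from S3 and load it back into the MySQL container (a sketch)
s3cmd get s3://myweb-backup/2016-01-01-05-00-00-all-databases.sql.gz .
gunzip 2016-01-01-05-00-00-all-databases.sql.gz
docker exec -i web-mysql sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' \
    < 2016-01-01-05-00-00-all-databases.sql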

Conclusion

As we can see, Docker can be successfully used not only to isolate application dependencies, but also, together with Docker Machine, to deploy and update applications on a remote server.

I don’t recommend using the approach above for serious production setups, but it can be useful for small / personal projects.

Update: nowadays it may be easier to use Docker swarm mode to build a system like the one described above.
