Set Up Automatic Deployment, Updates and Backups of Multiple Web Applications with Docker on a Scaleway Server

The purpose of this setup is to automate the deployment, updates and backups of multiple web applications hosted on a single server.

In this case I deploy to Scaleway, but the same approach can be used with almost any cloud provider.

Cloud Server

First, you need a cloud server running one of the operating systems supported by Docker Machine, with SSH access. I used Scaleway's VC1S server with Ubuntu 14.04. You also need Docker Engine and Docker Machine installed locally.

Docker Machine - set up Docker remotely

The next step is to set up Docker on the server; this is done with the docker-machine create command:


docker-machine create --driver generic \
 --generic-ip-address $SSH_HOST \
 --generic-ssh-user $SSH_USER \
 --generic-ssh-key $SSH_KEY \
 scaleway

To make it work, it is necessary to open port 2375 on the server. On Scaleway this is done via the security group configuration. After changing the security group, the server must be rebooted (stop / run via the Archive option, or Hard reboot).
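To verify that the port is actually reachable after the reboot, a plain bash TCP probe can be used (a hypothetical helper; SSH_HOST is the server address used in the docker-machine command above):

```shell
#!/usr/bin/env bash
# Returns 0 if the given host accepts TCP connections on the given port.
# Uses bash's built-in /dev/tcp pseudo-device, so no extra tools are needed.
port_open() {
  timeout 3 bash -c "</dev/tcp/$1/$2" 2>/dev/null
}

# Example: check the Docker API port before running docker-machine create:
# port_open "$SSH_HOST" 2375 && echo "port 2375 is reachable"
```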

Here I have used the generic Docker Machine driver; there are also specific drivers for popular cloud providers - AWS, DigitalOcean, etc. Note: Scaleway has a Docker Machine plugin; with it you can do even more and automatically launch a new instance during the setup.

Check the full setup script here; on Scaleway I also had to create loopback devices, because the docker setup failed with [graphdriver] prior storage driver "devicemapper" failed: loopback attach failed.

If something goes wrong during the setup, run docker-machine rm scaleway, fix the problem and run the setup again.

Deployment setup

This project is responsible for deployment of the web applications to the remote host.

Projects layout

There are several web applications which I want to deploy, each packaged into a docker container.

On the filesystem they reside in the same ~/web folder, along with the web-deploy project, which acts as a glue layer and sets everything up together with a MySQL container (used by all projects) and HAProxy (which listens on port 80 and forwards requests to the individual containers):

in ~/web $
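A hypothetical sketch of this layout, using the project names that appear throughout this post:

```
~/web
├── projecta.com/      # php application: Dockerfile, config/, www/
├── projectb.com/
├── projectc.com/
├── redmine/           # ruby-on-rails application (projectd)
└── web-deploy/        # deployment glue: deploy.sh, build/, mysql/, haproxy/
```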

The typical Dockerfile for a php application looks like this:

FROM php:5.6-apache

RUN apt-get update && \
  apt-get install -y msmtp wget && \
  apt-get clean

RUN apt-get update && apt-get install -y \
        libfreetype6-dev \
        libjpeg62-turbo-dev \
        libmcrypt-dev \
        libpng12-dev \
    && docker-php-ext-install -j$(nproc) iconv mcrypt \
    && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
    && docker-php-ext-install -j$(nproc) gd pdo pdo_mysql mysql mysqli

COPY config/msmtprc /etc/msmtprc

RUN echo 'sendmail_path = "/usr/bin/msmtp -t -i"' > /usr/local/etc/php/conf.d/mail.ini
RUN a2enmod expires && a2enmod rewrite

COPY www/ /var/www/html/

VOLUME /var/www/html/files

This container is based on the official php image. To make the php mail function work, I also set up msmtp and configure php to use it. An example of the msmtp configuration file is here.
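For reference, a minimal msmtprc sketch; the host, user and password values below are placeholders, not from the original post:

```
# /etc/msmtprc - system-wide msmtp configuration (placeholder values)
defaults
auth           on
tls            on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile        /var/log/msmtp.log

account        default
host           smtp.example.com
port           587
from           noreply@example.com
user           smtp-user
password       secret
```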

On Scaleway, SMTP ports are disabled by default. To make emails work, it is necessary to configure the security group (switch "Block SMTP" from On to Off). After changing the security group, the server should be rebooted (stop / run via the Archive option, or Hard reboot).

Here is also an example Dockerfile for a ruby-on-rails application (Redmine):

# re-use image which we already use
FROM php:5.6-apache
MAINTAINER serebrov@gmail.com

RUN DEBIAN_FRONTEND=noninteractive apt-get update \
    && apt-get install -y ruby libruby \
    libapache2-mod-passenger ruby-dev \
    libmysqlclient-dev libmagickcore-dev libmagickwand-dev \
    build-essential \
    apache2 ruby-passenger \
    && gem install bundler

WORKDIR /var/www/redmine/
COPY ./ /var/www/redmine/
COPY config_prod/*.yml /var/www/redmine/config/

RUN bundle install

COPY config_prod/redmine.conf /etc/apache2/sites-available
RUN chown -R www-data:www-data /var/www/redmine
RUN a2enmod passenger
RUN a2ensite redmine

VOLUME /var/www/redmine/files

This container is based on the same official php image as the php applications, just to reuse the already downloaded layers. The ruby application runs under apache with mod_passenger.
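The config_prod/redmine.conf file copied in the Dockerfile above is not shown in the post; a minimal Passenger virtual host sketch (the ServerName and paths are assumptions) might look like:

```apache
<VirtualHost *:80>
    ServerName projectd.com
    # Passenger serves the Rails app from the public/ folder
    DocumentRoot /var/www/redmine/public
    <Directory /var/www/redmine/public>
        Options -MultiViews
        Require all granted
    </Directory>
</VirtualHost>
```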

The web deploy project

The web-deploy project is the glue that builds and starts the containers for all web projects, links them to mysql where necessary, and sets up HAProxy to forward requests to each sub-project.

in ~/web $
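A hypothetical sketch of the web-deploy folder, based on the files referenced in this post:

```
web-deploy/
├── deploy.sh           # top-level deployment script
├── utils.sh            # shared helpers (docker_rm_app_image, ...)
├── init-db-files.sh    # host-level setup: cron, backup tools
├── build/              # per-project build scripts
├── mysql/              # Dockerfile + build.sh for the MySQL container
├── haproxy/            # Dockerfile, haproxy.cfg, build.sh
└── config/             # web-cron, .s3cfg, ...
```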

Top level scripts include deploy.sh (builds and deploys the containers), utils.sh (shared helpers) and init-db-files.sh (host-level setup such as cron).

The deploy.sh script uses docker-machine to build and run the containers on the remote server. There are several modes in which it can be used:

- ./deploy.sh - deploy all projects locally;
- ./deploy.sh production - deploy all projects to the remote server;
- ./deploy.sh production projecta.com - deploy only the specified project.

The script looks like this:

#!/usr/bin/env bash
set -e

SCRIPT_PATH=`dirname $0`
if [[ "$1" == "" ]]; then
    echo "Environment name is not specified, using 'local'."
    ENVIRONMENT=local
else
    ENVIRONMENT=$1
fi

if [[ "$2" != "" ]]; then
    # deploy only the specified project
    # for example ./deploy.sh production projecta.com
    # will run ./build/projecta.com
    PROJECTS=$SCRIPT_PATH/build/$2
else
    # deploy all projects
    PROJECTS=$SCRIPT_PATH/build/*
fi

echo "Deploy $PROJECTS"
read -p "Do you want to continue? " -n 1 -r
echo    # (optional) move to a new line
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
    exit 1
fi

# add mysql and haproxy, always update them
PROJECTS="$SCRIPT_PATH/mysql/build.sh $PROJECTS $SCRIPT_PATH/haproxy/build.sh"

if [[ "$ENVIRONMENT" == "production" ]]; then
  # point the local docker client to the remote docker engine
  eval "$(docker-machine env scaleway)"
  docker-machine ip scaleway
  set +a
  export | grep DOCKER
fi

for project in $PROJECTS; do
    echo 'Project: ' $project
    $project $ENVIRONMENT
done

if [[ "$ENVIRONMENT" == "production" ]]; then
  set +a
  export | grep DOCKER
fi

docker ps

Internally, the deploy.sh script goes over the build scripts under the /build sub-folder and executes them to create or update the docker containers.

The build script looks like this:

#!/usr/bin/env bash
set -e

SCRIPT_PATH=`dirname $0`
source $SCRIPT_PATH/../utils.sh

# if we are running locally, the container will use sources from the host filesystem
# (we create the volume pointing to /html)
# for production, we will copy the sources into the container
# the environment name is passed as the first argument, 'local' by default
if [[ "$1" != "" ]]; then
  ENVIRONMENT=$1
else
  ENVIRONMENT=local
fi
if [[ "$ENVIRONMENT" == "local" ]]; then
  ENV_DOCKER_OPTS="-v $(realpath $SRC)/html:/var/www/html"
fi
echo "Environment: $ENVIRONMENT"

# go to the source folder, where Dockerfile is
cd $SRC
# remove the image if it already exists
docker_rm_app_image $APP_NAME
# rebuild the image
docker build -t $CONTAINER_NAME .
# start the container
docker run -d -p $PORT:80 -v /var/web/projecta.com/files:/var/www/html/data/upload --name $APP_NAME --link web-mysql:mysql --restart always $ENV_DOCKER_OPTS $CONTAINER_NAME
# initially files belong to the root user of the host OS
# make them available to the container's www-data user
docker exec $APP_NAME sh -c 'chown -R www-data:www-data /var/www/html/files'

A few interesting things happen here:

- for the local environment the sources are mounted from the host filesystem, while for production they are copied into the image during the build;
- the existing image is removed and rebuilt from scratch on every deploy;
- the container is started with --restart always, so docker brings it back up automatically (for example, after a server reboot);
- the files volume is mapped to the host, and its ownership is changed to the container's www-data user.

Note: the docker_rm_app_image function from the build script is defined in the utils.sh script.
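utils.sh itself is not shown in the post; a minimal sketch of what docker_rm_app_image might do (stop and remove the old container so the image can be rebuilt) could be:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: stop and remove the container with the given name,
# if it exists, so that `docker build` / `docker run` can recreate it.
docker_rm_app_image() {
  local app_name=$1
  # `docker ps -a -q --filter` prints an id only when such a container exists
  if [ -n "$(docker ps -a -q --filter "name=$app_name")" ]; then
    docker stop "$app_name" >/dev/null 2>&1 || true
    docker rm "$app_name"
  fi
}
```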

The web-deploy project also includes setup for HAProxy and MySQL containers.

HAProxy setup

This container (which resides in web-deploy/haproxy) runs HAProxy and serves as the main entry point on the server: requests to port 80 are reverse-proxied to the other containers. The Dockerfile is simple:

FROM haproxy:1.5
COPY ./haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
# this is needed to set up ssl for one of the projects
COPY ./mycompany.pem /etc/ssl/certs/

The configuration file (web-deploy/haproxy/haproxy.cfg) looks like this:

global
    log /dev/log local2
    maxconn     2048
    tune.ssl.default-dh-param 2048

defaults
    log     global
    option  dontlognull
    mode http
    option forwardfor
    option http-server-close
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    stats enable
    stats auth haadimn:ZyXuuXYZ
    stats uri /haproxyStats

frontend http-in
    bind *:80

    # Define hosts based on domain names
    acl host_projecta hdr(host) -i projecta.com
    acl host_projecta hdr(host) -i www.projecta.com
    # this name is used for local testing
    acl host_projecta hdr(host) -i www.projecta.local
    acl host_projectb hdr(host) -i projectb.com
    acl host_projectb hdr(host) -i www.projectb.com
    # handle sub-domains (use hdr_end here, not hdr)
    acl host_projectc hdr_end(host) -i projectc.com

    # redirect projectd to https
    acl host_projectd hdr(host) -i projectd.com
    redirect scheme https if host_projectd !{ ssl_fc }

    ## figure out backend to use based on domainname
    use_backend projecta if host_projecta
    use_backend projectb if host_projectb
    use_backend projectc if host_projectc

frontend https-in
    bind *:443 ssl crt /etc/ssl/certs/mycompany.pem

    # Define hosts based on domain names
    acl host_projectd hdr(host) -i projectd.com

    use_backend projectd if host_projectd

backend projecta
    balance roundrobin
    # example addresses: each application container publishes its port on the
    # host, so the backends point to 127.0.0.1:<published port>
    server srv_pawnshop-soft-ru 127.0.0.1:8001

backend projectb
    balance roundrobin
    server srv_pawnshop-soft-kz 127.0.0.1:8002

backend projectc
    balance roundrobin
    server srv_lombard 127.0.0.1:8003

backend projectd
    balance roundrobin
    server srv_redmine 127.0.0.1:8004

And the build script (web-deploy/haproxy/build.sh) is this:

#!/usr/bin/env bash

export | grep DOCKER
docker-machine ip scaleway

SCRIPT_PATH=`dirname $0`
source $SCRIPT_PATH/../utils.sh

docker_rm_app_image $APP_NAME
docker build -t $CONTAINER_NAME .
docker run -d --restart always --net=host -p 80:80 -p 443:443 -v /dev/log:/dev/log --name $APP_NAME $CONTAINER_NAME

The HAProxy container is launched with the --net=host option, so it can directly access all the ports exposed by the other containers (with host networking, the -p port mappings are effectively ignored).

HAProxy also handles SSL (HTTPS) connections. For one of the projects it redirects http to https. See the config file and this post for more information.

The HAProxy stats are available via http://hostname.com/haproxyStats. Login and password are set in the config file (haproxy.cfg, see the stats auth line in defaults section).

Some hints to debug problems with the HAProxy setup:

MySQL setup

The MySQL container (under the web-deploy/mysql folder) also has a simple Dockerfile:

FROM mysql:5.6

VOLUME /var/lib/mysql

The build script looks like this:

#!/usr/bin/env bash

SCRIPT_PATH=`dirname $0`
source $SCRIPT_PATH/../utils.sh

docker_rm_app_image $APP_NAME
docker build -t $CONTAINER_NAME .

docker run -d -v /var/web/mysql-data:/var/lib/mysql --restart always --name $APP_NAME $CONTAINER_NAME

The data folder is mapped to /var/web/mysql-data on the host machine. The app containers that need the database are linked to this container.

Useful Docker Commands

Some useful commands to view and manage Docker containers:

- docker ps -a - list all containers (running and stopped);
- docker logs <name> - show a container's output;
- docker exec -it <name> bash - open a shell inside a running container;
- docker images - list local images;
- docker stats - show resource usage of running containers.

Cron in Docker and Backups on Scaleway

There are a few options to run cron with docker:

- use cron on the host machine and run in-container commands via docker exec;
- run cron inside each application container next to the main process;
- run a separate, dedicated cron container.

Here I have chosen the first option - cron on the host machine. First, the host machine runs Ubuntu 14.04, so it already has cron. Second, everything runs on the same machine and I have no plans to scale this setup out, so it was the easiest option.

The web-deploy/init-db-files.sh script contains the code to set up cron; here is the relevant part:

  docker-machine ssh scaleway apt-get install -y postfix mutt s3cmd
  docker-machine scp -r ./config/web-cron scaleway:/etc/cron.d/
  docker-machine scp -r ./config/.s3cfg scaleway:/root

postfix and mutt are needed for cron to send local emails about the jobs. You can check these emails by ssh'ing to the server and running mutt. s3cmd is used to back up databases and files from the shared storage to S3. The .s3cfg file contains credentials for s3cmd; it can be generated by running s3cmd --configure.

Backups are very important, because at the moment Scaleway does not provide highly reliable hardware (no RAID disks, see this thread). And to make a snapshot, you need to archive the instance (which takes a few minutes), make the snapshot and launch the instance again, so there is a noticeable downtime period. So I use cron to back up files and databases to Amazon S3.

Note: Scaleway has S3-compatible storage called SIS, but at the moment it is not available (at least in my account). When I try to create a bucket from the command line, it returns S3 error: 400 (TooManyBuckets): You have attempted to create more buckets than allowed. And the note in the Scaleway UI states: "We're currently scaling up our SIS infrastructure to handle the load generated by all our new users. All new bucket creation are currently forbidden to preserve the service quality for current SIS users."

The cron file to perform backups looks this way:

# In the case of problems, cron sends local email, can be checked with mutt
# (it requires apt-get install postfix mutt)
# Also check cron messages in /var/log/syslog
# update geolite database
# Note: docker exec must not use the -t (tty) option here - there is no tty under cron; -i is also not needed
0 1 * * Sun root docker exec projecta /var/www/service/cron-update-geoip-db.sh
# backup all databases to s3
0 5 * * *   root docker exec web-mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' | gzip -9 > all-databases.sql.gz && s3cmd put all-databases.sql.gz s3://myweb-backup/$(date +\%Y-\%m-\%d-\%H-\%M-\%S)-all-databases.sql.gz && rm all-databases.sql.gz
0 6 * * *   root s3cmd sync /var/web s3://myweb-backup/storage/
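One detail worth noting in the crontab above: an unescaped % is special in crontab lines (cron turns it into a newline), which is why the date format uses \%. Outside of a crontab, the same timestamped name would be built without escaping:

```shell
#!/usr/bin/env bash
# Build a timestamped backup name like the cron job does
# (no backslash-escaping of % needed outside of a crontab):
backup_name() {
  echo "$(date +%Y-%m-%d-%H-%M-%S)-all-databases.sql.gz"
}

backup_name
# e.g. 2016-05-01-05-00-00-all-databases.sql.gz
```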

The command to back up the mysql databases does a few things:

- runs mysqldump for all databases inside the web-mysql container, taking the root password from the container's environment;
- compresses the dump with gzip;
- uploads the archive to the S3 bucket, with a timestamp in the file name;
- removes the local archive.

And since all application containers map their files folders under host's /var/web, I simply sync this folder to my S3 bucket.


As we can see, Docker can successfully be used not only to isolate application dependencies but also, together with Docker Machine, to deploy and update applications on a remote server.

I don't recommend using the approach above for serious production setups, but it can be useful for small / personal projects.

Update: nowadays it could be easier to use docker swarm mode to build a system like the one described above.
