This is the second time I have tried to use Docker. The software is very promising, but it can be hard to understand what problem it solves, how it solves it and how to use it. A lot of introductory blog posts explain these points quite well, so in this post I’d rather focus on a global explanation of the setup I made. It will not be deep Docker tinkering — quite the reverse indeed — but it provides explanations of concepts that are split across multiple documentation pages. If you want to get started without five browser tabs of documentation open, you’re at the right place!
Docker, in short
Docker eases software deployment by providing a framework to build, share and retrieve software images and to execute them in restricted environments (called containers). The image of a given application includes its dependencies, so it can run without installing any additional software on the host computer. Images can be composed to run complex systems. The beast runs on Linux and is also coming to Windows Server 2016 and Windows 10.
On a local machine, it also allows one to run multiple versions of a piece of software (e.g. multiple Python installations) that can be used by the host computer.
With the recent release of Docker for Mac (based upon macOS’s native Hypervisor.framework), I decided to give Docker a try. As a small hands-on practice project, I wanted to dockerize an existing PHP application I made for a university course. This is a good use case because the deployment of this kind of software is typically not fully automated: installing and configuring a web server, copying application files (here comes git), provisioning a database and correcting problems that arise on the production machine but not on the dev one because of environment differences.
The quickest (and dirtiest) way to dockerize such an application would be to create a single image with a web server, a database and the app code. It works and solves the problem, but any modification of the code or any software update of one of the servers requires building a new image. It is not very Dockerish.
The web stack: a few tricks
The system I crafted is made of three images: one for the web server, one for the database server and the last one for the app code. The original app was developed against two stacks: Mac OS, Apache, MySQL, PHP (MAMP) and Windows, IIS, SQL Server, PHP (WISP?). The latter stack was actually hosted on Azure Websites with an Azure Database provisioned, so I didn’t have direct control of the underlying Windows & IIS.
Anyway, this time the code is expected to run on the LAMP stack. A Docker image is officially provided for Apache (httpd) but it doesn’t come with PHP bundled. The trick — shown on the Apache image’s documentation page — is to actually pull the PHP image with the apache tag to get an Apache server with PHP installed.
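In a Dockerfile, the trick boils down to a single line (a minimal sketch; the exact tag depends on the PHP version you target):

```dockerfile
# Pull the official PHP image in its Apache variant:
# this gives an Apache server with mod_php already configured.
FROM php:apache
```
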
Here comes a second problem: table name comparison differs between platforms. That is, MySQL on Linux stores and compares table names differently than on Mac OS and Windows. On the latter two, the comparison is not case sensitive, whereas it is on the former. Actually, each system has its own default behavior. My application creates tables in CAPS while consistently using them in lower case. The stack change was therefore a breaking one, and the database creation script had to be modified to create lowercase tables.
Moreover, by default MySQL uses the latin1 encoding for data.
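Both points can be handled with a small MySQL configuration fragment. A sketch, using documented MySQL server options (lower_case_table_names=1 makes name comparison case insensitive; the character-set options switch the default away from latin1):

```ini
# app_mysql.cnf — dropped into /etc/mysql/conf.d/
[mysqld]
# Store table names lowercased and compare them case-insensitively
lower_case_table_names=1
# Use UTF-8 instead of the default latin1 encoding
character-set-server=utf8
collation-server=utf8_general_ci
```
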
Docker Compose: a gourmet explanation
Here is the main dish: the docker-compose file. This file declares all the services (containers to run) and volumes (persistent storage locations) that compose — hence the name — the system we want to run. I’ll use the following file as a basis for further explanations. It’s the same file I ended up with, except that the application code is a single PHP test file instead of my app’s codebase.
version: '2'
services:
  db:
    image: mysql
    restart: always
    depends_on:
      - app
    env_file:
      - ./env
    volumes:
      - db_data:/var/lib/mysql
    volumes_from:
      - app:ro
  web:
    image: lecailliez/laxp:latest
    restart: always
    depends_on:
      - db
      - app
    ports:
      - 5000:80
    environment:
      DB_HOST: db:3306
    env_file:
      - ./env
    volumes_from:
      - app:ro
  app:
    image: lecailliez/demo_blog_post
    volumes:
      - /var/www/html/
      - /docker-entrypoint-initdb.d/
      - /etc/mysql/conf.d/
volumes:
  db_data:
There are three containers declared under the ‘services’ key: db (a MySQL instance), web (Apache + PHP) and app (the app code). One volume (db_data) is declared under the ‘volumes’ key: it will allow database changes to persist across container creation and deletion.
Note that the web server’s port 80 is mapped to port 5000 of the host machine. This is because I already have an Apache server running on my machine. That means the website will be accessible on the host machine at localhost:5000.
There are three points I want to explain more deeply: container communication, container file sharing and container networking.
Containers are created from immutable images. Changes made in a running container can only be saved by creating a new image based on the container’s state. It means, for example, that you cannot change the configuration files of an image and expect the changes to persist across instances. But most software can — or even must — be parametrised, generally at launch.
The Docker way to pass parameters to a container is to use environment variables. In a docker-compose.yml file that can be done in two ways: either directly in the file with the ‘environment’ directive and a list of values, or with the ‘env_file’ directive, which takes a list of paths to key=value files defining the environment variables that will be set in the container.
As you can see in the file, these two ways of providing environment variables are not mutually exclusive and can be used together.
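Side by side, the two forms look like this (a minimal sketch; my-secret-pw is a placeholder value):

```yaml
services:
  db:
    image: mysql
    environment:                         # inline definition
      MYSQL_ROOT_PASSWORD: my-secret-pw
    env_file:                            # key=value file(s)
      - ./env
```
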
Communication by variable is why documentation is of the utmost importance for a Docker image: if you don’t specify the names of the environment variables your image uses to set its configuration values, users cannot guess them and will not be able to make full use of it.
Take MYSQL_ROOT_PASSWORD, from the official MySQL image documentation: this variable is mandatory and specifies the password that will be set for the MySQL root superuser account.
MYSQL_DATABASE, on the other hand, is optional and allows you to specify the name of a database to be created on image startup. If a user/password was supplied (via MYSQL_USER and MYSQL_PASSWORD), then that user will be granted superuser access (corresponding to GRANT ALL) to this database.
My web app’s code needs access to the database. It must be able to get the database connection information (host name, db name, user name, user password) from environment variables. The code needed to be slightly modified to handle this. I declared APP_CC_DB_* keys for the four pieces of database connection information and two APP_CC_ADMIN_* keys to register the admin name and password of the app (see below).
The PHP code then pulls these values from the environment variables using PHP’s getenv().
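A minimal sketch of what that looks like in PHP — the APP_CC_* names are the ones declared in the env file, but the connection code itself is a hypothetical illustration, not my app’s exact code:

```php
<?php
// Read the connection settings injected by docker-compose.
$dbHost = getenv('APP_CC_DB_HOST');   // e.g. "db:3306"
$dbName = getenv('APP_CC_DB_NAME');
$dbUser = getenv('APP_CC_DB_USER');
$dbPass = getenv('APP_CC_DB_PASS');

// The host value bundles host and port, so split it for the DSN.
list($host, $port) = explode(':', $dbHost);

$pdo = new PDO(
    "mysql:host=$host;port=$port;dbname=$dbName;charset=utf8",
    $dbUser,
    $dbPass
);
```
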
An environment file to alleviate the lack of variables in compose files
Something is not as smooth as it should be here in a docker-compose file. Duplicate keys are needed to share the same value between containers. In my app, the APP_CC_DB_NAME and MYSQL_DATABASE keys contain the database name to use, but there is no way to use the value of one key to define another. So if I need to change the value of one key, I have to manually find every other key containing the same information and make the same change there.
Hence my use of an environment file. By using an environment file in which I can group keys that should contain the same value and add comments, the compose file becomes a little more understandable and modifiable.
# MYSQL_XXXX keys are MySQL docker image configuration
# APP_CC_DB_XXXX keys are related to database access in the application

# These two keys should have the same value
MYSQL_ROOT_PASSWORD=r89hgiuzhgurhghe56789ezgoh
APP_CC_DB_PASS=r89hgiuzhgurhghe56789ezgoh

# These two keys should have the same value
MYSQL_DATABASE=notes_cc
APP_CC_DB_NAME=notes_cc

APP_CC_DB_USER=root

# These two keys define administrator credentials for the app
APP_CC_ADMIN_NAME=louis
APP_CC_ADMIN_PASS=genie
The same environment file is used for each container that needs it in the application, that is, the two servers. The ‘app’ container contains read-only files and does not run software per se.
Server containers need to talk to each other: here the web server must access the database server. Docker Compose automatically creates an internal network that enables containers to reach each other. It does so while assigning IP addresses automatically, so you don’t know them in advance and can’t hardcode them anywhere.
To solve the problem of reaching a running container from another one, Docker Compose creates a host name for each container. By default the name is the same as the service name (e.g. app, db or web in this post). The db container instance will thus be reachable on the internal network under the host name db.
The web server must be able to find the database server’s address. That’s why there is an environment variable APP_CC_DB_HOST: db:3306 defined. The db:3306 value is the host name and port of the database server; the db host name will resolve to the db instance’s IP on the internal network. The environment variable is read in the PHP code (via getenv) to get the database connection location.
Container file sharing
In the official documentation’s samples, files located on the host computer are used inside a running container. In my application, the code is in a released state, so it does not need to be modified outside of the dockerized system, and it should live in an image so that it can be distributed via Docker.
That’s why I built a third image to hold the application codebase. This image is based on the web server one. This makes use of the layered nature of Docker images, which store only the delta between related images.
The files then need to be accessed from the servers, and both the web server and the database server expect initialisation files to be in given folders. The web server will by default serve files under the /var/www/html/ path. The database will run any .sql script located in /docker-entrypoint-initdb.d/, and its configuration can be overridden by providing files in the /etc/mysql/conf.d/ directory.
So we will build the app code image accordingly: the PHP files (index.php) will be put under /var/www/html/, the database initialisation script (init.sql) under /docker-entrypoint-initdb.d/ and the MySQL configuration file (app_mysql.cnf) in /etc/mysql/conf.d/.
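The Dockerfile for the app image can thus stay very small. A sketch, assuming the base image is the web server one and the content/ and init/ source folders of the sample layout:

```dockerfile
FROM lecailliez/laxp:latest

# App code, served by Apache
COPY content/index.php /var/www/html/
# Database initialisation script, run by the MySQL image on first start
COPY init/init.sql /docker-entrypoint-initdb.d/
# MySQL configuration override
COPY init/app_mysql.cnf /etc/mysql/conf.d/
```
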
These folders then need to be declared accessible to other containers. This is what the ‘volumes’ declared in the app section are for:
fragment of docker-compose.yml
app:
  image: lecailliez/demo_blog_post
  volumes:
    - /var/www/html/
    - /docker-entrypoint-initdb.d/
    - /etc/mysql/conf.d/
The two server containers then import (mount) these volumes using the volumes_from key. The value below names the container from which to import the volumes, along with a read/write permission. Because the app code only needs to be read to be executed, it is mounted read-only (ro). By doing this, any existing folders at these paths are masked by the imported ones.
fragment of docker-compose.yml
db:
  volumes_from:
    - app:ro
And voilà! The system is now complete. But there are still some painful points, in my opinion.
Docker Compose allows one to mount a folder (volume) from other containers, or a given directory from the host to a given path in a container. But as far as I understand (and tried), one cannot map a folder from a given path in one container to another path in the mounting container.
I would have liked to create an app image that really mimics the codebase: one folder /content mounted on /var/www/html and one folder /init mounted on /docker-entrypoint-initdb.d/, but it seems this is not possible.
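For comparison, here is a sketch of what the Compose v2 syntax does allow: a host directory can be bound to an arbitrary container path, whereas volumes imported with volumes_from keep their original paths.

```yaml
services:
  web:
    image: lecailliez/laxp:latest
    volumes:
      # Host bind mount: source and target paths may differ...
      - ./content:/var/www/html
  db:
    image: mysql
    volumes_from:
      # ...but volumes imported from a container cannot be remapped.
      - app:ro
```
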
Code download and usage
The full working sample code is available here.
Dockerfile
docker-compose.yml
env
content/index.php
init/init.sql
init/app_mysql.cnf
The last three files are the simplified code of the application. It connects to the database, which is initialised with a greeting message containing UTF-8, and displays it to the user.
You must first build the app code image with docker build, then launch the full system with docker-compose. You can then browse to localhost:5000. You’ll need to wait a bit for the database to be fully initialised and reachable.
$ docker build . -t lecailliez/demo_blog_post
$ docker-compose up
In case you need to clean up, the following commands, run in this order, will be useful:
$ docker-compose down
$ docker volume rm demo_db_data
$ docker rmi lecailliez/demo_blog_post
Thanks for reading!
Remember, I’m not a Docker expert, but I wanted to share knowledge that took some hours to acquire, and to provide a working sample of mounted volumes that’s missing from the official documentation.