When working on continuously expanding projects over long periods of time, software developers often come across code that must be updated for the project to progress. If the code is outdated, preparing the environment and launching certain applications become problematic. Docker is an open-source containerization tool, used primarily on Linux and OS X, that lets developers update and deploy apps more quickly inside isolated software containers. These containers do not require separate operating systems: they share the host's Linux kernel, so a single server can run many containers simultaneously.
Docker further speeds up deployment by providing consistent development and production environments: every member of the team can use the same system libraries, language runtimes, and OS. It supports multiple language versions, including Python, Ruby, Java, and Node, so developers can run a script inside a Docker image rather than installing each language and running scripts separately. Docker's copy-on-write mechanism shares image layers between containers and copies a file only when it is modified, rather than duplicating everything up front. A container can also start in under 0.1 seconds and adds less than 1 MB of disk space of its own. These are significant improvements over standard VMs.
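The copy-on-write idea can be sketched in a few lines of Ruby (a toy model for intuition only; Docker applies the same principle at the filesystem-layer level):

```ruby
# Illustrative sketch of copy-on-write: readers share one copy of the data,
# and a private copy is made only on the first write.
class CowRef
  def initialize(data)
    @data  = data
    @owned = false
  end

  def read
    @data
  end

  def write(index, value)
    unless @owned
      @data  = @data.dup   # the copy happens only here
      @owned = true
    end
    @data[index] = value
  end
end

base = [1, 2, 3]
a = CowRef.new(base)
b = CowRef.new(base)

a.write(0, 99)
p a.read  # [99, 2, 3] -- a got its own copy on write
p b.read  # [1, 2, 3]  -- b still shares the untouched original
```

Until a container writes to a file, it costs nothing extra; that is why many containers from one image stay so small.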
To help developers take advantage of these many benefits, this article shows how to update an old production application with Docker. The app uses a non-standard Sphinx build, wkhtmltopdf, NPM, and Resque. They will initially be kept within the application and then moved to their own containers.
The goals are to:
- Keep 100% compatibility for devs not using Docker.
- Make the app use environment variables for configuration.
- Build Sphinx and install Node.js and wkhtmltopdf in a base container.
- Keep the app and Resque in the same container.
- Simplify the Procfile.
- Create a base Docker Compose configuration file and extend it for the development and CI environments.
Step 1: Setup
Here is what you'll need to install Docker on OS X (native), OS X via Brew and VirtualBox, or Ubuntu.
For OSX Native: https://docs.docker.com/docker-for-mac/
For OSX via Brew and VirtualBox:
```bash
brew install caskroom/cask/brew-cask
brew cask install virtualbox
brew install docker docker-machine docker-compose

# create the vm
docker-machine create -d virtualbox default

# import environment variables for the docker-cli
eval "$(docker-machine env default)"
```
For Ubuntu: follow the official installation guides for Docker and Docker Compose.
Now that you have Docker installed, you can run the application on it. Docker apps are configured via a Dockerfile, which defines how the container is built.
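Before diving into this app's Dockerfile, it helps to see the shape of a minimal one (a generic illustration, not this project's actual file):

```dockerfile
# A minimal Dockerfile: base image, working directory, code, default command.
FROM ruby:2.1-slim
WORKDIR /app
COPY . /app
CMD ["bundle", "exec", "rails", "server"]
```

Each instruction produces a cached image layer, which is why the order of instructions matters for build speed later on.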
Step 2: Unify File Configuration
Assume the app you are updating has hard-coded config files under the version control. When running the app in Docker, the secrets will no longer be stored inside the image.
When modifying these files to unify the configuration, Hash#fetch helps: ENV.fetch(key, default) returns the value of the environment variable when it is set, and falls back to the given default otherwise:
```yaml
# database.yml
development: &default
  username: <%= ENV.fetch('MYSQL_USER', 'foo') %>
  password: <%= ENV.fetch('MYSQL_PASSWORD', 'bar') %>
  host: <%= ENV.fetch('MYSQL_HOST', '127.0.0.1') %>
  database: app_development
...
```
```yaml
# mongoid.yml
development:
  sessions:
    default:
      database: app_development
      hosts:
        - <%= ENV.fetch('MONGO_HOST', 'localhost') %>:<%= ENV.fetch('MONGO_PORT', 27017) %>
...
```
```yaml
# sphinx.yml
development: &default
  address: <%= ENV.fetch('SPHINX_HOST', 'localhost') %>
...
```
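The fetch-with-default pattern these templates rely on can be verified in plain Ruby (MYSQL_HOST here is just the example variable from above):

```ruby
require 'erb'

# ENV.fetch(key, default) returns the environment value when set,
# and the default otherwise -- exactly what the YAML templates rely on.
template = ERB.new("host: <%= ENV.fetch('MYSQL_HOST', '127.0.0.1') %>")

ENV.delete('MYSQL_HOST')
puts template.result  # host: 127.0.0.1  (falls back to the default)

ENV['MYSQL_HOST'] = 'db'
puts template.result  # host: db         (the environment wins)
```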
Step 3: Run Dockerfile
This app uses the lightweight ruby:2.1-slim image as its base and changes BUNDLE_PATH so the installed gems can be cached via a volume (https://docs.docker.com/engine/reference/builder):
```dockerfile
FROM ruby:2.1-slim

RUN apt-get update -qq && \
    apt-get install -y \
    build-essential git \
    libmysqlclient-dev mysql-client libxslt-dev libxml2-dev \
    curl net-tools --no-install-recommends
```
Step 4: Install NPM and Node.js via Node Version Manager
At this point, you explicitly pin the NVM and Node versions. As mentioned, the application uses a non-standard Sphinx build, so you can't reuse an existing prebuilt Docker image and must install these dependencies yourself:
```dockerfile
ENV NVM_DIR "/root/.nvm"
ENV NVM_VERSION 0.33.0
ENV NODE_VERSION 7.5.0

RUN curl https://raw.githubusercontent.com/creationix/nvm/v${NVM_VERSION}/install.sh | bash \
    && . $NVM_DIR/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm alias default $NODE_VERSION \
    && nvm use default

ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
```
Step 5: Containerize the App
This step is a temporary solution that keeps Sphinx inside the application image. When preparing for a production release, Sphinx must be moved to a separate container:
```dockerfile
WORKDIR /tmp
RUN curl -o sphinx.tar.gz https://sphinxsearch.com/files/sphinx-2.2.11-release.tar.gz \
    && tar -zxvf sphinx.tar.gz > /dev/null \
    && curl -o libstemmer_c.tgz https://snowball.tartarus.org/dist/libstemmer_c.tgz \
    && tar -xvzf libstemmer_c.tgz > /dev/null \
    && cd sphinx-*/ \
    && cp -R ../libstemmer_c/* ./libstemmer_c \
    && ./configure --with-mysql --with-libstemmer > /dev/null \
    && make > /dev/null && make install > /dev/null
```
During this step, you also need to clean up extraneous data to reduce the image size:
```dockerfile
# Cleanup
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
```
Step 6: Mount Bundled Gems Folder Inside Container
Docker supports mount points called “volumes,” which let a container access data from either the native host or another container. In this case, you mount your bundled gems folder inside the container:
```dockerfile
ENV APP /app
ENV BUNDLE_PATH /bundle
VOLUME /bundle

# Gems
RUN gem install bundler
ADD Gemfile .
ADD Gemfile.lock .
ADD vendor .
RUN bundle install --jobs 4 --retry 5
```
(To learn more about BUNDLE_PATH and why the gems are installed as the last step, see: https://medium.com/@fbzga/how-to-cache-bundle-install-with-docker-7bed453a5800)
Step 7: Install Node Packages
When Docker performs this step, npm is invoked with root privileges, so it changes the UID used to run package scripts to the user account or to the UID specified by the user config, which defaults to nobody.
Start by setting the unsafe-perm flag to run scripts with root privileges:
```dockerfile
ADD package.json .
RUN npm install --unsafe-perm
```
(For more details on running scripts with root privileges, see: https://docs.npmjs.com/misc/scripts#user)
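If you would rather not pass the flag on every install, the same setting can live in an .npmrc file next to package.json (equivalent behavior; shown as an assumption, verify it against your npm version):

```ini
; .npmrc
unsafe-perm = true
```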
During the last part of this step, you add the app's code, expose the internal port 3000, and process any incoming commands through bin/docker-entrypoint.sh:
```dockerfile
# App
RUN mkdir $APP
WORKDIR $APP
ADD . $APP

EXPOSE 3000
ENTRYPOINT bin/docker-entrypoint.sh $0 $@
```
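The `$0 $@` trick in the shell-form ENTRYPOINT deserves a note: Docker wraps the shell form as `sh -c '…'`, and the container's command is appended as extra arguments, so it lands in `$0` and `$@` inside the entrypoint. This forwarding can be demonstrated from Ruby (a sketch of the shell mechanics, not of Docker itself):

```ruby
# Docker effectively runs:  sh -c 'bin/docker-entrypoint.sh $0 $@' <cmd> <args...>
# Inside the -c script, the command shows up as $0 and its arguments as $@.
out = `sh -c 'echo "entrypoint received: $0 $@"' rails server -p 3000`
puts out  # entrypoint received: rails server -p 3000
```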
Step 8: Create docker-compose.yml File
Before executing any code in the app's container, you need to make sure that old PID files have been deleted and the database is available. The bin/docker-entrypoint.sh script handles both:
```bash
#!/bin/bash
# bin/docker-entrypoint.sh
bundle check || bundle install --jobs 4 --retry 5

pids=( server.pid realtime_updater.pid )
for file in "${pids[@]}"
do
  path="tmp/pids/$file"
  if [ -f $path ]; then
    rm $path
  fi
done

./bin/wait-for-mysql.sh
echo "[INFO] Running in app: $@"
exec "$@"
```
Now you’ll use wait-for-mysql.sh to hold the execution until the DB is ready:
```bash
#!/bin/bash
# bin/wait-for-mysql.sh
echo "[INFO] Waiting for mysql"
until mysql -h"$MYSQL_HOST" -P3306 -u"$MYSQL_ROOT_USER" -p"$MYSQL_ROOT_PASSWORD" -e 'show databases;'; do
  >&2 printf "."
  sleep 1
done
echo "[INFO] Mysql ready"
```
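The same wait-and-retry pattern can be sketched generically in Ruby (wait_until is a hypothetical helper shown for intuition, not part of the app):

```ruby
# Poll until the block reports success, pausing between attempts --
# the same loop wait-for-mysql.sh implements in bash.
def wait_until(attempts: 30, delay: 1)
  attempts.times do
    return true if yield
    sleep delay
  end
  false
end

# Simulate a database that becomes reachable on the third probe.
probes = [false, false, true].each
puts wait_until(delay: 0) { probes.next }  # true
```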
Next, you need to integrate the app with MySQL, Mongo, and Redis, mount the volumes, and expose the environment variables.
As an additional step, you can create docker-compose-base.yml which will be extended in the compose file for dev and ci/prod.
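For instance, a CI override could reuse services from the base file via Compose v2's extends keyword (a hypothetical docker-compose.ci.yml; the file and service names here are illustrative):

```yaml
# docker-compose.ci.yml (hypothetical example)
version: '2'
services:
  app:
    extends:
      file: docker-compose-base.yml
      service: app
    environment:
      RACK_ENV: test
```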
Many of the images on Docker Hub accept environment variables for configuration. In this case, you want to configure MySQL's root user and expose that data in the application container, along with the Redis and Mongo URLs:
```bash
# .docker/.env
REDIS_HOST=redis
MONGO_HOST=mongo

# .docker/db.env
MYSQL_ROOT_USER=root
MYSQL_ROOT_PASSWORD=password
MYSQL_USER=root
MYSQL_PASSWORD=password
MYSQL_HOST=db
MYSQL_DATABASE=app_development
```
The docker-compose.yml includes the application service, along with the images for MySQL, Redis, and Mongo:
```yaml
# docker-compose.yml
version: '2'
services:
  app:
    build: .
    command: ./bin/startup.sh
    volumes:
      - .:/app
      - bundle:/bundle
    ports:
      - '3000:3000'
      - '4000:4000'
    env_file:
      - ./.docker/db.env
      - ./.docker/.env
    links:
      - db
      - redis
      - mongo
  db:
    image: mysql:5.7
    ports:
      - 3306
    env_file:
      - ./.docker/db.env
    volumes:
      - mysql:/var/lib/mysql
  redis:
    image: redis:latest
    volumes:
      - redis:/data
  mongo:
    image: mongo
    volumes:
      - mongo:/data/db
volumes:
  bundle:
  mysql:
  redis:
  mongo:
```

```
# Procfile.docker
web: bundle exec rails s -b 0.0.0.0 -p 3000 -e ${RACK_ENV:-development}
react: npm start
resque: QUEUE=... bundle exec rake resque:work
sphinx: bundle exec rake ts:start NODETACH=true
...
```
```bash
#!/bin/bash
# startup.sh
# Exit whenever anything returns a non-zero status.
set -e

./bin/wait-for-mysql.sh
# echo "[INFO] Precreate databases..."
# rake db:version || bundle exec rake db:setup
# echo "[INFO] Running db:migrate..."
# rake db:migrate
bundle exec foreman start -f Procfile.docker
```
At this point, you have completed all the steps needed to create your new Docker container and can move on to starting and running the application.
Step 9: Start and Run the Application
Your entry point:
```bash
docker-compose up
```
Point your browser to: http://localhost:3000 (Docker for Mac or Ubuntu) or http://192.168.99.100:3000 (docker-machine with VirtualBox).
The default docker-machine VM's IP is usually 192.168.99.100. To make sure your IP is correct, run:

```bash
docker-machine ip default
```
If you want a bash shell inside the app container, or to run a custom command, don't forget to map the service's ports to the host by passing the --service-ports option to the run command:
```bash
docker-compose run --service-ports app bash
```
Here’s what you need to run the specs:
```bash
docker-compose run app bundle exec rspec
```
Step 10: Prepare to Release the App to Production
- Move application logs to STDOUT/STDERR. During the update process, logs are still being written to the container's default file system.
- Ship this STDOUT/STDERR output to a centralized logging system, usually a 3rd party service.
- Move Sphinx, Node.js, Resque, wkhtmltopdf and all schedule-based tasks and processes to their own containers.
- Create a compose file for CI and prod.
Containerizing outdated apps with Docker is becoming increasingly popular due to its many benefits. It speeds up the entire development process while improving portability, transparency, and security.