What is Docker? Why is it important and necessary for developers? Part II


In the first part of the article, we examined the concept of containerization, looked at the difference between LXC and Docker, and discussed what such a powerful tool has replaced in the development world. You can read it in detail here.

Now we continue our review of Docker and talk about the development environment and Docker's main goodies.

Docker Development Environment

When you develop an application, you need to ship the code along with all its components, such as libraries, servers, databases, etc. You may find yourself in a situation where the application runs on your computer but refuses to start on another user's device. This problem is solved by making the software independent of the system it runs on.

But how is this different from virtualization?

Initially, virtualization was designed to eliminate such problems, but it has significant drawbacks:

  • slow startup;
  • you may have to pay for the extra space;
  • not all virtual machines play well together on the same host;
  • maintaining VMs often requires complex configuration;
  • the image may be too large, since the "additional OS" adds gigabytes to the project on top of the operating system, and in most cases several VMs are placed on one server, which takes up even more space.

Docker, by contrast, simply shares the resources of the host OS among all containers, which run as separate processes. It is not the only such platform, but it is undoubtedly one of the most popular and in demand.


If you have not started using Docker yet, read on. Docker has changed the approach to building applications and has become an extremely important tool for developers and DevOps professionals. Using it to automate tasks related to development, testing, and configuration, let's take a look at how, in a few simple steps, you can make a team more efficient and focus directly on product development.

Quick start with docker-compose

Docker-compose is a simple tool that allows you to run multiple Docker containers with one command. Before diving into the details, let's talk about the structure of the project. We use a monorepo, and the code base of each service (web application, API, background workers) is stored in its own root directory. Each service has a Dockerfile describing its dependencies. An example of such a structure can be seen in the demo project.
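Schematically, such a monorepo might look like this (the directory names here are hypothetical):

docker-compose.yml
web/                # each service lives in its own root directory...
  dev.Dockerfile    # ...with its own Dockerfile describing its dependencies
api/
  dev.Dockerfile
workers/
  dev.Dockerfile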

As an example, consider one of the projects developed by our team. The project used technologies such as Ruby (back end), Vue.js (front end), and Golang (background jobs), plus a PostgreSQL database and the Faktory message broker. Docker-compose works best for linking all of these parts together. The docker-compose configuration lives in the docker-compose.yml file inside the project.
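To make the details below concrete, here is a minimal sketch of what such a docker-compose.yml might look like; the service names, paths, and credentials are assumptions, and the real file contains more services:

version: "3"
services:
  app:
    build:
      context: ../app              # path to the service code inside the monorepo
      dockerfile: dev.Dockerfile   # a separate Dockerfile for development
    volumes:
      - "../app:/app"              # the source code is mounted as a volume
    links:
      - db
  db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: password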

During the first launch, all the necessary containers will be created or downloaded. At first glance there is nothing complicated, especially if you have worked with Docker before, but let's still discuss some details:

  • context: ../directory or context: . - this specifies the path to the source code of the service within the monorepo.
  • dockerfile: dev.Dockerfile - for development environments we use separate Dockerfiles. In production, the source code is copied directly into the container, while for development it is mounted as a volume, so there is no need to recreate the container every time the code changes.
  • volumes: - "../directory_with_app_code:/app" - this is how the directory with the code is mounted into the container as a volume.
  • links: - docker-compose can link containers to each other through a virtual network, so, for example, a web service can access the postgres database by hostname: postgres://postgres:password@db.local:5432

Always use the --build argument

By default, if the containers are already present on the host, docker-compose up does not recreate them. To force this operation, use the --build argument. This is necessary when third-party dependencies or the Dockerfile itself change. We made it a rule to always run docker-compose up --build. Docker caches container layers perfectly and will not rebuild them if nothing has changed. Always using --build can slow startup by a few seconds, but it prevents unexpected problems caused by the application running with outdated third-party dependencies.

You can abstract the start of the project with a simple script

#!/bin/sh

docker-compose up --build "$@"

This technique allows you to change the options used when starting the tool, if necessary. Or you can just run ./bin/start.sh.

Partial launch

In the docker-compose.yml example, some services depend on others.
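As a rough sketch (the exact service definitions here are assumptions), such dependencies are declared with depends_on:

services:
  app:
    build: .
    depends_on:
      - postgres
      - redis
  tests:
    build: .
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:12
  redis:
    image: redis:5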

In this fragment, the app and tests services require a database service (postgres in our case) and a data store service (redis in our case). When using docker-compose, you can specify the name of a service to run only it: docker-compose run app. This command will launch the postgres container (with the PostgreSQL service in it) and the redis container (with the Redis service), and after them the app service. In large projects, such features come in handy when different developers need different parts of the system. For example, a frontend specialist working on the landing page does not need the entire project; the landing page itself is enough.

Unnecessary logs go to /dev/null

Some programs generate too many logs. In most cases this information is useless and only distracting. In our demo repository, we turned off the MongoDB logs by setting the log driver to none.
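In compose syntax this looks roughly like the following (assuming the service is called mongo):

services:
  mongo:
    image: mongo:4
    logging:
      driver: none   # discard all log output from this container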

Multiple docker-compose files

After running the docker-compose up command, docker-compose by default searches for a docker-compose.yml file in the current directory. In some cases you may need multiple docker-compose.yml files. To include another configuration file, the --file argument can be used:

docker-compose --file docker-compose-tests.yml up

So why do we need multiple configuration files? First of all, to split a composite project into several subprojects. Conveniently, services from different compose files can still be connected to each other. For example, you can put infrastructure-related containers (databases, queues, etc.) in one docker-compose file and application-related containers in another.
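As a sketch of such a split (file, service, and network names here are assumptions), the infrastructure file can declare a named network that the application file then joins as an external one:

# docker-compose.infra.yml
version: "3.5"
services:
  db:
    image: postgres:12
    networks: [backend]
networks:
  backend:
    name: shared_backend          # a fixed name so other compose files can find it

# docker-compose.yml
version: "3.5"
services:
  app:
    build: .
    networks: [backend]
networks:
  backend:
    external: true                # reuse the network created by the infra file
    name: shared_backend

Start the infrastructure first with docker-compose --file docker-compose.infra.yml up -d, then run the application file as usual.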

Testing

We use docker-compose to run all our tests inside self-hosted drone.io, with various types of testing: unit, integration, UI, and linting. A separate set of tests has been developed for each service, for example, integration and UI tests for the Golang workers. Initially we thought it was better to run the tests every time the main compose file was run, but it soon became clear that this was time-consuming. In some cases you need to be able to run specific tests, so separate compose files were created for this.
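A rough sketch of what such a docker-compose-tests.yml might contain (the image, path, and credential values are assumptions):

version: "3"
services:
  tests:
    build:
      context: ../workers
      dockerfile: test.Dockerfile
    depends_on:
      - test-db
    environment:
      DATABASE_URL: postgres://postgres:password@test-db:5432/workers_test
  test-db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: password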

This docker-compose file does not depend on the entire project. When it is launched, a test database is created, migrations are run, test data is written to the database, and after that the tests of our worker are launched.

The entire list of commands is recorded in the script file tests.sh.
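A possible version of such a script, assuming the test compose file from above:

#!/bin/sh
set -e

# build and run the test containers; the exit code of the tests service
# becomes the exit code of the whole command
docker-compose --file docker-compose-tests.yml up --build \
  --abort-on-container-exit --exit-code-from tests

# tear down the test database and the other containers afterwards
docker-compose --file docker-compose-tests.yml down --volumes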

Docker's Main Goodies

Dockerfile

It might seem that a Dockerfile is just the good old Chef config in a new guise. But in fact, only one line of server configuration remains in it: the name of the base OS image. The rest is part of the application architecture and should be treated as a declaration of the API and dependencies of a service, not of a server. This part is written by the programmer designing the application, naturally, right in the development process. This approach provides not only amazing configuration flexibility, but also avoids the game of broken telephone between the developer and the administrator.
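For illustration, a minimal development Dockerfile for the Ruby service might look like this (the versions and commands are assumptions):

FROM ruby:2.7                  # the only "server configuration" line: the base image

WORKDIR /app
COPY Gemfile Gemfile.lock ./   # the dependencies of the service, declared explicitly
RUN bundle install

# the source code itself is mounted as a volume in development (see above),
# so it is not copied into the image here
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]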

Layered images

Images in Docker are not monolithic but consist of copy-on-write layers. This allows you to reuse the base read-only image files across all containers for free, launch a container without copying the image's file system, create read-only containers, and cache the different stages of image assembly. It is very similar to git commits, if you are familiar with their architecture.
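This layering is also what makes the habit of always running docker-compose up --build cheap. The order of instructions in a Dockerfile determines how well the layer cache works; a common pattern (a sketch) looks like this:

FROM ruby:2.7          # base layer, shared read-only by every container that uses it
COPY Gemfile ./        # this layer is invalidated only when Gemfile changes
RUN bundle install     # cached for as long as the layers above are unchanged
COPY . .               # the application code changes most often, so it goes last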

Docker in Docker

This is the ability of one container to manage other containers, so that nothing needs to be installed on the host machine except Docker itself. There are two ways to reach this state. The first is to use the official "docker" image (previously "Docker-in-Docker", or dind) with the --privileged flag. The second is more lightweight and nimble: mount the Docker socket and binary into the container. It is done like this:

docker run -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/bin/docker \
  -ti ubuntu

But this is not a real hierarchical Docker-in-Docker: containers started this way become siblings of the launching container rather than its children. It is a flat but stable option.
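For comparison, the first (privileged) approach is started roughly like this:

docker run --privileged --name docker-in-docker -d docker:dind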

Summary

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

You just don't have to waste time setting up everything locally on the developer's machine. Dependency version nightmares are gone, and launching a new project takes not days, but only 15 minutes. The developer no longer needs an administrator: we have the same image everywhere, the same environment everywhere, and this is incredibly cool!

In our next articles, we will encounter Docker more than once, so follow us on this blog and on social networks.
