Docker – Daemon Administration and Networking (3)

This time we begin by looking at the Docker daemon and how it interacts with various process managers on different platforms, followed by an introduction to networking in Docker that uses more of the Docker training images to link containers together into a basic network: specifically a PostgreSQL database container and a Python web app container.

This is post three on Docker, following on from Docker – Administration and Container Applications (2). If you’re looking for more generalized administration and basic example uses of the Docker Engine CLI then you may want to read that post first.


1 – Docker Daemon Administration

The Docker daemon is the background service that manages containers and their state on a host.

The starting and stopping of the Docker daemon is often configured through a process manager like systemd or Upstart. In a production environment, this is very useful as you have a lot of customizable control over the behavior of the daemon.

It can also be run directly from the command line, without a process manager:

[alert-announce]

  1. $ docker daemon

[/alert-announce]

When active and running, the daemon listens on the Unix socket unix:///var/run/docker.sock.
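
As a quick sanity check, you can query the daemon’s Remote API directly over that socket. A minimal sketch, assuming curl 7.40 or newer for the --unix-socket flag:

[alert-announce]

  1. $ curl --unix-socket /var/run/docker.sock http://localhost/version

[/alert-announce]

This returns a small JSON document describing the daemon and API versions.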

If you’re running the docker daemon directly like this you can append configuration options to the command.

An example of running the docker daemon with configuration options is as follows:

[alert-announce]

  1. $ docker daemon -D --tls=true --tlscert=/var/docker/server.pem --tlskey=/var/docker/serverkey.pem -H tcp://192.168.59.3:2376

[/alert-announce]

  • -D, --debug=false – Enable or disable debug mode.
  • --tls=false – Enable or disable TLS.
  • --tlscert= – Certificate file location.
  • --tlskey= – Key file location.
  • -H, --host=[] – Daemon socket(s) to connect to.

Many more options are available for the Docker daemon; the daemon reference page in the Docker docs lists them in full.
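
Once the daemon is listening on a TCP socket with TLS as above, a client connects with matching flags. A sketch, assuming client certificates (ca.pem, cert.pem, key.pem) have been generated alongside the server’s:

[alert-announce]

  1. $ docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H tcp://192.168.59.3:2376 info

[/alert-announce]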

Upstart

The default Docker daemon Upstart job is found in /etc/init/docker.conf.

To check the status of the daemon:

[alert-announce]

  1. $ sudo status docker

[/alert-announce]

To start the Docker daemon:

[alert-announce]

  1. $ sudo start docker

[/alert-announce]

Stop the Docker daemon:

[alert-announce]

  1. $ sudo stop docker

[/alert-announce]

Or restart the daemon:

[alert-announce]

  1. $ sudo restart docker

[/alert-announce]

Logs for Upstart jobs are found in /var/log/upstart and are rotated and compressed over time. With the daemon running, follow the active log file – docker.log – via:

[alert-announce]

  1. $ sudo tail -fn 15 /var/log/upstart/docker.log

[/alert-announce]

systemd

Default unit files are stored in the subdirectories of /usr/lib/systemd and /lib/systemd/system. Custom user-created unit files are kept in /etc/systemd/system.

To check the status of the daemon:

[alert-announce]

  1. $ sudo systemctl status docker

[/alert-announce]

To start the Docker daemon:

[alert-announce]

  1. $ sudo systemctl start docker

[/alert-announce]

Stop the Docker daemon:

[alert-announce]

  1. $ sudo systemctl stop docker

[/alert-announce]

Or restart the daemon:

[alert-announce]

  1. $ sudo systemctl restart docker

[/alert-announce]

To ensure the Docker daemon starts at boot:

[alert-announce]

  1. $ sudo systemctl enable docker

[/alert-announce]

Logs for Docker are viewed in systemd with:

[alert-announce]

  1. $ journalctl -u docker

[/alert-announce]
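
To follow the log live rather than paging through it, journalctl’s -f flag works here as it does elsewhere:

[alert-announce]

  1. $ sudo journalctl -u docker -f

[/alert-announce]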

A more in-depth look at systemd and Docker is kept here in the Docker docs:

Check out Docker Documentation – systemd
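
If you need to pass options to the daemon under systemd, the usual approach is a drop-in unit rather than editing the stock unit file directly. A minimal sketch; the override path is standard, but the ExecStart line shown (based on the stock Ubuntu unit) and its flags are illustrative:

[alert-announce]

/etc/systemd/system/docker.service.d/override.conf

  1. [Service]
  2. # Clear the packaged ExecStart before redefining it
  3. ExecStart=
  4. ExecStart=/usr/bin/docker daemon -H fd:// -D

[/alert-announce]

Run sudo systemctl daemon-reload followed by sudo systemctl restart docker for the change to take effect.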

2 – Process Manager Container Automation

Restart policies are an in-built Docker mechanism for restarting containers automatically when they exit. They must be set manually with the --restart flag (valid policies include no, on-failure, and always) and are also triggered when the Docker daemon starts up (like after a system reboot). Restart policies start linked containers in the correct order too.
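
For example, to run a container that is restarted whenever it exits or the daemon comes back up (the container name here is illustrative):

[alert-announce]

  1. $ docker run -d --restart=always --name restarting-webapp training/webapp python app.py

[/alert-announce]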

If you have non-Docker processes that depend on Docker containers, you can use a process manager like Upstart, systemd, or supervisor to replace this functionality instead of restart policies.

This is what we will cover in this step.

Note: Be aware that process managers will conflict with Docker restart policies if both are in action, so don’t set restart policies on containers you manage with a process manager.

For these examples, assume a container running Ghost has already been created with the name ghost-container, as sketched below.
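
If you want to follow along, something like this would create it. A sketch assuming the official ghost image from Docker Hub; the container is stopped again afterwards so the process manager can take over starting it:

[alert-announce]

  1. $ docker run -d --name ghost-container ghost
  2. $ docker stop ghost-container

[/alert-announce]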

Upstart

[alert-announce]

/etc/init/ghost.conf

  1. description "Ghost Blogging Container"
  2. author "Scarlz"
  3. start on filesystem and started docker
  4. stop on runlevel [!2345]
  5. respawn
  6. script
  7. /usr/bin/docker start -a ghost-container
  8. end script

[/alert-announce]

With this setup the process manager attaches to the running container via docker start -a, starting it first if needed.

All signals from Docker are also forwarded so that the process manager can detect when a container stops, to correctly restart it.

If you need to pass options to the containers (such as --env) then you’ll need to use docker run rather than docker start in the job configuration.

For example:

[alert-announce]

/etc/init/ghost.conf

  1. script
  2. /usr/bin/docker run --env foo=bar --name ghost-container ghost
  3. end script

[/alert-announce]

This differs in that it creates a new container from the ghost image every time the service starts, taking the extra options into account. Because the name is fixed, the old container must be removed (docker rm ghost-container) before the job can start again.

systemd

[alert-announce]

/etc/systemd/system/ghost.service

  1. [Unit]
  2. Description=Ghost Blogging Container
  3. Requires=docker.service
  4. After=docker.service
  5. [Service]
  6. Restart=always
  7. ExecStart=/usr/bin/docker start -a ghost-container
  8. ExecStop=/usr/bin/docker stop -t 2 ghost-container
  9. [Install]
  10. WantedBy=multi-user.target

[/alert-announce]
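
After saving the unit file, reload systemd, then enable and start the service so the container comes up at boot:

[alert-announce]

  1. $ sudo systemctl daemon-reload
  2. $ sudo systemctl enable ghost.service
  3. $ sudo systemctl start ghost.service

[/alert-announce]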

As with the Upstart job, this attaches to the existing container, starting it if necessary, and signals are forwarded so that systemd can detect when the container stops and restart it correctly.

Again, if you need to pass options to the container (such as --env), then you’ll need to use docker run rather than docker start in the unit file.

For example:

[alert-announce]

/etc/systemd/system/ghost.service

  1. ExecStart=/usr/bin/docker run --env foo=bar --name ghost-container ghost
  2. ExecStop=/usr/bin/docker stop -t 2 ghost-container ; /usr/bin/docker rm -f ghost-container

[/alert-announce]

This differs in that it creates a new container with the extra options every time the service starts, then stops and removes it when the service ends.


3 – Docker Networks

Network drivers are what link containers together into networks. Docker comes with two default network drivers as part of the normal installation:

  • The bridge driver.
  • The overlay driver.

These two drivers can be replaced with third-party drivers that perform better in particular situations, but for basic, single-host Docker use the given defaults are fine.

Docker also automatically includes three default networks with the base install:

[alert-announce]

  1. $ docker network ls

[/alert-announce]

Listing them as:

[alert-announce]

Output

  1. NETWORK ID          NAME                  DRIVER
  2. 2d41f8bbf514        host                  host
  3. f9ee6308ecdd        bridge                bridge
  4. 49dab653f349        none                  null

[/alert-announce]

The network named bridge is a special network: Docker launches any and all containers in this network unless told otherwise.

So if you currently have containers running, they will have been placed in the bridge network.

Networks can be inspected using the next command, where bridge is the network name to be inspected:

[alert-announce]

  1. $ docker network inspect bridge

[/alert-announce]

The output shows any and all configured directives for the network:

[alert-announce]

Output

  1. [
  2.     {
  3.         "Name": "bridge",
  4.         "Id": "f9ee6308ecdd5dc5a588428469de1b7c475fdafdab49cfc33c1c3ac0bf0559ab",
  5.         "Scope": "local",
  6.         "Driver": "bridge",
  7.         "IPAM": {
  8.             "Driver": "default",
  9.             "Config": [
  10.                 {
  11.                     "Subnet": "172.17.0.0/16"
  12.                 }
  13.             ]
  14.         },
  15.         "Containers": {
  16.             "ff98b5ed01dd4323f0ce38af9b8cea2d49d0b1e194cf147a3a8f632278a11451": {
  17.                 "EndpointID": "b7c9fabcda00ccebd6523f76477b51eba00dd5d3f26940355139fff62d5576bb",
  18.                 "MacAddress": "02:42:ac:11:00:02",
  19.                 "IPv4Address": "172.17.0.2/16",
  20.                 "IPv6Address": ""
  21.             }
  22.         },
  23.         "Options": {
  24.             "com.docker.network.bridge.default_bridge": "true",
  25.             "com.docker.network.bridge.enable_icc": "true",
  26.             "com.docker.network.bridge.enable_ip_masquerade": "true",
  27.             "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
  28.             "com.docker.network.bridge.name": "docker0",
  29.             "com.docker.network.driver.mtu": "1500"
  30.         }
  31.     }
  32. ]

[/alert-announce]

This inspect output changes as a network is altered and configured; how to do this is covered in later steps.
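
If you only need a single value from that output, piping it through grep is a quick way to pick one out, for example the network’s subnet:

[alert-announce]

  1. $ docker network inspect bridge | grep Subnet

[/alert-announce]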

4 – Creating Docker Networks

Networks are a natural way to isolate containers from other containers or other networks. The default networks aren’t meant to be relied upon solely, however; it’s better to create your own network groups.

Remember there are two default drivers and therefore two native network types: bridge and overlay. A bridge network is confined to a single host running the Docker Engine, whereas an overlay network can span multiple hosts running the software.

To make the simpler “bridge” type network we use the create option:

[alert-announce]

  1. $ docker network create -d bridge test-bridge-network

[/alert-announce]

In this last command, -d bridge specifies the driver and therefore the network type we want to create, and the new network’s name goes at the end of the command.
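
As an aside, you can also pin the network’s address range at creation time rather than letting Docker allocate one. The network name and subnet here are just examples:

[alert-announce]

  1. $ docker network create -d bridge --subnet=172.25.0.0/16 custom-subnet-network

[/alert-announce]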

To see the new network after creation:

[alert-announce]

  1. $ docker network ls

[/alert-announce]

Shown on the last line:

[alert-announce]

Output

  1. NETWORK ID          NAME                  DRIVER
  2. f9ee6308ecdd        bridge                bridge
  3. 49dab653f349        none                  null
  4. 2d41f8bbf514        host                  host
  5. 08f44ef7de28        test-bridge-network   bridge

[/alert-announce]

Overlay networks are a much wider topic due to their use of multiple hosts, so they aren’t covered in this post, but the basic principles and where to start are covered in the link below:

Check out Docker Documentation – Working with Network Commands.
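
For a flavour of it, creation uses the same command with the overlay driver, though it only works once the participating hosts are configured to share a key-value store (Consul, Etcd, or ZooKeeper), as the linked guide describes. A sketch only:

[alert-announce]

  1. $ docker network create -d overlay --subnet=10.0.9.0/24 multi-host-network

[/alert-announce]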

5 – Connecting Containers to Networks

Creating and using these networks allows container applications to operate in unison and as securely as possible. Containers inside a network can only interact with their counterparts on the same network and are isolated from everything outside it, similar to VLAN segregation inside an IP-based network.

Containers are usually added to a network when you first launch them. We’ll follow the example from the Docker documentation that uses a PostgreSQL database container and the Python web app to demonstrate a simple network configuration.

First launch a container running the PostgreSQL database training image, and in the process add it to your custom made bridge network from the previous step.

To do this we must pass the --net= flag to the new container, providing it with the name of our custom bridge network, which in my earlier example was test-bridge-network:

[alert-announce]

  1. $ docker run -d --net=test-bridge-network --name db training/postgres

[/alert-announce]

You can inspect this aptly named db container to see where exactly it is connected:

[alert-announce]

  1. $ docker inspect --format='{{json .NetworkSettings.Networks}}' db

[/alert-announce]

This shows us the network details for the database container’s test-bridge-network connection:

[alert-announce]

Output

  1. {"test-bridge-network":{"EndpointID":"0008c8566542ef24e5e57d5911c8e33a79f0fcb91b1bbdd60d5cdec3217fb517","Gateway":"172.18.0.1","IPAddress":"172.18.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:12:00:02"}}

[/alert-announce]

Next, run the Python training web application in detached mode without any extra network options:

[alert-announce]

  1. $ docker run -d --name python-webapp training/webapp python app.py

[/alert-announce]

Inspect the python-webapp container’s network connection in the same way as before:

[alert-announce]

  1. $ docker inspect --format='{{json .NetworkSettings.Networks}}' python-webapp

[/alert-announce]

As expected this new container is running under the default bridge network, shown in the output of the last command:

[alert-announce]

Output

  1. {"bridge":{"EndpointID":"e5c7f1c8d097fdafc35b89d7bce576fe01a22709424643505d79abe394a59767","Gateway":"172.17.0.1","IPAddress":"172.17.0.2","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"02:42:ac:11:00:02"}}

[/alert-announce]

Docker lets us connect a container to as many networks as we like. More importantly for us, we can also connect an already running container to a network.

Attach the running python-webapp container to test-bridge-network:

[alert-announce]

  1. $ docker network connect test-bridge-network python-webapp

[/alert-announce]

To test the container connections to our custom network we can ping from one to the other.

Get the IP address of the db container:

[alert-announce]

  1. $ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db

[/alert-announce]

In my case this was:

[alert-announce]

Output

  1. 172.18.0.2

[/alert-announce]

Now that we have the IP address, open an interactive shell into the python-webapp container:

[alert-announce]

  1. $ docker exec -it python-webapp bash

[/alert-announce]

Attempt to ping the db container using the IP address from before, substituting in your own equivalent for 172.18.0.2:

[alert-announce]

  1. ping -c 10 172.18.0.2

[/alert-announce]

Container names also work in place of IP addresses on a user-defined network, so ping -c 10 db is equivalent here. As long as you successfully connected both containers earlier on, the ping will succeed:

[alert-announce]

Output

  1. root@fc0f73c129c0:/opt/webapp# ping -c 10 db
  2. PING db (172.18.0.2) 56(84) bytes of data.
  3. 64 bytes from db (172.18.0.2): icmp_seq=1 ttl=64 time=0.216 ms
  4. 64 bytes from db (172.18.0.2): icmp_seq=2 ttl=64 time=0.059 ms
  5. 64 bytes from db (172.18.0.2): icmp_seq=3 ttl=64 time=0.053 ms
  6. 64 bytes from db (172.18.0.2): icmp_seq=4 ttl=64 time=0.063 ms
  7. 64 bytes from db (172.18.0.2): icmp_seq=5 ttl=64 time=0.065 ms
  8. 64 bytes from db (172.18.0.2): icmp_seq=6 ttl=64 time=0.063 ms
  9. 64 bytes from db (172.18.0.2): icmp_seq=7 ttl=64 time=0.062 ms
  10. 64 bytes from db (172.18.0.2): icmp_seq=8 ttl=64 time=0.064 ms
  11. 64 bytes from db (172.18.0.2): icmp_seq=9 ttl=64 time=0.061 ms
  12. 64 bytes from db (172.18.0.2): icmp_seq=10 ttl=64 time=0.063 ms

     

  13. --- db ping statistics ---
  14. 10 packets transmitted, 10 received, 0% packet loss, time 8997ms
  15. rtt min/avg/max/mdev = 0.053/0.076/0.216/0.047 ms

[/alert-announce]


Press CTRL + D to exit the container prompt, or type in exit instead.

And with that we have two containers on the same user-created network, able to communicate with each other and share data, which is exactly what we’d be aiming for with the PostgreSQL database and Python web app.

There are more ways of sharing data between containers once they are connected through a network, but these are covered in the next post of the series.


6 – Miscellaneous Networking Commands

Here are a few complementary commands related to what has already been covered in this post.

At some point, you are likely to need to remove a container from its network. This is done by using the disconnect command:

[alert-announce]

  1. $ docker network disconnect test-bridge-network <container-name>

[/alert-announce]

Here test-bridge-network is the name of the network, followed by which container you want to remove from it.

When all the containers in a network are stopped or disconnected, you can remove networks themselves completely with:

[alert-announce]

  1. $ docker network rm test-bridge-network

[/alert-announce]

Meaning the test-bridge-network is now deleted and absent from the list of existing networks:

[alert-announce]

Output

  1. NETWORK ID          NAME                  DRIVER
  2. 2e38b3a44489        bridge                bridge
  3. 79d9d21edbec        none                  null
  4. 61371e641e1b        host                  host

[/alert-announce]

This output comes from running the docker network ls command again.


Networking in Docker begins here with these examples but goes a lot further than what we’ve covered. Data volumes, data containers, and mounting host volumes are described in the next post on Docker when it’s released.