Start using Docker Swarm to easily manage your containers.

Initial information

Guest operating system version.

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04 LTS
Release:        20.04
Codename:       focal

Docker version.

$ docker --version
Docker version 19.03.8, build afacb8b7f0

Firewall information

The management node exposes port 2377/TCP for cluster management communication.

Every swarm node uses port 7946 (TCP and UDP) for internal node-to-node communication and port 4789/UDP for overlay network traffic.
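If the nodes run ufw, the required openings can be sketched as follows (this assumes ufw is the active firewall; adjust to your environment):

```shell
# Open Docker Swarm ports with ufw (a sketch).
sudo ufw allow 2377/tcp   # cluster management (manager nodes only)
sudo ufw allow 7946/tcp   # node-to-node communication
sudo ufw allow 7946/udp   # node-to-node communication
sudo ufw allow 4789/udp   # overlay network (VXLAN) traffic
```

Apply the 2377/TCP rule only on manager nodes; the remaining rules belong on every swarm node.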

Installation – management node

Install docker.io package.

$ sudo apt install docker.io

Initialize Docker Swarm (become a management node).

deploy@swarm-node-1:~# docker swarm init
Swarm initialized: current node (jbq6dn64ezngq9g34srv96c49) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1adxxc02t8c1lhuq8w4g2bx7abpjry2n2a5d997p7lasp72t26-8ixnsh1lstuew81lm63p19yz4 172.16.30.111:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Swarm initialization creates two additional networks.

deploy@swarm-node-1:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4aab53e8e1a6        bridge              bridge              local
d5a2e53e5a0d        docker_gwbridge     bridge              local
0dd5e8de16e9        host                host                local
mmw9lwllnm4g        ingress             overlay             swarm
ceb52a1ce887        none                null                local

The docker_gwbridge network is a local bridge that connects the overlay networks to the physical network of each Docker daemon participating in the swarm.

deploy@swarm-node-1:~# docker network inspect docker_gwbridge 
[
    {
        "Name": "docker_gwbridge",
        "Id": "d5a2e53e5a0d9d525f67fa17105c11218d8b7c4d5d0e34ac5c703fbfdea24951",
        "Created": "2020-07-20T15:55:35.71862268Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ingress-sbox": {
                "Name": "gateway_ingress-sbox",
                "EndpointID": "1111da7eeb16afc581ea8ef3393f4068e886c1c17ab385e547582d11efda93c6",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.name": "docker_gwbridge"
        },
        "Labels": {}
    }
]

The ingress network is an overlay network used by default to expose services, unless stated otherwise.

deploy@swarm-node-1:~# docker network inspect ingress
[
    {
        "Name": "ingress",
        "Id": "mmw9lwllnm4geo7poj4pmakg9",
        "Created": "2020-07-20T15:55:35.053774437Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": true,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "ingress-sbox": {
                "Name": "ingress-endpoint",
                "EndpointID": "03db719c5ca9e172253d2067b2337da0ee8725d478fe7e88a7406f6b8abfd6bd",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "ac2f6dd83307",
                "IP": "172.16.30.111"
            }
        ]
    }
]

Installation – worker node

Install docker.io package.

$ sudo apt install docker.io

Join the Docker Swarm as a worker node.

deploy@swarm-node-2:~# docker swarm join --token SWMTKN-1-1adxxc02t8c1lhuq8w4g2bx7abpjry2n2a5d997p7lasp72t26-8ixnsh1lstuew81lm63p19yz4 172.16.30.111:2377
This node joined a swarm as a worker.
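If the join token from the initialization output is lost, or needs to be invalidated, it can be recovered or rotated from any manager node:

```shell
# Print the current worker join command (run on a manager node).
docker swarm join-token worker

# Rotate the worker token if it may have leaked; previously issued tokens stop working,
# but nodes that already joined are unaffected.
docker swarm join-token --rotate worker
```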

Display node information

List all nodes.

# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jbq6dn64ezngq9g34srv96c49 *   swarm-node-1        Ready               Active              Leader              19.03.8
jaz8xev6djzpw7sjtdzx1qdu8     swarm-node-2        Ready               Active                                  19.03.8
zgfvp5wkk9bjde0iiop8yl5wd     swarm-node-3        Ready               Active                                  19.03.8

List manager nodes.

deploy@swarm-node-1:~# docker node ls --filter "role=manager"
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jbq6dn64ezngq9g34srv96c49 *   swarm-node-1        Ready               Active              Leader              19.03.8

List worker nodes.

deploy@swarm-node-1:~# docker node ls --filter "role=worker"
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jaz8xev6djzpw7sjtdzx1qdu8     swarm-node-2        Ready               Active                                  19.03.8
zgfvp5wkk9bjde0iiop8yl5wd     swarm-node-3        Ready               Active                                  19.03.8

List node details using human-readable output.

deploy@swarm-node-1:~# docker node inspect swarm-node-1 --pretty
ID:                     jbq6dn64ezngq9g34srv96c49
Hostname:               swarm-node-1
Joined at:              2020-07-20 15:55:34.487042196 +0000 utc
Status:
 State:                 Ready
 Availability:          Active
 Address:               172.16.30.111
Manager Status:
 Address:               172.16.30.111:2377
 Raft Status:           Reachable
 Leader:                Yes
Platform:
 Operating System:      linux
 Architecture:          x86_64
Resources:
 CPUs:                  8
 Memory:                15.61GiB
Plugins:
 Log:           awslogs, fluentd, gcplogs, gelf, journald, json-file, local, logentries, splunk, syslog
 Network:               bridge, host, ipvlan, macvlan, null, overlay
 Volume:                local
Engine Version:         19.03.8
TLS Info:
 TrustRoot:
-----BEGIN CERTIFICATE-----
MIIBajCCARCgAwIBAgIUY6quH/n/bNfmJKR5TU4wcpMzMT4wCgYIKoZIzj0EAwIw
EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMjAwNzIwMTU1MTAwWhcNNDAwNzE1MTU1
MTAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH
A0IABJ0oo7+C9SFtLQjLJAZSG7cnMcATRTw2SX2oGRqbz0CtFLSRDu6U5c5Tovnc
h/Kh3D8C8vQ0pnGoVgK7Lu1eHXWjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB
Af8EBTADAQH/MB0GA1UdDgQWBBQL6guUtfq6eguRXfM3dVpoLKNfujAKBggqhkjO
PQQDAgNIADBFAiEA81VcPEhSmr2KJ+r2nXqhBi2u4XON9NeBQG+OAmKwI6ICIDHs
KlWA4BxCnO7qoH4IY71hPm4i+Mx2aMeTMa8gtCIW
-----END CERTIFICATE-----

 Issuer Subject:        MBMxETAPBgNVBAMTCHN3YXJtLWNh
 Issuer Public Key:     MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEnSijv4L1IW0tCMskBlIbtycxwBNFPDZJfagZGpvPQK0UtJEO7pTlzlOi+dyH8qHcPwLy9DSmcahWArsu7V4ddQ==

List all nodes using a custom format.

deploy@swarm-node-1:~# docker node ls --format "{{.Hostname}}\t{{.Status}}/{{.Availability}}"
swarm-node-1    Ready/Active
swarm-node-2    Ready/Active
swarm-node-3    Ready/Active

Get specific node information using JSON format.

# docker node inspect --format '{{json .Spec}}' swarm-node-1
{"Labels":{},"Role":"manager","Availability":"active"}
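The same Go-template syntax can drill into a single field, which is handy in scripts (node name taken from the listing above):

```shell
# Print only the availability of a node (run on a manager node).
docker node inspect --format '{{.Spec.Availability}}' swarm-node-1
```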

Manage node labels

Add multiple labels to a single node.

deploy@swarm-node-1:~# docker node update --label-add for=cats --label-add color=white swarm-node-3
swarm-node-3

List node details, including labels, using human-readable output.

deploy@swarm-node-1:~# docker node inspect swarm-node-3 --pretty
ID:                     zgfvp5wkk9bjde0iiop8yl5wd
Labels:
 - color=white
 - for=cats
Hostname:               swarm-node-3
Joined at:              2020-07-20 15:57:20.654887843 +0000 utc
Status:
 State:                 Ready
 Availability:          Active
 Address:               172.16.50.111
Platform:
 Operating System:      linux
 Architecture:          x86_64
Resources:
 CPUs:                  8
 Memory:                31.34GiB
Plugins:
 Log:           awslogs, fluentd, gcplogs, gelf, journald, json-file, local, logentries, splunk, syslog
 Network:               bridge, host, ipvlan, macvlan, null, overlay
 Volume:                local
Engine Version:         19.03.8
TLS Info:
 TrustRoot:
-----BEGIN CERTIFICATE-----
MIIBajCCARCgAwIBAgIUY6quH/n/bNfmJKR5TU4wcpMzMT4wCgYIKoZIzj0EAwIw
EzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMjAwNzIwMTU1MTAwWhcNNDAwNzE1MTU1
MTAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH
A0IABJ0oo7+C9SFtLQjLJAZSG7cnMcATRTw2SX2oGRqbz0CtFLSRDu6U5c5Tovnc
h/Kh3D8C8vQ0pnGoVgK7Lu1eHXWjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB
Af8EBTADAQH/MB0GA1UdDgQWBBQL6guUtfq6eguRXfM3dVpoLKNfujAKBggqhkjO
PQQDAgNIADBFAiEA81VcPEhSmr2KJ+r2nXqhBi2u4XON9NeBQG+OAmKwI6ICIDHs
KlWA4BxCnO7qoH4IY71hPm4i+Mx2aMeTMa8gtCIW
-----END CERTIFICATE-----

 Issuer Subject:        MBMxETAPBgNVBAMTCHN3YXJtLWNh
 Issuer Public Key:     MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEnSijv4L1IW0tCMskBlIbtycxwBNFPDZJfagZGpvPQK0UtJEO7pTlzlOi+dyH8qHcPwLy9DSmcahWArsu7V4ddQ==

List labels for a specific node.

deploy@swarm-node-1:~# docker node inspect --format '{{range $k,$v := .Spec.Labels }}{{printf "%s=%s\n" $k $v}}{{end}}' swarm-node-3
color=white
for=cats
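Labels listed in key=value form can be fed straight back into --constraint flags; a pure-shell sketch (label values copied from above):

```shell
# Turn key=value label lines into --constraint flags for `docker service create`.
constraints=""
while IFS='=' read -r key value; do
  [ -n "$key" ] || continue
  constraints="$constraints --constraint node.labels.$key==$value"
done <<'EOF'
color=white
for=cats
EOF

echo "$constraints"
```

The resulting string can be passed to a `docker service create` invocation, e.g. `docker service create $constraints --name nginx nginx`.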

Remove multiple labels from a single node.

deploy@swarm-node-1:~# docker node update --label-rm for --label-rm color swarm-node-3
swarm-node-3

Manage node availability

Ensure that the scheduler will not assign new tasks to a particular node (existing tasks keep running).

deploy@swarm-node-1:~# docker node update --availability pause swarm-node-2
swarm-node-2
deploy@swarm-node-1:~# docker node ls --filter "name=swarm-node-2"
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jaz8xev6djzpw7sjtdzx1qdu8     swarm-node-2        Ready               Pause                                   19.03.8

Ensure that the scheduler shuts down every task running on a specific node and reschedules it elsewhere.

deploy@swarm-node-1:~# docker node update --availability drain swarm-node-2
swarm-node-2
deploy@swarm-node-1:~# docker node ls --filter "name=swarm-node-2"
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jaz8xev6djzpw7sjtdzx1qdu8     swarm-node-2        Ready               Drain                                   19.03.8

Ensure that the scheduler can assign new tasks to a particular node again.

deploy@swarm-node-1:~# docker node update --availability active swarm-node-2
swarm-node-2
deploy@swarm-node-1:~# docker node ls --filter "name=swarm-node-2"
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
jaz8xev6djzpw7sjtdzx1qdu8     swarm-node-2        Ready               Active                                  19.03.8

Publish simple service

Publish the nginx service.

deploy@swarm-node-1:~# docker service create --name nginx_test -p published=8010,target=80,protocol=tcp nginx
47kr8jbpf66i5cio04vrhbljl
overall progress: 1 out of 1 tasks 
1/1: running   [==================================================>] 
verify: Service converged

Now, the nginx service is available on every Docker Swarm node on port 8010.

deploy@swarm-node-1:~# curl -I http://swarm-node-3:8010
HTTP/1.1 200 OK
Server: nginx/1.19.1
Date: Mon, 20 Jul 2020 17:37:55 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 07 Jul 2020 15:52:25 GMT
Connection: keep-alive
ETag: "5f049a39-264"
Accept-Ranges: bytes
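To confirm that the routing mesh answers on every node, a quick loop over the node names (the hostnames used throughout this post) does the job:

```shell
# Check the published service on every swarm node via the routing mesh.
for node in swarm-node-1 swarm-node-2 swarm-node-3; do
  printf '%s: ' "$node"
  curl -s -o /dev/null -w '%{http_code}\n' "http://${node}:8010"
done
```

Every line should report 200, regardless of which node actually runs the container.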

Service was started on swarm-node-1.

deploy@swarm-node-1:~# docker service ps nginx_test
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
1k5966kr6w24        nginx_test.1        nginx:latest        swarm-node-1        Running             Running 26 minutes ago

Service details.

deploy@swarm-node-1:~# docker service inspect nginx_test --pretty
ID:             47kr8jbpf66i5cio04vrhbljl
Name:           nginx_test
Service Mode:   Replicated
 Replicas:      1
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         nginx:latest@sha256:a93c8a0b0974c967aebe868a186e5c205f4d3bcb5423a56559f2f9599074bbcd
 Init:          false
Resources:
Endpoint Mode:  vip
Ports:
 PublishedPort = 8010
  Protocol = tcp
  TargetPort = 80
  PublishMode = ingress 

Publish simple service on selected nodes

Publish service on worker nodes using two replicas.

deploy@swarm-node-1:~# docker service create --constraint node.role==worker --replicas 2 --name nginx_replication_test -p published=8011,target=80,protocol=tcp nginx
nku9lq34rb2yxb65nj9f5aorq
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 

Service was started on swarm-node-2 and swarm-node-3 as expected.

deploy@swarm-node-1:~# docker service ps nginx_replication_test
ID                  NAME                       IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
xosj37664pv6        nginx_replication_test.1   nginx:latest        swarm-node-3        Running             Running 13 minutes ago                       
khzc93f04vke        nginx_replication_test.2   nginx:latest        swarm-node-2        Running             Running 13 minutes ago  

Service details.

deploy@swarm-node-1:~# docker service inspect nginx_replication_test --pretty
ID:             nku9lq34rb2yxb65nj9f5aorq
Name:           nginx_replication_test
Service Mode:   Replicated
 Replicas:      2
Placement:
 Constraints:   [node.role==worker]
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         nginx:latest@sha256:a93c8a0b0974c967aebe868a186e5c205f4d3bcb5423a56559f2f9599074bbcd
 Init:          false
Resources:
Endpoint Mode:  vip
Ports:
 PublishedPort = 8011
  Protocol = tcp
  TargetPort = 80
  PublishMode = ingress 

Publish simple service on selected nodes using multiple constraints

Add a label to a single worker node.

deploy@swarm-node-1:~# docker node update --label-add for=cats   swarm-node-2
swarm-node-2

Publish service on worker nodes with a specific label using three replicas.

deploy@swarm-node-1:~# docker service create --constraint node.role==worker --constraint node.labels.for==cats --replicas 3 --name nginx_label_test -p published=8012,target=80,protocol=tcp nginx
65isd7cnsxplmimwma0uuohud
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service converged 

Service was started on swarm-node-2 as expected.

deploy@swarm-node-1:~# docker service ps nginx_label_test
ID                  NAME                 IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
6ddr76j77mqf        nginx_label_test.1   nginx:latest        swarm-node-2        Running             Running 58 seconds ago                       
xbwzhb6etgpv        nginx_label_test.2   nginx:latest        swarm-node-2        Running             Running 58 seconds ago                       
44v8r53h34o0        nginx_label_test.3   nginx:latest        swarm-node-2        Running             Running 58 seconds ago 

Service details

deploy@swarm-node-1:~# docker service inspect nginx_label_test --pretty
ID:             65isd7cnsxplmimwma0uuohud
Name:           nginx_label_test
Service Mode:   Replicated
 Replicas:      3
Placement:
 Constraints:   [node.role==worker node.labels.for==cats]
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         nginx:latest@sha256:a93c8a0b0974c967aebe868a186e5c205f4d3bcb5423a56559f2f9599074bbcd
 Init:          false
Resources:
Endpoint Mode:  vip
Ports:
 PublishedPort = 8012
  Protocol = tcp
  TargetPort = 80
  PublishMode = ingress 

Publish simple service from private registry

Log in to the private registry on the manager node.

deploy@swarm-node-1:~# docker login registry.sleeplessbeastie.eu

Alternatively, edit the local docker configuration.

deploy@swarm-node-1:~# cat ~/.docker/config.json 
{
	"auths": {
		"registry.example.org": {
			"auth": "J4thisJ43ZE0is3NVKdEfake4RNEZDauthVL"
		}
	},
	"HttpHeaders": {
		"User-Agent": "Docker-Client/19.03.8 (linux)"
	}
}

Use the --with-registry-auth parameter on every service operation to send registry authentication details to the swarm agents.

deploy@swarm-node-1:~# docker service create --name jekyll_development -p 8080:80 --with-registry-auth registry.example.org/websites/jekyll:development

Update simple service

Scale service.

deploy@swarm-node-1:~# docker service scale nginx_test=4
nginx_test scaled to 4
overall progress: 4 out of 4 tasks 
1/4: running   [==================================================>] 
2/4: running   [==================================================>] 
3/4: running   [==================================================>] 
4/4: running   [==================================================>] 
verify: Service converged

List services to confirm that it has been scaled up.

deploy@swarm-node-1:~# docker service ls
ID                  NAME                     MODE                REPLICAS            IMAGE               PORTS
65isd7cnsxpl        nginx_label_test         replicated          3/3                 nginx:latest        *:8012->80/tcp
nku9lq34rb2y        nginx_replication_test   replicated          2/2                 nginx:latest        *:8011->80/tcp
47kr8jbpf66i        nginx_test               replicated          4/4                 nginx:latest        *:8010->80/tcp

Remove the existing constraints from the service and add a new one.

deploy@swarm-node-1:~# docker service update --constraint-add node.role==manager --constraint-rm node.role==worker --constraint-rm node.labels.for==cats --replicas 2 nginx_label_test
nginx_label_test
overall progress: 2 out of 2 tasks 
1/2: running   [==================================================>] 
2/2: running   [==================================================>] 
verify: Service converged 

The service will move to the manager node as expected.

deploy@swarm-node-1:~# docker service ps nginx_label_test
ID                  NAME                     IMAGE               NODE                DESIRED STATE       CURRENT STATE                 ERROR               PORTS
kbmazt0g9hho        nginx_label_test.1       nginx:latest        swarm-node-1        Running             Running about a minute ago                        
6ddr76j77mqf         \_ nginx_label_test.1   nginx:latest        swarm-node-2        Shutdown            Shutdown about a minute ago                       
bkqlw9rkgypb        nginx_label_test.2       nginx:latest        swarm-node-1        Running             Running 53 seconds ago                            
xbwzhb6etgpv         \_ nginx_label_test.2   nginx:latest        swarm-node-2        Shutdown            Shutdown 55 seconds ago 
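If an update like this misbehaves, the previous service definition can be restored; a sketch using the built-in rollback (run on a manager node):

```shell
# Roll the service back to its previous definition.
docker service update --rollback nginx_label_test

# Equivalent dedicated subcommand on recent releases.
docker service rollback nginx_label_test
```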

Remove service

Display services.

deploy@swarm-node-1:~# docker service ls
ID                  NAME                     MODE                REPLICAS            IMAGE               PORTS
65isd7cnsxpl        nginx_label_test         replicated          2/2                 nginx:latest        *:8012->80/tcp
nku9lq34rb2y        nginx_replication_test   replicated          2/2                 nginx:latest        *:8011->80/tcp
47kr8jbpf66i        nginx_test               replicated          4/4                 nginx:latest        *:8010->80/tcp

Remove service.

deploy@swarm-node-1:~# docker service rm nginx_test
nginx_test

Display services again.

deploy@swarm-node-1:~# docker service ls
ID                  NAME                     MODE                REPLICAS            IMAGE               PORTS
65isd7cnsxpl        nginx_label_test         replicated          2/2                 nginx:latest        *:8012->80/tcp
nku9lq34rb2y        nginx_replication_test   replicated          2/2                 nginx:latest        *:8011->80/tcp

Remove old Docker images

It is vital to periodically remove unused Docker images from each Docker Swarm node.

deploy@swarm-node-1:~# docker image prune --filter "until=72h" --force --all
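To keep this from becoming a manual chore, the prune can be scheduled on every node; a sketch of a crontab entry (the schedule and binary path are assumptions, adjust to your setup):

```shell
# Run the prune daily at 03:00 on each node
# (add with `crontab -e` as a user in the docker group).
0 3 * * * /usr/bin/docker image prune --filter "until=72h" --force --all
```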

Additional notes

This introductory blog post is already too long, so I have skipped node promotion and stack management.

Docker Swarm is enjoyable to use, and I really mean it!