1st July 2024
Updated on 7th July 2024 - added a description of the Docker Swarm routing mesh, which really should have been included in version 1 of this post.
Updated on 23rd May 2025 - I have come to realize that Docker Swarm does not support IPv6. I have written some details about that below.
Docker Swarm is a container orchestration tool that creates an overlay network for Docker nodes to communicate across. The benefit of Docker Swarm is that it works well for small and medium-sized container networks. For large and complex environments, Kubernetes may be more practical.
This post is not about how to set up a Docker swarm; you can read about that in Docker’s documentation, or watch Trevor Sullivan’s videos about it over on CBT Nuggets. This post is just about how the overlay network works and how to customize it.
Docker Swarm Design
I don’t have a big enough infrastructure (yet) to have any use for Docker Swarm. But one could imagine how a scalable Docker swarm network might look:
Explanation:
The nodes (servers/VMs running the Docker daemon) are connected to different overlays, depending on which network they belong to (Internal Services, Management or Public Services).
The nodes can communicate directly with each other through the overlay network (assuming the underlying infrastructure is set up correctly).
The containers get IP addresses from their local subnet scopes. They don’t have to share the same subnet; see the sketch below.
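To make the design concrete, each tier could be created as its own overlay network from a manager node. This is a minimal sketch; the network names and subnets are hypothetical, not taken from my actual setup:

docker network create --driver overlay --subnet 10.10.10.0/24 internal-services
docker network create --driver overlay --subnet 10.10.20.0/24 management
docker network create --driver overlay --subnet 10.10.30.0/24 public-services

Services would then be attached to the appropriate overlay with the --network flag of docker service create.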
Update: The above design doesn’t work because I’m using IPv6 on the containers. While you may be able to reach the containers’ IPv6 addresses directly, port forwarding doesn’t seem to work on my current version of Docker (v28.1.1).
UPDATE: The Docker Swarm Routing Mesh
Source: https://docs.docker.com/engine/swarm/ingress/
All nodes participate in an ingress routing mesh. The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node.
Example: Webserver running 3 replicas on the Management Swarm
This is a visual explanation of how the following configuration works:
$ docker service create \
--name nginx \
--publish published=8080,target=80 \
--replicas 3 \
nginx
The Swarm ingress network will automatically load balance incoming connections towards the nginx service. Whether the connection lands on a worker node or a manager node doesn’t matter.
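In other words, any node’s address works as an entry point. A quick way to see this in practice, assuming the service above is running (the hostname is hypothetical):

$ curl http://node1.example.com:8080

The request succeeds even if node1 happens to run none of the three nginx replicas; the routing mesh forwards it to a node that does.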
Custom Swarm Network Configuration
Note: IPv6 is not supported on the ingress network. Read my rant about it in the appendix.
Source: https://docs.docker.com/engine/swarm/networking/
Initializing a Docker swarm is not hard to do if you don’t care about the network. If you do, however…
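For reference, the no-frills initialization is a one-liner (the address is a placeholder for the node’s own IP):

$ docker swarm init --advertise-addr 192.0.2.10

Everything below is about what to do when the default networks this creates aren’t good enough.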
Manually create the docker_gwbridge network
It’s recommended to do this before initializing a swarm. This bridge interface is used to bridge traffic from the overlay networks to the containers. The network is created automatically when initializing a swarm, but not with IPv6 enabled.
docker network create \
--ipv6 \
--subnet 172.20.255.0/24 \
--subnet 2001:DB8:0:A01F::/64 \
--gateway 172.20.255.1 \
--gateway 2001:DB8:0:A01F::1 \
--opt com.docker.network.bridge.name=docker_gwbridge \
--opt com.docker.network.bridge.enable_icc=false \
--opt com.docker.network.bridge.enable_ip_masquerade=true \
docker_gwbridge
Verification:
$ docker network inspect docker_gwbridge
[
{
"Name": "docker_gwbridge",
"Id": "7e51b0556cadecc8744c7bc72bc8648a106b0f3af7fe61e18ba28b22ce7d3bb9",
"Created": "2024-06-16T10:21:35.927947544Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": true,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.20.255.0/24",
"Gateway": "172.20.255.1"
},
{
"Subnet": "2001:DB8:0:A01F::/64",
"Gateway": "2001:DB8:0:A01F::1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.enable_icc": "false",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.name": "docker_gwbridge"
},
"Labels": {}
}
]
Edit the ingress network
When initializing a swarm, the ingress network, which is a special overlay network for Docker swarm clusters, will be created. By default, IPv6 won’t be enabled on this network, and there are no flags to add IPv6 support when initializing it.
This network can’t be edited in place, and it doesn’t exist until a swarm has been initialized. Therefore, you have to initialize the swarm first, then delete the ingress network and re-create it.
docker network rm ingress
docker network create \
--driver overlay \
--ingress \
--subnet 172.20.0.0/24 \
--subnet FC01:A::/80 \
--gateway FC01:A::1 \
ingress
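To quickly confirm the new subnets took effect, the inspect output can be filtered with a Go template instead of reading the full JSON:

$ docker network inspect ingress --format '{{json .IPAM.Config}}'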
Note: You can set IPv6 addresses on the ingress network, but you can’t enable IPv6 forwarding with the --ipv6 flag. The ingress network will stop working if you do.
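To be explicit, this is the variant that broke the ingress network in my testing; shown only as a warning, don’t run it:

docker network create \
  --driver overlay \
  --ingress \
  --ipv6 \
  --subnet FC01:A::/80 \
  --gateway FC01:A::1 \
  ingress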
Update: In more recent versions of Docker, at least the one I use, Docker swarm seems to fail completely even when only specifying an IPv6 subnet; the ingress network is not created on the worker nodes and services won’t be able to deploy.
Appendix
Rant about Docker Swarms and IPv6
This is how this post series started. Here I have documented my experience with Docker swarms and IPv6 when I was setting one up for the first time:
Sometimes with IPv6, I feel like I have too many choices. I’m sitting here trying to figure out what kind of overlay prefix I’m going to set for my docker swarm cluster. It could be almost anything, as long as it’s not conflicting with my other networks. I can even reuse it on other overlay networks, as it will only be used inside the swarm. What good could it do to have different scopes on different overlay networks? The only benefit I can think of is that it might cause a little less confusion when you are inspecting the nodes.
I tend to always overthink details like this. It is the same thing with VLAN IDs. For me, logical identifiers must always have a meaning behind them. I could just take one IPv6 prefix at random and be done with it. But no, it has to be something special.
It could either be a global scope or a ULA prefix. The problem with ULA is that IPv4 will have higher priority than the IPv6 scope. But then… do I need IPv4? I guess not, it’s an overlay after all. Even if the underlying infrastructure were running IPv4 only (it isn’t), it would not matter.
I ended up deciding on IPv6 only with ULA scopes, just because the prefix can be shortened and can encode information like the site ID and which domain the swarm belongs to.
But then I found out that Docker creates an IPv4 scope automatically on the docker_gwbridge network! Worse yet: it is not possible to turn off IPv4 in Docker networks, unless you turn it off completely in the host OS!
A workaround was to increase the priority for ULA addresses (FC00::/7) in the host OS. On Linux:
sudo nano /etc/gai.conf
...
# Add another rule to the RFC 3484 precedence table. See section 2.1
# and 10.3 in RFC 3484. The default is:
#
#precedence ::1/128 50
#precedence ::/0 40
#precedence 2002::/16 30
precedence fc00::/7 25
#precedence ::/96 20
#precedence ::ffff:0:0/96 10
#
...
Note: IPv4 networks (the ::ffff:0:0/96 entry above) have a precedence of 10.
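To verify the precedence change took effect, address ordering can be checked from the shell; getent uses getaddrinfo, so the ULA address should now be listed before the IPv4 one (the hostname is hypothetical and would need both A and AAAA records):

$ getent ahosts swarm-node1.internal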
But even after all that, it still doesn’t work! When IPv6 is enabled (the --ipv6 flag), no IP addresses get assigned to nodes inside the swarm. Then came a bummer of a realization: IPv6 is not yet supported on Docker swarm clusters, GAAH!
Well, at least everything is ready for the day when it is supported. Although I won’t hold my breath, since this issue was raised in 2016 and is still not resolved.
wl@sauna-nms:~$ docker network inspect ingress
[
{
"Name": "ingress",
"Id": "6chsvun2ym2afjirncpy9fcgg",
"Created": "2024-06-10T18:10:44.405647865Z",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.20.0.0/24",
"Gateway": "172.16.0.1"
},
{
"Subnet": "FC01:A::/80",
"Gateway": "FC01:A::1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": true,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"ingress-sbox": {
"Name": "ingress-endpoint",
"EndpointID": "a3dd1f1ac316f58f94a043e4cbbf6d376e7832932df858e45520aab6d922ef76",
"MacAddress": "02:42:c0:a8:0a:02",
"IPv4Address": "172.20.0.2/24",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.driver.overlay.vxlanid_list": "4096,4097"
},
"Labels": {},
"Peers": [
{
"Name": "42d5a123f1cb",
"IP": "10.10.1.14"
}
]
}
]
A good thing, however, is that containers created with the docker service command still get IPv6 addresses from the default bridge interface:
wl@sauna-nms:/var/log$ docker container inspect 2cc3ea0 | grep IPv6
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPv6Addresses": null,
"GlobalIPv6Address": "2001:464f:6f83:a018:0:242:ac11:2",
"GlobalIPv6PrefixLen": 64,
"IPv6Gateway": "2001:464f:6f83:a018::1",
"IPv6Gateway": "2001:464f:6f83:a018::1",
"GlobalIPv6Address": "2001:464f:6f83:a018:0:242:ac11:2",
"GlobalIPv6PrefixLen": 64,
That means it is only node-to-node communication in the overlay network that is IPv4 only.
Update: Or so I thought. When I actually tested reaching the containers over IPv6 with a curl command, I only got a response over IPv4.
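The test was as simple as forcing curl to each address family in turn (hostname and port are placeholders for my setup):

$ curl -4 http://swarm-node1.example.com:8080   # responds
$ curl -6 http://swarm-node1.example.com:8080   # times out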
Special Mention
Thank you, Trevor Sullivan at CBT Nuggets, for your videos on Docker Swarm. All your Docker-related videos are a good starting point.