
Single-host bridge networks

1. Introduction

The simplest type of Docker network is the single-host bridge network.

  • A single-host network refers to a setup that operates on just one Docker host, meaning it can only link containers running on that same host.
  • The term “bridge” indicates that it is an implementation of an 802.1D bridge, essentially acting as a Layer 2 switch.

On Linux, Docker uses the built-in bridge driver to create single-host bridge networks, while on Windows, it utilizes the built-in NAT driver. Despite the difference in drivers, they function the same for practical purposes.

Each Docker host is provided with a default single-host bridge network. On Linux, this network is named “bridge,” while on Windows it’s called “nat.” By default, any new container will connect to these networks unless specified otherwise using the --network flag in the command line.
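For example, a container started without the --network flag lands on the default network. You can confirm this with docker inspect (the container name defaulttest below is purely illustrative):

$ docker run -d --name defaulttest alpine sleep 1d
$ docker inspect --format '{{json .NetworkSettings.Networks}}' defaulttest

On a Linux host the output contains a single entry keyed by “bridge”; with Windows containers it is keyed by “nat”.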

2. Docker networking in detail

The following shows the output of the docker network ls and docker inspect bridge commands on a Docker host.

C:\>docker network ls
NETWORK ID     NAME                      DRIVER    SCOPE
ca3117419a70   bridge                    bridge    local
C:\>docker inspect bridge
[
    {
        "Name": "bridge",
        "Id": "ca3117419a70e3ec0b0.....245bda9f2d9efbece08d352c5ccd",
        "Created": "2024-10-22T17:30:09.8494153Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "87268e06d53453a50d17033c7412c40f5fa8313b7df660c9ca8e580b8a6d9c82": {
                "Name": "alwayspolicycontainer",
                "EndpointID": "4e220bc68cf5fa2f1a2816bb9772c4b9db0b491d9e6884d86da0e01aa0f3744a",
                "MacAddress": "02:40:ac:12:00:03",
                "IPv4Address": "192.15.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

The default “bridge” network, on all Linux-based Docker hosts, maps to an underlying Linux bridge in the kernel called “docker0”. We can see this from the output of docker inspect.

$ docker inspect bridge | grep bridge.name
"com.docker.network.bridge.name": "docker0",


C:\>docker network create -d bridge mynet
4f2d7c2db7b3a4b91831ef63968bdd9ebf3b8af39139440adfd67275ec6ab58d

C:\>docker network ls
NETWORK ID     NAME                      DRIVER    SCOPE
ca3117419a70   bridge                    bridge    local
550a0334cc4b   host                      host      local
4f2d7c2db7b3   mynet                     bridge    local
7b217d0cdd69   none                      null      local

Creating a user-defined bridge network also creates a new Linux bridge in the kernel on the Docker host.
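On a Linux host, the new kernel bridge can be listed alongside docker0. Docker typically names the bridge for a user-defined network br-<first 12 characters of the network ID>, so for the mynet network created above the name would resemble br-4f2d7c2db7b3 (your ID will differ):

$ ip link show type bridge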

Let’s create a new container and attach it to the new mynet bridge network.

C:\>docker run -d --name container1 --network mynet alpine sleep 1d
f39883aac957d4fe006a78514408b7918793445b6c7f9b040adff9b4bee3f51e


C:\>docker inspect mynet
[
   ......................
        "Containers": {
            "f39883aac957d4fe006a78514408b7918793445b6c7f9b040adff9b4bee3f51e": {
                "Name": "container1",
                "EndpointID": "ffb45df9c6ba8f08d9418b65a7588f6894644917fb3760bb711428cb446d69d4",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

The output shows that the new “container1” container is attached to the mynet bridge network.

When a new container is added to the same network, it can ping the “container1” container by name. This is possible because all containers automatically register with Docker’s built-in DNS service, enabling them to resolve the names of other containers on the same network.

The default bridge network, named “bridge,” does not support name resolution via Docker’s DNS service. However, all user-defined bridge networks do. In the following demo, the container will be able to resolve names because it is on the user-defined network, “mynet.”

This works because the “container2” container runs a local DNS resolver that forwards requests to Docker’s internal DNS server. This server keeps mappings for all containers started with the --name or --network-alias flags, allowing containers to resolve each other by name.

C:\>docker run -it --name container2 --network mynet alpine sh
/ # ping container1
PING container1 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=1.138 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.158 ms
64 bytes from 172.18.0.2: seq=2 ttl=64 time=0.132 ms

Service discovery allows all containers and Swarm services to locate each other by name. The only requirement is that they be on the same network. Under the hood, this leverages Docker’s embedded DNS server and the DNS resolver in each container.

Let’s break down the process:

  • Step 1: The ping container1 command prompts the local DNS resolver to translate the hostname “container1” into an IP address. Every Docker container includes a built-in local DNS resolver.
  • Step 2: If the local resolver doesn’t already have the IP address for “container1” cached, it starts a recursive query to Docker’s DNS server. The local resolver is configured to know how to reach Docker’s DNS server (see the resolv.conf check after this list).
  • Step 3: The Docker DNS server maintains a record of name-to-IP mappings for containers that were created with the --name or --network-alias options, so it knows the IP address of the container “container1”.
  • Step 4: The DNS server sends the IP address of “container1” back to the local resolver in the “container2” container. This works because both containers are on the same network. If they were on separate networks, this wouldn’t be possible.
  • Step 5: With the IP address in hand, the ping command sends ICMP echo request packets to the IP address of “container1”.
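As referenced in Step 2, the resolver inside a container on a user-defined network points at Docker’s embedded DNS server, which listens on 127.0.0.11 inside the container’s network namespace. You can check this from the running demo container (output trimmed; additional search and options lines may appear):

$ docker exec container2 cat /etc/resolv.conf
nameserver 127.0.0.11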

3. Port mappings

Port mappings let you map a port inside a container to a port on the Docker host. Any traffic arriving at the host on that port is forwarded to the container.

  1. Run a new NGINX web server container and map port 80 inside the container to port 5001 on the Docker host.
C:\>docker run -d --name web --network mynet --publish 5001:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
a480a496ba95: Pull complete
f3ace1b8ce45: Pull complete
11d6fdd0e8a7: Pull complete
f1091da6fd5c: Pull complete
40eea07b53d8: Pull complete
6476794e50f4: Pull complete
70850b3ec6b2: Pull complete
Digest: sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb
Status: Downloaded newer image for nginx:latest
8c4ba2ed9a5e4b49487875c7870aab7ec0323b04bce01d52789c35d2d2917128

  2. Verify the port mapping.

C:\>docker port web
80/tcp -> 0.0.0.0:5001

To verify the configuration, open a web browser and navigate to port 5001 on your Docker host. You will need the IP address or DNS name of the Docker host for this. If you’re using Docker Desktop, you can simply enter localhost:5001 or 127.0.0.1:5001 in the browser.
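The same check can be made from a terminal with curl, assuming curl is installed and you are on the Docker host (or using Docker Desktop). The response is the default NGINX welcome page:

C:\>curl http://localhost:5001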

Any external system can now access the NGINX container (running on the mynet bridge network) by hitting the Docker host on port 5001.

Note: Only one container can bind to any particular port on the host.
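For example, trying to publish a second container on the same host port fails. The container name web2 is only illustrative, and the exact error text can vary between Docker versions:

C:\>docker run -d --name web2 --publish 5001:80 nginx
docker: Error response from daemon: ... Bind for 0.0.0.0:5001 failed: port is already allocated.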

4. Overlay networks

Overlay networks are a crucial feature for enabling multi-host container communication in Docker environments, particularly within a Docker Swarm cluster. These networks allow containers running on different physical or virtual machines (nodes) to seamlessly communicate with each other as if they were part of the same local network. This makes overlay networks ideal for distributed applications where services, spread across multiple hosts, need reliable and efficient communication.

Docker includes a built-in driver specifically for overlay networks, making it easy to create them. You can simply use the -d overlay option when running the docker network create command.
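A minimal sketch, assuming the host is (or will become) a Swarm manager and using the illustrative network name overnet (docker swarm init may need an --advertise-addr flag on hosts with multiple IP addresses):

$ docker swarm init
$ docker network create -d overlay overnet
$ docker network ls --filter driver=overlay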

5. Container and service logs for troubleshooting

If you suspect connectivity problems between containers, it’s a good idea to review both the Docker daemon logs and the individual container logs for potential issues.

On Windows systems, the Docker daemon logs are located in AppData\Local\Docker, and you can view them through the Windows Event Viewer. On Linux, the log location depends on the init system in use. For systems using systemd, the logs are managed by journald and can be accessed with the following command:

journalctl -u docker.service

If you’re not using systemd, the log files are found in different locations:

  • Ubuntu systems running Upstart: /var/log/upstart/docker.log
  • RHEL-based systems: /var/log/messages
  • Debian: /var/log/daemon.log

The following snippet from the daemon.json file enables debugging and sets the log level to debug. This configuration works across all Docker platforms:

{
  "debug": true,
  "log-level": "debug"
}

With this setup, Docker will generate detailed logs, which can be helpful for diagnosing issues or monitoring container activity. If the daemon.json file isn’t present, go ahead and create it. After modifying the file, make sure to restart Docker to apply the changes.
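On a Linux host running systemd, for example, restarting the daemon and then following its logs looks like this (other platforms differ; Docker Desktop has a Restart option in its UI):

$ sudo systemctl restart docker
$ journalctl -u docker.service -f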

Logs from standalone containers can be viewed with the docker logs command. Every Docker host has a default logging driver and configuration for containers. Some of the drivers include:

  • json-file (default)
  • journald (only works on Linux hosts running systemd)
  • syslog
  • splunk
  • gelf

The following snippet from a daemon.json shows a Docker host configured to use syslog.

{
"log-driver": "syslog"
}

You can set a specific logging driver for an individual container or service with the --log-driver and --log-opt flags. These options take precedence over any configuration defined in the daemon.json file.
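For example, the following starts a container that uses the json-file driver with log-rotation options regardless of the daemon-wide default (max-size and max-file are standard json-file driver options; the container name logdemo is just an example):

$ docker run -d --name logdemo --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 nginx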

Container logging operates on the assumption that your application runs as PID 1 inside the container and sends its log output to STDOUT and its errors to STDERR. The configured logging driver then forwards these streams to the appropriate destination.

Here’s an example of running the docker logs command for a container named “container1” that is using the json-file logging driver:

$ docker logs container1
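The command also accepts flags that are handy during troubleshooting, such as following the log stream or limiting the output to the most recent lines:

$ docker logs --tail 50 container1
$ docker logs -f container1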