Containers isolate an application and its dependencies into
a self-contained unit that can run anywhere. Container-to-container and
container-to-host connectivity is best explained by a breakdown of the current
container networking types:
- None
- Bridge
- Host
- Overlay
- Underlay
None:
None is straightforward: the container receives a network stack but lacks an
external network interface. It does, however, receive a loopback interface.
Both the rkt and Docker container projects provide similar behavior when none
or null networking is used. This mode of container networking has a number of
uses, including testing containers, staging a container for a later network
connection, and assignment to containers that have no need for external
communication.
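A quick way to see this mode in action (a sketch assuming the alpine image is available locally): listing interfaces inside a none-networked container shows only loopback.
# Only the loopback interface should appear in the output.
$ docker run --rm --network none alpine ip addr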
Bridge:
A Linux bridge provides a host internal network in which
containers on the same host may communicate, but the IP addresses assigned to
each container are not accessible from outside the host.
Bridge networking leverages iptables for NAT and port-mapping, which provide
single-host networking. Bridge networking is the default Docker network
type (i.e., docker0), where one end of a virtual Ethernet (veth) pair is
attached to the bridge and the other end is placed inside the container.
Here’s an example of the creation flow (a manual sketch of these steps follows
the list):
1. A bridge is provisioned on the host.
2. A network namespace is provisioned for each container.
3. Each container’s eth0 is one end of a veth pair whose other end is
attached to the private bridge.
4. iptables with NAT is used to map between each private container
address and the host’s public interface.
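A rough manual approximation of what Docker automates, using iproute2 (the names br0, c1, veth0/veth1 and the 172.18.0.0/24 subnet are illustrative, not Docker’s actual choices):
# 1. Provision a bridge on the host.
$ sudo ip link add br0 type bridge
$ sudo ip addr add 172.18.0.1/24 dev br0
$ sudo ip link set br0 up
# 2. Create a network namespace for the "container".
$ sudo ip netns add c1
# 3. Create a veth pair; one end joins the bridge, the other
#    becomes eth0 inside the namespace.
$ sudo ip link add veth0 type veth peer name veth1
$ sudo ip link set veth0 master br0
$ sudo ip link set veth0 up
$ sudo ip link set veth1 netns c1
$ sudo ip netns exec c1 ip link set veth1 name eth0
$ sudo ip netns exec c1 ip addr add 172.18.0.2/24 dev eth0
$ sudo ip netns exec c1 ip link set eth0 up
# 4. NAT traffic from the private subnet out the host's public interface.
$ sudo iptables -t nat -A POSTROUTING -s 172.18.0.0/24 ! -o br0 -j MASQUERADE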
NAT is used to provide communication beyond the host. While bridged networks solve port-conflict problems and provide network isolation to containers running on one host, there’s a performance cost related to using NAT.
By default, containers can make connections to the outside world, but the
outside world cannot connect to containers unless ports are explicitly published.
Each outgoing connection will appear to originate from one of the host
machine’s own IP addresses thanks to an iptables masquerading rule on
the host machine that the Docker server creates when it starts:
$ sudo iptables -t nat -L -n
...
Chain POSTROUTING (policy ACCEPT)
target      prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0
...
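Publishing a port makes a container reachable from outside; Docker then inserts a DNAT rule into the DOCKER chain of the nat table. A sketch (the container name, image and port numbers are illustrative):
# Publish container port 80 on host port 8080.
$ docker run -d --name web -p 8080:80 nginx
# Inspect the DNAT rule Docker added for the published port.
$ sudo iptables -t nat -L DOCKER -n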
Host:
In this approach, a newly created container shares its
network namespace with the host, providing higher performance (near bare-metal
speed) and eliminating the need for NAT; however, it is susceptible to port
conflicts. While the container has access to all of the host’s network
interfaces, it may not reconfigure the host’s network stack unless deployed
in privileged mode.
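The sharing is easy to observe (a sketch assuming the alpine image): the interface list inside the container matches the host’s.
# The container sees the host's interfaces rather than a private stack.
$ docker run --rm --network host alpine ip addr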
Overlay:
Overlays use networking tunnels to deliver communication
across hosts. This allows containers to behave as if they are on the same
machine by tunneling network subnets from one host to the next; in essence,
spanning one network across multiple hosts. Many tunneling technologies exist,
such as virtual extensible local area network (VXLAN).
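To illustrate the underlying mechanism, a VXLAN tunnel endpoint can be created by hand with iproute2 (the VNI 42, parent device eth0, peer address 192.0.2.2 and 10.0.0.0/24 subnet are all illustrative):
# Create a VXLAN interface tunneling over eth0 to a peer host.
$ sudo ip link add vxlan0 type vxlan id 42 dev eth0 remote 192.0.2.2 dstport 4789
$ sudo ip addr add 10.0.0.1/24 dev vxlan0
$ sudo ip link set vxlan0 up
Docker’s own overlay driver (docker network create -d overlay, available in swarm mode) builds on the same VXLAN mechanism.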
Underlays:
Underlay network drivers expose host interfaces (i.e., the
physical network interface at eth0) directly to containers or VMs running on
the host. Two such underlay drivers are media access control virtual local area
network (MACvlan) and internet protocol VLAN (IPvlan). The operation and
behavior of MACvlan and IPvlan drivers are familiar to network engineers.
Both network drivers are conceptually simpler than bridge networking, remove
the need for port-mapping, and are more efficient.
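A MACvlan network in Docker attaches containers directly to a host interface; a sketch (the subnet, gateway and parent interface are illustrative and must match the local physical network):
# Create a MACvlan network bound to the host's eth0.
$ docker network create -d macvlan \
    --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    -o parent=eth0 macnet
# Containers on this network get addresses on the physical LAN,
# reachable without NAT or port-mapping.
$ docker run --rm --network macnet alpine ip addr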
References:
Virtual Machines, Containers and Docker - https://medium.freecodecamp.org/a-beginner-friendly-introduction-to-containers-vms-and-docker-79a9e3e119b
Container Networking -
- https://jvns.ca/blog/2016/12/22/container-networking/
- https://thenewstack.io/container-networking-breakdown-explanation-analysis/
Understanding communication between Docker containers - https://cloudkul.com/blog/understanding-communication-docker-containers/
Understand Container Communication - https://docs.docker.com/v17.09/engine/userguide/networking/default_network/container-communication/
Container Networking deep dive - http://events17.linuxfoundation.org/sites/events/files/slides/Container%20Networking%20Deep%20Dive.pdf
Docker and iptables - https://medium.com/@ebuschini/iptables-and-docker-95e2496f0b45
Connect to a Database Running on Your Docker Host - https://nickjanetakis.com/blog/docker-tip-35-connect-to-a-database-running-on-your-docker-host
Container Networking Interface (CNI) -
- https://github.com/containernetworking/cni
- https://medium.com/@vikram.fugro/container-networking-interface-aka-cni-bdfe23f865cf
Container Networking standards: CNI vs CNM - https://www.nuagenetworks.net/blog/container-networking-standards/