Building Containers from Scratch (Part 2)
Network Isolation with Namespaces
1 Introduction
In Part 1, we built a filesystem-isolated container using chroot.
But our container still shares the host's network stack. In practice, that means two containers can't both listen on port 80 without conflicting, the container can see host interfaces it has no business with, and nothing stops it from binding to a port the host is already using.
Real isolation means the container should have no idea the host's network even exists. That's what network namespaces give us.
2 What We're Building
Before diving in, here's the plan.
- Create an isolated network namespace — a completely separate network stack living alongside the host's.
- Wire the two together using a virtual ethernet pair (veth), which acts like a network cable connecting two isolated worlds.
- Combine the network namespace with chroot to get a process with both filesystem isolation and network isolation at the same time.
3 Understanding Network Namespaces
A network namespace creates a completely isolated network stack. Each namespace gets its own interfaces, IP addresses, and routing tables.
In practice: a process inside a namespace can only see and interact with the interfaces
that live in its namespace. No view of the host's eth0, no access to the host's routing
decisions — completely independent. Two processes in different namespaces can both bind
to port 80 without conflict.
Each namespace also starts with its own lo (loopback) interface, and can have virtual
interfaces connected to other namespaces — which is exactly how we'll wire the container
to the host.
How the pieces fit together
A veth pair is the glue. Think of it as a virtual network cable: two ends, and whatever you send into one end comes out the other. We put one end in the host namespace and one end in the container namespace — that's our communication channel between the two isolated worlds.
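Here's the topology we're about to build, in a picture (the interface names and addresses are the ones used in the steps below):

```
 host namespace                    container_net namespace
 +--------------------+           +------------------------+
 | eth0, lo, ...      |           | lo                     |
 |                    |           |                        |
 |  veth-host         |   veth    |  veth-container        |
 |  192.168.10.1/24 <-+---pair----+-> 192.168.10.2/24      |
 +--------------------+           +------------------------+
```

Anything sent into veth-host comes out of veth-container, and vice versa.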
4 Building a Networked Container
Prerequisites
Make sure you have the my_container directory from Part 1.
Step 1: Create a Network Namespace
sudo ip netns add container_net
Verify the namespace was created:
ip netns list
container_net
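Under the hood, ip netns add doesn't start any process. It creates a named handle for the namespace (a bind mount, per iproute2 convention, commonly under /run/netns) that keeps the namespace alive even when nothing is running inside it:

```shell
ls /run/netns/
# container_net
```

Deleting the namespace later (ip netns del) removes this handle.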
Step 2: Create a Virtual Ethernet Pair
Now we create the virtual cable. Both ends land in the host namespace for now.
sudo ip link add veth-host type veth peer name veth-container
Verify both interfaces exist:
ip link show | grep veth
6: veth-container@veth-host: <BROADCAST,MULTICAST,M-DOWN> mtu 1500
7: veth-host@veth-container: <BROADCAST,MULTICAST,M-DOWN> mtu 1500
Both are in the host namespace and currently down.
Step 3: Move One End into the Container Namespace
sudo ip link set veth-container netns container_net
Verify veth-container is gone from the host:
ip link show | grep veth
7: veth-host@if6: <BROADCAST,MULTICAST> mtu 1500
And verify it exists inside the namespace:
sudo ip netns exec container_net ip link show
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN
6: veth-container@if7: <BROADCAST,MULTICAST> mtu 1500
The namespace now has two interfaces: lo (loopback) and veth-container. Both are
currently down.
You can run commands inside a network namespace with ip netns exec <namespace> <command>.
We'll use this heavily when combining the namespace with chroot.
Step 4: Configure the Host Side
Assign an IP address and bring the interface up:
sudo ip addr add 192.168.10.1/24 dev veth-host
sudo ip link set veth-host up
Verify:
ip addr show veth-host
7: veth-host@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
inet 192.168.10.1/24 scope global veth-host
Step 5: Configure the Container Side
First, bring up the loopback interface:
sudo ip netns exec container_net ip link set lo up
Then configure the container's interface:
sudo ip netns exec container_net ip addr add 192.168.10.2/24 dev veth-container
sudo ip netns exec container_net ip link set veth-container up
Verify:
sudo ip netns exec container_net ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
inet 127.0.0.1/8 scope host lo
6: veth-container@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
inet 192.168.10.2/24 scope global veth-container
Step 6: Test Network Connectivity
From the host, ping the container:
ping -c 3 192.168.10.2
PING 192.168.10.2 (192.168.10.2) 56(84) bytes of data.
64 bytes from 192.168.10.2: icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from 192.168.10.2: icmp_seq=2 ttl=64 time=0.042 ms
64 bytes from 192.168.10.2: icmp_seq=3 ttl=64 time=0.039 ms
From the container namespace, ping the host:
sudo ip netns exec container_net ping -c 3 192.168.10.1
PING 192.168.10.1 (192.168.10.1) 56(84) bytes of data.
64 bytes from 192.168.10.1: icmp_seq=1 ttl=64 time=0.041 ms
64 bytes from 192.168.10.1: icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from 192.168.10.1: icmp_seq=3 ttl=64 time=0.037 ms
The two namespaces can talk to each other.[1]
5 Running a Service Inside the Container
So far, the network namespace and the chroot environment have been two separate things. Now we bring them together.
The command below drops into a shell that is simultaneously locked into my_container's
filesystem and placed inside the container_net network namespace. Filesystem isolation
and network isolation, at the same time — and what you get starts looking a lot like an
actual container.
sudo ip netns exec container_net chroot my_container /bin/bash
You're now inside a process that can't see the host filesystem and can't see the host network.
Let's simulate an HTTP server running on port 8080:
echo -e "HTTP/1.1 200 OK\r\n\r\nHello from the Container!" | nc -l -p 8080
The server is listening on port 8080 — but only inside the container's network namespace.
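One assumption baked into this step: nc actually exists inside my_container. If it doesn't, copy it in from the host (before entering the chroot), along with the shared libraries it links against, the same way the Part 1 binaries went in. The paths below are illustrative; ldd tells you the real ones, and --parents is a GNU cp flag:

```shell
sudo mkdir -p my_container/usr/bin
sudo cp /usr/bin/nc my_container/usr/bin/

# copy every shared library nc links against, preserving directory structure
for lib in $(ldd /usr/bin/nc | grep -o '/[^ )]*'); do
  sudo cp --parents "$lib" my_container/
done
```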
Verify Isolation
From the host, check listening ports:
ss -tlnp | grep 8080
(no output)
Port 8080 is not visible on the host's network stack — it only exists inside the container namespace.[2]
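And the flip side: the same check from inside the namespace does show the listener. Run this while the nc server is still up (exact ss output columns vary by version):

```shell
sudo ip netns exec container_net ss -tln | grep 8080
```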
Test from the Host
In another terminal:
curl http://192.168.10.2:8080
Hello from the Container!
And voilà.
Connecting to the Outside World (optional)
The container can talk to the host, but it's blind to the internet. To fix that, we need two things: a default route so the container knows where to send traffic, and masquerading[3] on the host so packets can leave with a routable source address.
Add a default route inside the container namespace, pointing to the host:
sudo ip netns exec container_net ip route add default via 192.168.10.1 dev veth-container
Verify the routing table:
sudo ip netns exec container_net ip route show
default via 192.168.10.1 dev veth-container
192.168.10.0/24 dev veth-container proto kernel scope link src 192.168.10.2
Before masquerading can work, the host needs to forward packets between interfaces:
sudo sysctl -w net.ipv4.ip_forward=1
Enable masquerading[3] on the host so outgoing packets get a valid source IP:
sudo firewall-cmd --add-masquerade
success
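firewall-cmd assumes a firewalld-based distro (Fedora, RHEL and friends). If your host doesn't run firewalld, the plain iptables equivalent — assuming the same 192.168.10.0/24 subnet as above — is:

```shell
sudo iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -j MASQUERADE
```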
Ping 8.8.8.8 from inside the container:
sudo ip netns exec container_net ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=113 time=14.2 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=113 time=13.8 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=113 time=14.1 ms

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 13.8/14.0/14.2/0.183 ms
We're online. Now try a domain name:
sudo ip netns exec container_net chroot my_container ping -c 3 google.com
ping: google.com: Temporary failure in name resolution
Makes sense — the namespace has no DNS configured. A chrooted process reads its DNS settings from my_container/etc/resolv.conf, which is empty or missing, so the resolver has nowhere to ask. Fix it from the host:
sudo mkdir -p my_container/etc
echo "nameserver 8.8.8.8" | sudo tee my_container/etc/resolv.conf
Now try again:
sudo ip netns exec container_net chroot my_container /bin/bash
Then, inside the new shell:
ping -c 3 google.com
PING google.com (142.250.185.46) 56(84) bytes of data.
64 bytes from lga34s32-in-f14.1e100.net (142.250.185.46): icmp_seq=1 ttl=113 time=15.1 ms
64 bytes from lga34s32-in-f14.1e100.net (142.250.185.46): icmp_seq=2 ttl=113 time=14.7 ms
64 bytes from lga34s32-in-f14.1e100.net (142.250.185.46): icmp_seq=3 ttl=113 time=14.9 ms

--- google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 14.7/14.9/15.1/0.163 ms
Fully connected.[4]
6 What We Achieved
We started with a container that isolated the filesystem but left the network completely exposed. Now it has its own interfaces, its own IP address, its own routing table, and isolated ports — and it can reach the internet.
But the container can still consume as much CPU and memory as it wants, and can bring the host to its knees. That's what Part 3 will be about.
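For convenience, here is the whole setup from this part collected into one script. Treat it as a sketch rather than a hardened tool: it assumes the names and addresses used above (container_net, veth-host, veth-container, 192.168.10.0/24) and a firewalld host, and it must run as root.

```shell
#!/bin/sh
set -eu

NS=container_net

# namespace + veth pair
ip netns add "$NS"
ip link add veth-host type veth peer name veth-container
ip link set veth-container netns "$NS"

# host side
ip addr add 192.168.10.1/24 dev veth-host
ip link set veth-host up

# container side
ip netns exec "$NS" ip link set lo up
ip netns exec "$NS" ip addr add 192.168.10.2/24 dev veth-container
ip netns exec "$NS" ip link set veth-container up

# outbound connectivity
ip netns exec "$NS" ip route add default via 192.168.10.1 dev veth-container
sysctl -w net.ipv4.ip_forward=1
firewall-cmd --add-masquerade

# teardown: deleting the namespace also destroys veth-container,
# and its peer veth-host disappears with it:
#   ip netns del container_net
```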
Footnotes:
[1] If ping fails, check that both interfaces are UP, IP addresses are correctly assigned, and the firewall isn't blocking ICMP.
[2] This is what makes multi-tenant environments possible. Containers can all listen on port 80 without conflict because each lives in its own network namespace.
[3] The container's source IP (192.168.10.2) is private — routers on the internet won't know how to reply to it. Masquerading rewrites that source IP to the host's public IP as packets leave, and translates replies back on the way in.
[4] Note that ip netns exec changes only the network namespace, not the filesystem root: a chrooted process reads its DNS configuration from my_container/etc/resolv.conf on the host filesystem, so that is the file that needs the nameserver line.