Using WireGuard For Specific Apps on Linux
If you want to send all the traffic of a few specific Linux applications or processes through a WireGuard tunnel, without affecting the traffic of any other programs running on the same host — a per-application “split tunnel” — the best way to do this is with Linux network namespaces. This article will show you how, covering two primary scenarios:
- Disable Selectively: In this scenario, we want to send almost all of our traffic through a WireGuard tunnel, but selectively disable it for one or two specific applications that we don’t want to use the tunnel.
- Enable Selectively: In this scenario, we want to avoid sending almost all of our traffic through a WireGuard tunnel, but selectively enable it for one or two specific applications.
You can combine these two scenarios if you wish to send almost all of your traffic through one WireGuard tunnel, while sending the traffic for one or two specific applications through a second WireGuard tunnel (and sending the traffic for one or two other applications through a third tunnel, and so on). When combining them, you don’t need to use the Disable Selectively technique at all if you want to send the traffic of every application through one of your WireGuard tunnels — you can simply set up one WireGuard interface to use by default as normal, and then use the Enable Selectively technique with the other WireGuard interfaces.
Note: It’s much easier to set up a conventional “split tunnel” with Linux simply by using static routes that match destination IP blocks — for example, see the WireGuard Point-to-Site Configuration, WireGuard With AWS Split DNS, or WireGuard Port Forwarding From the Internet guides. You only need network namespaces if you’re not able to easily segregate traffic by destination address (like because the destination addresses you want to access/avoid are dynamic, or overlapping).
Note: These techniques won’t work with Snapcraft-packaged applications, as snaps don’t play nicely with network namespaces.
Disable Selectively
In this scenario, we want to send almost all the traffic of Endpoint A through a WireGuard tunnel to Host β, but selectively disable it for one or two applications on Endpoint A that we don’t want to use the tunnel.
To accomplish this, we’ll first set up a basic WireGuard connection between Endpoint A and Host β. The WireGuard interface (wg0) on Endpoint A will route all IPv4 traffic (ie 0.0.0.0/0) by default to Host β. In this example, we’ll just have Host β masquerade all this traffic out to the public Internet; but you might instead want to use Host β as a gateway to some private internal network (like Site B in the diagram below).
On Endpoint A we’ll set up a custom network namespace called pvt-net1, in which we’ll run all the applications that we don’t want to use with the WireGuard tunnel. We’ll connect this custom pvt-net1 namespace to the root namespace (aka the default namespace or init namespace) by setting up a veth network interface. This interface will act as a point-to-point tunnel between the two namespaces. In the root namespace, the veth interface will have a name of to-pvt-net1; and in the pvt-net1 namespace, it will have a name of from-pvt-net1.
We’ll set the IP address on the to-pvt-net1 side of the veth interface to 10.99.99.4, and the IP address on its from-pvt-net1 side to 10.99.99.5. Within the pvt-net1 namespace, we’ll set up the default route to exit through from-pvt-net1, using the 10.99.99.4 address on the other side of the connection as its gateway out (aka the next hop). All packets that come out of the pvt-net1 namespace through this interface will use the 10.99.99.5 address as their source IP.
Within the root namespace, we’ll use a policy routing rule to avoid using the WireGuard interface (wg0) when routing packets with this source IP address of 10.99.99.5; instead, they’ll be routed directly out a physical Ethernet interface of the host (eth0), using Endpoint A’s main routing table as if the WireGuard interface were not up.
WireGuard Set-Up
First, we’ll set up WireGuard on Endpoint A and Host β with a basic point-to-site configuration — see the WireGuard Point-to-Site Configuration guide for details. The one thing we’ll do differently from that guide, however, is we’ll configure the AllowedIPs setting on Endpoint A to 0.0.0.0/0 — meaning that this WireGuard interface will be used by default for all traffic sent from Endpoint A (only traffic with specific route entries or policy rules will skip it).
This is how we’ll configure WireGuard on Endpoint A:
# /etc/wireguard/wg0.conf
# local settings for Endpoint A
[Interface]
PrivateKey = AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEE=
Address = 10.0.0.1/32
ListenPort = 51821
# remote settings for Host β
[Peer]
PublicKey = fE/wdxzl0klVp/IR8UcaoGUMjqaWi3jAd7KzHKFS6Ds=
Endpoint = 203.0.113.2:51822
AllowedIPs = 0.0.0.0/0
And this is how we’ll configure WireGuard on Host β:
# /etc/wireguard/wg0.conf
# local settings for Host β
[Interface]
PrivateKey = ABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBFA=
Address = 10.0.0.2/32
ListenPort = 51822
# IP forwarding
PreUp = sysctl -w net.ipv4.ip_forward=1
# IP masquerading
PreUp = iptables -t mangle -A PREROUTING -i wg0 -j MARK --set-mark 0x30
PreUp = iptables -t nat -A POSTROUTING ! -o wg0 -m mark --mark 0x30 -j MASQUERADE
PostDown = iptables -t mangle -D PREROUTING -i wg0 -j MARK --set-mark 0x30
PostDown = iptables -t nat -D POSTROUTING ! -o wg0 -m mark --mark 0x30 -j MASQUERADE
# remote settings for Endpoint A
[Peer]
PublicKey = /TOE4TKtAqVsePRVR+5AA43HkAK5DSntkOCO7nYq5xU=
AllowedIPs = 10.0.0.1/32
If we start up the WireGuard interfaces on both Endpoint A and Host β (by running sudo wg-quick up wg0), we should be able to access the Internet on Endpoint A through its WireGuard connection to Host β. For example, if we run the following command on Endpoint A, we should see the public IP address of Host β printed out:
$ curl https://ifconfig.me; echo
203.0.113.2
Namespace Set-Up
Now we’re ready to set up the custom pvt-net1 network namespace. Run the following command to do so:
$ sudo ip netns add pvt-net1
Next, start up the loopback interface in the pvt-net1 namespace:
$ sudo ip -n pvt-net1 link set lo up
Then create the veth interface pair: to-pvt-net1 in the root namespace, connected to from-pvt-net1 in the pvt-net1 namespace:
$ sudo ip link add to-pvt-net1 type veth peer name from-pvt-net1 netns pvt-net1
Set the IP address of the new to-pvt-net1 interface (the end of the connection in the root namespace) to 10.99.99.4, and start it up:
$ sudo ip address add 10.99.99.4/31 dev to-pvt-net1
$ sudo ip link set to-pvt-net1 up
Set the IP address of the from-pvt-net1 interface (the other end of the connection, in the pvt-net1 namespace) to 10.99.99.5, and start it up:
$ sudo ip -n pvt-net1 address add 10.99.99.5/31 dev from-pvt-net1
$ sudo ip -n pvt-net1 link set from-pvt-net1 up
At this stage, the direct point-to-point connection between the to-pvt-net1 interface in the root namespace and the from-pvt-net1 interface in the custom pvt-net1 namespace is up and operational. You can test this out by pinging the custom namespace from the root namespace:
$ ping -nc1 10.99.99.5
PING 10.99.99.5 (10.99.99.5) 56(84) bytes of data.
64 bytes from 10.99.99.5: icmp_seq=1 ttl=64 time=0.024 ms
--- 10.99.99.5 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
And the root namespace from the custom namespace:
$ sudo ip netns exec pvt-net1 ping -nc1 10.99.99.4
PING 10.99.99.4 (10.99.99.4) 56(84) bytes of data.
64 bytes from 10.99.99.4: icmp_seq=1 ttl=64 time=0.054 ms
--- 10.99.99.4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
However, the programs run in the custom pvt-net1 namespace won’t yet be able to access anything other than 10.99.99.4 (and if we’re running a restrictive firewall on Endpoint A, they might not even be able to access 10.99.99.4, either).
The first step to allowing programs run in the custom pvt-net1 namespace to reach beyond 10.99.99.4 is to set a default route in the namespace, using 10.99.99.4 as its gateway:
$ sudo ip -n pvt-net1 route add default via 10.99.99.4
Then run the following command to avoid using the WireGuard interface when routing any packets in the root namespace that originated from the custom pvt-net1 namespace:
$ sudo ip rule add from 10.99.99.5 table main priority 99
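To confirm the new rule takes effect, you can list the policy rules and ask the kernel which route it would pick for a packet arriving from the namespace (a quick sketch; 203.0.113.9 here stands in for any Internet destination):

```shell
# list policy rules; the new rule should appear at priority 99,
# ahead of the catch-all rules added by wg-quick
ip rule show

# simulate routing a forwarded packet with the namespace's source address;
# the chosen route should use eth0, not wg0
ip route get 203.0.113.9 from 10.99.99.5 iif to-pvt-net1
```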
Firewall Set-Up
Packets sent from the custom pvt-net1 network namespace will all have a source IP address of 10.99.99.5 when they emerge into the root network namespace from the to-pvt-net1 interface. To the root namespace, these packets will appear to have come from an external host. To prevent the root namespace from rejecting them, we need to do two more things:
- Turn on packet forwarding in the root namespace
- Relax the firewall in the root namespace to accept connections forwarded from the custom namespace
For the first item, Endpoint A may already be set up to allow packet forwarding. Check by running the following command:
$ sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 0
If the result is 0, run the following command to turn on IPv4 packet forwarding in the root namespace:
$ sudo sysctl -w net.ipv4.conf.all.forwarding=1
net.ipv4.conf.all.forwarding = 1
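Note that a setting applied with sysctl -w lasts only until the next reboot. If you want the forwarding setting to persist, you could drop it into a sysctl.d file (a sketch; the file name is an arbitrary choice):

```shell
# persist IPv4 forwarding across reboots (file name is arbitrary)
echo 'net.ipv4.conf.all.forwarding = 1' | sudo tee /etc/sysctl.d/99-pvt-net1.conf
# apply all sysctl.d settings now
sudo sysctl --system
```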
For the second item, if no host-based firewall has been set up on Endpoint A, we don’t have to do anything. However, if a host-based firewall is running on Endpoint A that prohibits it from forwarding packets, we will need to relax it to allow traffic to be forwarded to and from its to-pvt-net1 network interface. If we’re using iptables with the standard simple stateful firewall configuration, we would run the following commands:
$ sudo iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
$ sudo iptables -A FORWARD -s 10.99.99.5 -j ACCEPT
If we’re using the nftables configuration on Endpoint A recommended in the How to Use WireGuard With Nftables guide, we’d add similar rules to the forward chain (after defining a from_pvt_net1_ip variable to hold the IP address of the from-pvt-net1 interface for convenience):
#!/usr/sbin/nft -f
flush ruleset
define pub_iface = "eth0"
define wg_port = 51821
define from_pvt_net1_ip = 10.99.99.5
table inet filter {

    chain input {
        type filter hook input priority 0; policy drop;

        # accept all loopback packets
        iif "lo" accept
        # accept all icmp/icmpv6 packets
        meta l4proto { icmp, ipv6-icmp } accept
        # accept all packets that are part of an already-established connection
        ct state vmap { invalid : drop, established : accept, related : accept }
        # drop new connections over rate limit
        ct state new limit rate over 1/second burst 10 packets drop
        # accept all DHCPv6 packets received at a link-local address
        ip6 daddr fe80::/64 udp dport dhcpv6-client accept
        # accept all SSH packets received on a public interface
        iifname $pub_iface tcp dport ssh accept
        # accept all WireGuard packets received on a public interface
        iifname $pub_iface udp dport $wg_port accept
        # reject with polite "port unreachable" icmp response
        reject
    }

    chain forward {
        type filter hook forward priority 0; policy drop;

        # forward all packets that are part of an already-established connection
        ct state vmap { invalid : drop, established : accept, related : accept }
        # forward all packets originating from network namespace pvt-net1
        ip saddr $from_pvt_net1_ip accept
        # reject with polite "host unreachable" icmp response
        reject with icmpx type host-unreachable
    }
}
Outbound Connections
Now we’re almost ready to use our custom pvt-net1 network namespace. If we want to use it to run a program that initiates outbound connections, like a web browser (or cURL, or an SSH client, etc), we need to do two more things:
- Set up masquerading (aka SNAT) in the root namespace for connections initiated from the custom namespace
- Set up DNS in the custom namespace
To set up a masquerading rule with iptables, run the following command:
$ sudo iptables -t nat -A POSTROUTING -s 10.99.99.5 -j MASQUERADE
This rule rewrites outgoing packets that have a source IP address of 10.99.99.5 (the address of the from-pvt-net1 interface in the pvt-net1 namespace) to instead use the source IP address of whatever interface in the root namespace they’re being forwarded out of. On Endpoint A, that would be the 192.168.1.11 address of its eth0 interface.
To set up the same masquerading rule with nftables, add the following nat table postrouting chain to the bottom of the nftables config file shown above in the Firewall Set-Up section:
table inet nat {
    chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        # masquerade all packets originating from network namespace pvt-net1
        ip saddr $from_pvt_net1_ip masquerade
    }
}
At this point, outbound connections from the custom pvt-net1 namespace that don’t require DNS should now be working. On some systems, DNS should be working as well. However, on hosts that run their own DNS resolver on a loopback interface — like the stub resolver of systemd-resolved — the custom namespace will not be able to resolve DNS names. Instead, we’ll see an error like the following:
$ sudo ip netns exec pvt-net1 curl https://ifconfig.me; echo
curl: (6) Could not resolve host: ifconfig.me
To fix this, we need to set up a separate DNS resolver configuration for the namespace. Run the following commands to configure our custom pvt-net1 namespace to use the 9.9.9.9 (Quad9) resolver:
$ sudo mkdir -p /etc/netns/pvt-net1
$ echo nameserver 9.9.9.9 | sudo tee /etc/netns/pvt-net1/resolv.conf >/dev/null
$ sudo chmod -R o+rX /etc/netns
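The ip netns exec command bind-mounts any files under /etc/netns/pvt-net1 over their /etc counterparts when it enters the namespace, so you can quickly check that programs in the namespace will see the new resolver configuration:

```shell
# processes entered via "ip netns exec" see the namespace-specific
# resolv.conf bind-mounted over /etc/resolv.conf;
# this should print the "nameserver" line we just wrote
sudo ip netns exec pvt-net1 cat /etc/resolv.conf
```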
Tip: If you use systemd, you can copy systemd’s own DNS configuration for use by the custom pvt-net1 namespace, instead of hardcoding a public resolver.
Now if we run a command in the custom pvt-net1 namespace that requires DNS, we should see it succeed:
$ sudo ip netns exec pvt-net1 curl https://ifconfig.me; echo
198.51.100.1
And this particular command shows that connections from our custom namespace are avoiding the WireGuard tunnel — the command’s result of 198.51.100.1 instead of 203.0.113.2 shows that our packets to https://ifconfig.me are emerging on the Internet from the NAT (Network Address Translation) router in front of Endpoint A, rather than from the other end of its WireGuard connection to Host β.
Tip: To run a command in a custom namespace with some user account other than root, nest a second sudo -u invocation inside ip netns exec, as in the webserver example below.
Inbound Connections
If we want to run a program in our custom pvt-net1 network namespace that accepts inbound connections, like a web server (or any other kind of network service), we have to set up port forwarding (aka DNAT) in the root namespace, to forward the appropriate connections to the custom namespace.
For example, if we want to run a simple webserver on port 8080 in the custom pvt-net1 namespace, we could run the following command (which serves the contents of a temporary htdocs folder):
$ mkdir /tmp/htdocs && cd /tmp/htdocs
$ sudo ip netns exec pvt-net1 sudo -u $USER python3 -m http.server 8080
To set up a port-forwarding rule for this webserver with iptables, we’d run the following command:
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 10.99.99.5
This rule rewrites incoming TCP packets that have a destination port of 8080 to use a new destination address of 10.99.99.5 (the address of the from-pvt-net1 interface in the pvt-net1 namespace).
But if the host (Endpoint A in our example) is running a host-based firewall that prohibits it from forwarding packets by default, we’ll also need to add a rule that allows these packets to be forwarded. Run the following command to set up such a rule:
$ sudo iptables -A FORWARD -d 10.99.99.5 -p tcp --dport 8080 -j ACCEPT
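Note that the PREROUTING chain only sees packets arriving from other hosts (locally generated packets on Endpoint A skip it), so the simplest end-to-end check of the DNAT rule is from another machine on Endpoint A’s LAN. A sketch, using the 192.168.1.11 LAN address of Endpoint A from this guide:

```shell
# run from another host on the same LAN as Endpoint A:
# the DNAT rule rewrites this to 10.99.99.5:8080, and the FORWARD rule
# lets it through to the webserver in the pvt-net1 namespace
curl http://192.168.1.11:8080/
```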
To set up similar rules with nftables, add the following nat table prerouting chain to the bottom of the nftables config file shown in the Firewall Set-Up section:
table inet nat {
    chain prerouting {
        type nat hook prerouting priority -100; policy accept;
        # rewrite destination address of all TCP port 8080 packets
        # to send them to network namespace pvt-net1
        tcp dport 8080 dnat ip to $from_pvt_net1_ip
    }
}
And then add the following rule to the existing filter table forward chain, right after the rule that allows forwarding of all outbound packets from the custom namespace:
# forward all packets originating from network namespace pvt-net1
ip saddr $from_pvt_net1_ip accept
# forward all packets sent to TCP port 8080 of network namespace pvt-net1
ip daddr $from_pvt_net1_ip tcp dport 8080 accept
Tip: For a service that both accepts inbound connections and initiates outbound connections (like a webapp that connects to a back-end database via the network), you’ll need to make the firewall and DNS adjustments described in both the Outbound Connections and Inbound Connections sections.
As a Systemd Service
To run the example webserver from the Inbound Connections section above as a systemd service, first make sure you’ve made the firewall adjustments described in that section, so that connections to the appropriate port are forwarded to our custom pvt-net1 network namespace. Then create the following systemd unit file at /etc/systemd/system/example-pvt-net1.service:
# /etc/systemd/system/example-pvt-net1.service
[Unit]
Description=example service that uses a custom network namespace
[Service]
Type=simple
ExecStart=python3 -m http.server --directory /var/www/htdocs 8080
BindReadOnlyPaths=/tmp/htdocs:/var/www/htdocs
NetworkNamespacePath=/run/netns/pvt-net1
This simple webserver serves the contents of the /var/www/htdocs directory on port 8080. The BindReadOnlyPaths setting directs systemd to map the service’s /var/www/htdocs directory to the /tmp/htdocs directory we created in the Inbound Connections section. The NetworkNamespacePath setting directs systemd to run the service in the pvt-net1 namespace we set up in the Namespace Set-Up section.
Load this configuration file and start the service by running the following commands:
$ sudo systemctl daemon-reload
$ sudo systemctl restart example-pvt-net1.service
Tail the service’s logs by running the following command:
$ journalctl -u example-pvt-net1.service -f
Apr 19 02:45:12 colossus systemd[1]: Started example service that uses a custom network namespace.
Apr 19 02:45:42 colossus python3[215112]: 192.0.2.123 - - [19/Apr/2023 02:45:42] "GET / HTTP/1.1" 200 -
Note: The systemd NetworkNamespacePath setting requires the namespace to already exist when the service starts. For example, you might have one service, run as root, that executes a set-up script for the custom namespace; and then a second service, run as an unprivileged user, that actually uses the custom namespace.
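That two-service pattern could be sketched as a pair of unit files like the following (an illustration only; the unit names, set-up script paths, and user name are hypothetical, and the set-up script would contain the namespace commands from this guide):

```ini
# /etc/systemd/system/pvt-net1-netns.service (hypothetical name)
[Unit]
Description=set up the pvt-net1 network namespace

[Service]
Type=oneshot
RemainAfterExit=yes
# hypothetical scripts wrapping this guide's set-up and tear-down commands
ExecStart=/usr/local/sbin/setup-pvt-net1.sh
ExecStop=/usr/local/sbin/teardown-pvt-net1.sh
```

```ini
# /etc/systemd/system/example-pvt-net1.service
[Unit]
Description=example service that uses a custom network namespace
# start only after the namespace has been created
Requires=pvt-net1-netns.service
After=pvt-net1-netns.service

[Service]
Type=simple
# hypothetical unprivileged account
User=www-user
ExecStart=python3 -m http.server --directory /var/www/htdocs 8080
BindReadOnlyPaths=/tmp/htdocs:/var/www/htdocs
NetworkNamespacePath=/run/netns/pvt-net1
```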
Tip: If the service needs to resolve DNS names, make sure it also includes a BindReadOnlyPaths setting that maps the namespace’s /etc/netns/pvt-net1/resolv.conf file over the service’s /etc/resolv.conf — the /etc/netns bind-mounts performed by ip netns exec don’t apply to services entered via NetworkNamespacePath.
Namespace Tear-Down
We can tear down our custom pvt-net1 namespace by running the following commands:
$ ip netns pids pvt-net1 | sudo xargs -r kill
$ sudo ip netns del pvt-net1
This terminates all the processes currently using the namespace, and then deletes the namespace. Deleting a network namespace shuts down all virtual interfaces (including both veth and WireGuard interfaces) that have been added to the namespace.
Note: If you don’t shut down all the processes using a namespace before running the ip netns del command, the namespace (and its interfaces) will continue to exist, although unnamed, until those processes exit.
Deleting the custom pvt-net1 namespace won’t delete any of the changes we made in the root namespace, however, including:
- The from 10.99.99.5 table main policy routing rule
- The net.ipv4.conf.all.forwarding=1 kernel parameter setting
- All the iptables or nftables firewall rules
- The DNS and other namespace config files in /etc/netns/pvt-net1
All but the last item will be undone by the next reboot of Endpoint A. We can either leave them as is, or delete them all manually now:
$ sudo ip rule del from 10.99.99.5 table main priority 99
$ sudo sysctl -w net.ipv4.conf.all.forwarding=0
$ sudo iptables -D FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
$ sudo iptables -D FORWARD -s 10.99.99.5 -j ACCEPT
$ sudo iptables -t nat -D POSTROUTING -s 10.99.99.5 -j MASQUERADE
$ sudo iptables -t nat -D PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 10.99.99.5
$ sudo iptables -D FORWARD -d 10.99.99.5 -p tcp --dport 8080 -j ACCEPT
$ sudo rm -r /etc/netns/pvt-net1
Enable Selectively
In the second main scenario, we want to avoid sending most of Endpoint A’s traffic through a WireGuard tunnel to Host β, but selectively enable the WireGuard tunnel for one or two specific applications run on Endpoint A.
To accomplish this, we’ll follow basic steps from the “Ordinary Containerization” section of the WireGuard Routing & Network Namespace Integration guide. On Endpoint A we’ll set up a custom network namespace called pvt-net1, in which we’ll run all the applications that we want to use the WireGuard tunnel. We’ll set up the WireGuard interface (wg0) on Endpoint A directly in this custom namespace, and configure the namespace to route all IPv4 traffic (ie 0.0.0.0/0) through it to Host β.
In this example, we’ll have Host β masquerade all this traffic out to the public Internet; but you might instead want to use Host β as a gateway to some private internal network (like Site B in the diagram below).
Unlike the Disable Selectively scenario, we won’t use a special veth network interface to connect our custom namespace to the root namespace. Instead, we’ll first create the WireGuard interface in the root namespace, and then move it into the custom namespace — this will allow the WireGuard interface to send and receive packets via the root namespace’s network interfaces, but prohibit direct access to the WireGuard interface except from within the custom namespace.
All applications running in the root namespace on Endpoint A will use Endpoint A’s main routing table for network access as usual; whereas applications running in the custom pvt-net1 namespace will use the custom namespace’s own default route to direct all traffic through the WireGuard interface.
WireGuard Set-Up
We’ll use almost the exact same WireGuard configuration with this scenario as with the Disable Selectively scenario; just like that scenario, we’ll set up WireGuard on Endpoint A and Host β with a basic point-to-site configuration (see the WireGuard Point-to-Site Configuration guide for details). Like the Disable Selectively scenario, we’ll set the AllowedIPs setting on Endpoint A to 0.0.0.0/0 — but unlike the Disable Selectively scenario, we’ll remove the Address setting from our WireGuard configuration on Endpoint A.
This is how we’ll configure WireGuard on Endpoint A:
# /etc/wireguard/wg0.conf
# local settings for Endpoint A
[Interface]
PrivateKey = AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEE=
# omit the Address, DNS, MTU, Table, and Pre/PostUp/Down settings
# Address = 10.0.0.1/32
ListenPort = 51821
# remote settings for Host β
[Peer]
PublicKey = fE/wdxzl0klVp/IR8UcaoGUMjqaWi3jAd7KzHKFS6Ds=
Endpoint = 203.0.113.2:51822
AllowedIPs = 0.0.0.0/0
Because we’re going to use the wg command instead of the wg-quick command to configure our WireGuard interface inside the custom namespace, we can’t include the Address setting (or the DNS, MTU, Table, PreUp, PreDown, PostUp, PostDown, or SaveConfig settings) that the wg-quick command allows.
For Host β, however, we’ll use the exact same configuration as used by the WireGuard Point-to-Site Configuration guide:
# /etc/wireguard/wg0.conf
# local settings for Host β
[Interface]
PrivateKey = ABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBFA=
Address = 10.0.0.2/32
ListenPort = 51822
# IP forwarding
PreUp = sysctl -w net.ipv4.ip_forward=1
# IP masquerading
PreUp = iptables -t mangle -A PREROUTING -i wg0 -j MARK --set-mark 0x30
PreUp = iptables -t nat -A POSTROUTING ! -o wg0 -m mark --mark 0x30 -j MASQUERADE
PostDown = iptables -t mangle -D PREROUTING -i wg0 -j MARK --set-mark 0x30
PostDown = iptables -t nat -D POSTROUTING ! -o wg0 -m mark --mark 0x30 -j MASQUERADE
# remote settings for Endpoint A
[Peer]
PublicKey = /TOE4TKtAqVsePRVR+5AA43HkAK5DSntkOCO7nYq5xU=
AllowedIPs = 10.0.0.1/32
Start up WireGuard on Host β by running the sudo wg-quick up wg0 command; but don’t try to start up WireGuard on Endpoint A yet.
Namespace Set-Up
On Endpoint A, first create a new custom network namespace named pvt-net1, and start up its loopback interface:
$ sudo ip netns add pvt-net1
$ sudo ip -n pvt-net1 link set lo up
Next, create an empty wg0 WireGuard interface in the root namespace — without configuring it yet:
$ sudo ip link add wg0 type wireguard
Then move the wg0 WireGuard interface into the new custom namespace pvt-net1:
$ sudo ip link set wg0 netns pvt-net1
Now we can configure the wg0 interface by running the wg command within the pvt-net1 namespace, using its configuration file at /etc/wireguard/wg0.conf:
$ sudo ip netns exec pvt-net1 wg setconf wg0 /etc/wireguard/wg0.conf
Since we aren’t able to specify the WireGuard interface’s address with the wg command, we’ll set it instead via the ip command:
$ sudo ip -n pvt-net1 address add 10.0.0.1/32 dev wg0
And then start up the interface:
$ sudo ip -n pvt-net1 link set wg0 up
Finally, we’ll set the WireGuard interface as the default route for the custom namespace:
$ sudo ip -n pvt-net1 route add default dev wg0
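The namespace and WireGuard steps above can be collected into a single set-up script, which is also handy if you later want a systemd service to prepare the namespace (a sketch, assuming the interface names, addresses, and config path used in this guide):

```shell
#!/bin/sh -e
# create the namespace and bring up its loopback interface
ip netns add pvt-net1
ip -n pvt-net1 link set lo up
# create the WireGuard interface in the root namespace,
# then move it into the custom namespace
ip link add wg0 type wireguard
ip link set wg0 netns pvt-net1
# configure, address, and start the interface inside the namespace
ip netns exec pvt-net1 wg setconf wg0 /etc/wireguard/wg0.conf
ip -n pvt-net1 address add 10.0.0.1/32 dev wg0
ip -n pvt-net1 link set wg0 up
# route all the namespace's traffic through the tunnel
ip -n pvt-net1 route add default dev wg0
```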
At this point, we should be able to ping Host β from the custom pvt-net1 namespace in Endpoint A, using the WireGuard address for Host β (10.0.0.2):
$ sudo ip netns exec pvt-net1 ping -nc1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=2.86 ms
--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 2.863/2.863/2.863/0.000 ms
And once we’ve initiated the WireGuard connection from Endpoint A to Host β, we should be able to ping back the other way from Host β to Endpoint A, using the WireGuard address for Endpoint A (10.0.0.1):
$ ping -nc1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.836 ms
--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.836/0.836/0.836/0.000 ms
Outbound Connections
At this point, outbound connections from the custom pvt-net1 network namespace that don’t require DNS should be working. On some systems, DNS should be working as well. However, on hosts that run their own DNS resolver on a loopback interface — like the stub resolver of systemd-resolved — the custom namespace will not be able to resolve DNS names. Instead, we’ll see an error like the following:
$ sudo ip netns exec pvt-net1 curl https://ifconfig.me; echo
curl: (6) Could not resolve host: ifconfig.me
To fix this, we need to set up a separate DNS resolver configuration for the namespace. Run the following commands to configure our custom pvt-net1 namespace to use the 9.9.9.9 (Quad9) resolver:
$ sudo mkdir -p /etc/netns/pvt-net1
$ echo nameserver 9.9.9.9 | sudo tee /etc/netns/pvt-net1/resolv.conf >/dev/null
$ sudo chmod -R o+rX /etc/netns
Tip: If you’re using WireGuard to connect to a private internal network (instead of the public Internet like this example), you should use a nameserver available on that private network (instead of a public one like 9.9.9.9).
Now DNS resolution should succeed in the custom namespace:
$ sudo ip netns exec pvt-net1 curl https://ifconfig.me; echo
203.0.113.2
Inbound Connections
We don’t necessarily have to do anything extra on Endpoint A to allow inbound connections through its WireGuard interface to the pvt-net1 network namespace.
For example, if we want to run a simple webserver on port 8080 in the custom pvt-net1 namespace, we could run the following command (which serves the contents of a temporary htdocs folder):
$ mkdir /tmp/htdocs && cd /tmp/htdocs
$ sudo ip netns exec pvt-net1 sudo -u $USER python3 -m http.server 8080
From Host β, we can access this webserver simply by running the following command:
$ curl 10.0.0.1:8080
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
...
However, if Endpoint A is behind a NAT router, or otherwise doesn’t have a static, publicly-accessible IP address, Host β won’t be able to initiate connections to it. In that case, add a PersistentKeepalive setting to Endpoint A’s WireGuard configuration:
# /etc/wireguard/wg0.conf
# local settings for Endpoint A
[Interface]
PrivateKey = AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEE=
ListenPort = 51821
# remote settings for Host β
[Peer]
PublicKey = fE/wdxzl0klVp/IR8UcaoGUMjqaWi3jAd7KzHKFS6Ds=
Endpoint = 203.0.113.2:51822
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
This will send a keepalive packet from Endpoint A to Host β every 25 seconds. Then run the following command to reload Endpoint A’s WireGuard configuration:
$ sudo ip netns exec pvt-net1 wg syncconf wg0 /etc/wireguard/wg0.conf
Alternatively, if Endpoint A does have a static, publicly-accessible IP address, add it to Host β’s WireGuard configuration:
# /etc/wireguard/wg0.conf
# local settings for Host β
[Interface]
PrivateKey = ABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBFA=
Address = 10.0.0.2/32
ListenPort = 51822
# IP forwarding
PreUp = sysctl -w net.ipv4.ip_forward=1
# IP masquerading
PreUp = iptables -t mangle -A PREROUTING -i wg0 -j MARK --set-mark 0x30
PreUp = iptables -t nat -A POSTROUTING ! -o wg0 -m mark --mark 0x30 -j MASQUERADE
PostDown = iptables -t mangle -D PREROUTING -i wg0 -j MARK --set-mark 0x30
PostDown = iptables -t nat -D POSTROUTING ! -o wg0 -m mark --mark 0x30 -j MASQUERADE
# remote settings for Endpoint A
[Peer]
PublicKey = /TOE4TKtAqVsePRVR+5AA43HkAK5DSntkOCO7nYq5xU=
AllowedIPs = 10.0.0.1/32
Endpoint = 198.51.100.1:51821
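As on Endpoint A, the running wg0 interface on Host β won’t pick up this edit by itself. Since Host β’s config file contains wg-quick-only settings (Address, PreUp, PostDown) that wg can’t parse, one way to apply the change without restarting the tunnel is to sync the stripped config (a sketch; wg-quick strip prints the config with those settings removed):

```shell
# on Host β: apply the edited config to the running wg0 interface,
# stripping the wg-quick-only settings first (process substitution needs bash)
sudo bash -c 'wg syncconf wg0 <(wg-quick strip wg0)'
```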
Tip: To enable Host β to forward its own port 8080 to Endpoint A (allowing any other host that can access Host β’s port 8080 to connect through it to the webserver running in the pvt-net1 namespace), see the WireGuard Point-to-Site With Port Forwarding guide for more details (or see the “Point to Site With Port Forwarding” section of the How to Use WireGuard With Nftables guide for the equivalent configuration with nftables).
As a Systemd Service
We can run a systemd service in our custom pvt-net1 network namespace under this scenario exactly the same as with the Disable Selectively scenario. To run the example webserver from the Inbound Connections section above, create the following systemd unit file at /etc/systemd/system/example-pvt-net1.service:
# /etc/systemd/system/example-pvt-net1.service
[Unit]
Description=example service that uses a custom network namespace
[Service]
Type=simple
ExecStart=python3 -m http.server --directory /var/www/htdocs 8080
BindReadOnlyPaths=/tmp/htdocs:/var/www/htdocs
NetworkNamespacePath=/run/netns/pvt-net1
This simple webserver serves the contents of the /var/www/htdocs directory on port 8080. The BindReadOnlyPaths setting directs systemd to map the service’s /var/www/htdocs directory to the /tmp/htdocs directory we created in the Inbound Connections section. The NetworkNamespacePath setting directs systemd to run the service in the pvt-net1 namespace we set up in the Namespace Set-Up section.
Load this configuration file and start the service by running the following commands:
$ sudo systemctl daemon-reload
$ sudo systemctl restart example-pvt-net1.service
Tail the service’s logs by running the following command:
$ journalctl -u example-pvt-net1.service -f
Apr 19 03:00:11 colossus systemd[1]: Started example service that uses a custom network namespace.
Apr 19 03:00:23 colossus python3[216530]: 10.0.0.2 - - [19/Apr/2023 03:00:23] "GET / HTTP/1.1" 200 -
From Host β, we can access this webserver the same as with the previous section:
$ curl 10.0.0.1:8080
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
...
Namespace Tear-Down
We can tear down our custom pvt-net1 network namespace in this scenario with the same commands as the Disable Selectively scenario:
$ ip netns pids pvt-net1 | sudo xargs -r kill
$ sudo ip netns del pvt-net1
This terminates all the processes currently using the namespace, and then deletes the namespace (which shuts down the WireGuard interface we moved into the namespace).
The one thing this doesn’t delete is the namespace’s DNS config files we set up at /etc/netns/pvt-net1. We can leave them for the next time we want to use this custom namespace, or delete them now:
$ sudo rm -r /etc/netns/pvt-net1