Building, Using, and Monitoring WireGuard Containers
Docker and other OCI (Open Container Initiative) container platforms like Podman or Kubernetes can provide a convenient way to run WireGuard. Pro Custodibus maintains a standalone Docker image for WireGuard, based on Alpine Linux. We also provide a Docker image that combines WireGuard with the Pro Custodibus Agent. We update these images weekly, to make sure they include the latest Alpine, WireGuard, and Pro Custodibus security fixes.
If you want to run just WireGuard by itself, without Pro Custodibus involved, run the base WireGuard image. If, instead, you want to monitor and manage WireGuard with Pro Custodibus, run the Pro Custodibus Agent image. Note that to use either of these images, the container’s host must be running Linux kernel version 5.6 or newer, or must have the WireGuard kernel module installed (see the Not Supported Error section).
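If you’re not sure whether a particular host meets this requirement, a quick check (before pulling either image) is to look at the host’s kernel version; for example:
# a kernel of version 5.6 or newer includes WireGuard in-tree
uname -r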
This article describes the content of these images, as well as how to use them for a variety of scenarios:
- Use for Hub
- Use for Site Masquerading
- Use for Site Port Forwarding
- Use for Host Network
- Use for Container Network
- Use for Inbound Port Forwarding
- Use for Outbound Port Forwarding
Also see further examples of using these images in the WireGuard Remote Access to Docker Containers and WireGuard in Podman Rootless Containers articles.
Image Details
Base WireGuard Image
The base WireGuard image is lightweight, at only 15MB in size. It consists simply of the base Alpine Linux image, with the Alpine wireguard-tools and openrc packages added. The wireguard-tools package includes the core WireGuard functionality and wg-quick program; plus it pulls in several core Linux networking tools: iptables, iproute2, and resolvconf. The openrc package contains the minimal init system used by Alpine, OpenRC.
Our base WireGuard image runs wg-quick as an OpenRC service. Using OpenRC allows other images to be built from this image to incorporate complementary services, such as a routing daemon or a DNS server (or the Pro Custodibus agent).
Here is the Dockerfile for the image:
FROM alpine:latest
RUN apk add --no-cache \
    openrc \
    wireguard-tools
COPY docker/wireguard-fs /
RUN \
    sed -i 's/^\(tty\d\:\:\)/#\1/' /etc/inittab && \
    sed -i \
        -e 's/^#\?rc_env_allow=.*/rc_env_allow="\*"/' \
        -e 's/^#\?rc_sys=.*/rc_sys="docker"/' \
        /etc/rc.conf && \
    sed -i \
        -e 's/VSERVER/DOCKER/' \
        -e 's/checkpath -d "$RC_SVCDIR"/mkdir "$RC_SVCDIR"/' \
        /lib/rc/sh/init.sh && \
    rm \
        /etc/init.d/hwdrivers \
        /etc/init.d/machine-id && \
    sed -i 's/cmd sysctl -q \(.*\?\)=\(.*\)/[[ "$(sysctl -n \1)" != "\2" ]] \&\& \0/' /usr/bin/wg-quick && \
    rc-update add wg-quick default
VOLUME ["/sys/fs/cgroup"]
CMD ["/sbin/init"]
The first few lines are self-explanatory. Here’s a blow-by-blow explanation of the more cryptic lines:
- sed -i 's/^\(tty\d\:\:\)/#\1/' /etc/inittab: Prevents unnecessary tty instances from starting.
- sed -i 's/^#\?rc_env_allow=.*/rc_env_allow="\*"/' /etc/rc.conf: Propagates Docker environment variables to each OpenRC service.
- sed -i 's/^#\?rc_sys=.*/rc_sys="docker"/' /etc/rc.conf: Lets OpenRC know it’s running in a Docker container.
- sed -i 's/VSERVER/DOCKER/' /lib/rc/sh/init.sh: Makes sure the /run directory is set up appropriately for a Docker container.
- sed -i 's/checkpath -d "$RC_SVCDIR"/mkdir "$RC_SVCDIR"/' /lib/rc/sh/init.sh: Ensures the needed /run/openrc directory exists.
- rm /etc/init.d/hwdrivers: Prevents an ignorable error message from the unneeded hwdrivers service.
- rm /etc/init.d/machine-id: Prevents an ignorable error message from the unneeded machine-id service.
- sed -i 's/cmd sysctl -q \(.*\?\)=\(.*\)/[[ "$(sysctl -n \1)" != "\2" ]] \&\& \0/' /usr/bin/wg-quick: Prevents wg-quick from attempting to set sysctl parameters that have already been set (in a container, attempting to set a read-only sysctl parameter would otherwise prevent wg-quick from starting up).
- rc-update add wg-quick default: Sets up wg-quick to be run as an OpenRC service (via the /etc/init.d/wg-quick service file copied into the image as part of the earlier COPY command).
- VOLUME ["/sys/fs/cgroup"]: Prevents a bunch of ignorable cgroup error messages.
- CMD ["/sbin/init"]: Boots OpenRC on container start.
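If you’d rather build this image yourself than pull the published procustodibus/wireguard image, a standard docker build works; run it from a checkout that contains the docker/wireguard-fs tree referenced by the COPY step (the Dockerfile path and tag below are illustrative):
# build and tag the base WireGuard image locally (path and tag are illustrative)
sudo docker build -t wireguard-local -f Dockerfile.wireguard .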
The WireGuard OpenRC service in the image will start up a WireGuard interface for each WireGuard configuration file it finds in its /etc/wireguard
directory, using the wg-quick
program.
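Because the image boots OpenRC, layering another service on top of it is mostly a matter of installing the service’s Alpine package and adding it to the default runlevel, just as the Dockerfile above does for wg-quick. A minimal (hypothetical) sketch that adds a dnsmasq DNS forwarder alongside WireGuard:
FROM procustodibus/wireguard:latest
# install dnsmasq and register its OpenRC service so it starts with the container
RUN apk add --no-cache dnsmasq && \
    rc-update add dnsmasq default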
Pro Custodibus Agent Image
The agent image is built on top of the base WireGuard image. It’s a bit fatter, at around 285MB in size, largely due to the Python runtime and libraries used by the agent. Here is the Dockerfile for the image:
FROM procustodibus/wireguard:latest
ARG AGENT_VERSION=latest
RUN apk add --no-cache \
    curl \
    gcc \
    gnupg \
    libffi-dev \
    libsodium \
    make \
    musl-dev \
    py3-pip \
    python3-dev && \
    gpg --keyserver keys.openpgp.org --recv-keys EFC1AE969DD8159F && \
    pip3 install pynacl
RUN \
    cd /tmp && \
    curl -O https://ad.custodib.us/agents/procustodibus-agent-$AGENT_VERSION.tar.gz && \
    curl https://ad.custodib.us/agents/procustodibus-agent-$AGENT_VERSION.tar.gz.sig | \
        gpg --verify - procustodibus-agent-$AGENT_VERSION.tar.gz && \
    tar xf procustodibus-agent-$AGENT_VERSION.tar.gz && \
    pip3 install procustodibus-agent-*/ && \
    rm -rf /tmp/*
COPY docker/agent-fs /
RUN rc-update add procustodibus-agent default
The first RUN
step installs the Python runtime and other binaries needed by the agent (as well as downloading the PGP key used to sign the agent source code, and pre-installing the pynacl
library used by the agent, which takes several minutes to build). The second RUN
step downloads the Pro Custodibus Agent source code, verifies it, and installs it. The COPY
step copies the /etc/init.d/procustodibus-agent
OpenRC service file into the container; and the third RUN
step sets up that service to be run by OpenRC.
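The AGENT_VERSION build argument defaults to latest; if you build the agent image yourself, you can pin it to a specific agent release instead (the version number, Dockerfile path, and tag below are illustrative):
# build the agent image against a pinned agent release (version and paths illustrative)
sudo docker build --build-arg AGENT_VERSION=1.2.3 -t agent-local -f Dockerfile.agent .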
In addition to WireGuard configuration files in the /etc/wireguard
directory, the Pro Custodibus Agent service in the image will also expect to find a procustodibus.conf
and procustodibus-credentials.conf
(or procustodibus-setup.conf
) file in its /etc/wireguard
directory.
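The private and public keys shown in the configuration examples below are placeholders; if you’re creating your own configuration files to mount into a container, you can generate a key pair with the standard wg tool:
# generate a new private key and derive its public key
wg genkey | tee privatekey | wg pubkey > publickey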
Use for Hub
To use the base WireGuard image for a WireGuard hub in a hub-and-spoke topology, like the “Host C” described in the WireGuard Hub and Spoke Configuration guide, save the WireGuard configuration for the hub in its own directory somewhere convenient on the host, like in the /srv/wg-hub/conf
directory:
# /srv/wg-hub/conf/wg0.conf
# local settings for Host C
[Interface]
PrivateKey = CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCGA=
Address = 10.0.0.3/32
ListenPort = 51823
# remote settings for Endpoint A
[Peer]
PublicKey = /TOE4TKtAqVsePRVR+5AA43HkAK5DSntkOCO7nYq5xU=
AllowedIPs = 10.0.0.1/32
# remote settings for Endpoint B
[Peer]
PublicKey = fE/wdxzl0klVp/IR8UcaoGUMjqaWi3jAd7KzHKFS6Ds=
AllowedIPs = 10.0.0.2/32
Note: Unlike the directions from the WireGuard Hub and Spoke Configuration guide, do not attempt to set any sysctl settings (such as net.ipv4.ip_forward) from within the container’s WireGuard config: IP forwarding is already enabled inside Docker containers, and attempting to change sysctl settings from inside a non-privileged container will typically fail (see the Sysctl Errors section below).
You can then run a container for the hub with the following docker run
command:
sudo docker run \
--cap-add NET_ADMIN \
--name wg-hub \
--publish 51823:51823/udp \
--rm \
--volume /srv/wg-hub/conf:/etc/wireguard \
procustodibus/wireguard
These are what the command arguments do:
- --cap-add NET_ADMIN: Grants the container the NET_ADMIN capability; this is required to start up a WireGuard interface inside the container.
- --name wg-hub: Sets the container’s name to wg-hub (you can set this to whatever name you want, or omit it entirely if you don’t care how it’s named).
- --publish 51823:51823/udp: Forwards the host’s public 51823 UDP port to the container’s 51823 UDP port; make sure the latter matches the ListenPort setting in the WireGuard config file (the former can be whatever port you want to expose publicly).
- --rm: Deletes the container when it’s shut down (you can omit this if you don’t want to delete the container).
- --volume /srv/wg-hub/conf:/etc/wireguard: Maps the /srv/wg-hub/conf directory on the host to the /etc/wireguard directory in the container (you can change the host directory to whatever you want).
- procustodibus/wireguard: Runs the latest version of the WireGuard image.
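Once the container is running, you can check that the interface came up by looking at the container’s logs, or by running the wg tool inside it (using the wg-hub name from the command above):
# show the wg-quick startup messages from the container's boot
sudo docker logs wg-hub
# show the live status of the WireGuard interface inside the container
sudo docker exec wg-hub wg show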
Alternately, you can place the following docker-compose.yml
file in the directory above the WireGuard configuration file:
# /srv/wg-hub/docker-compose.yml
version: '3'
services:
  wireguard:
    image: procustodibus/wireguard
    cap_add:
      - NET_ADMIN
    ports:
      - 51823:51823/udp
    volumes:
      - ./conf:/etc/wireguard
And then start up a container for the hub by running sudo docker-compose up
from the same directory as the docker-compose.yml
file.
To manage this WireGuard interface with Pro Custodibus, simply replace the procustodibus/wireguard
image with the procustodibus/agent
image; and after adding a host in the Pro Custodibus UI for the container, download the procustodibus.conf
and procustodibus-setup.conf
files for the host and place them in the /srv/wg-hub/conf
directory (alongside, or instead of, the wg0.conf
file).
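For example, with Docker Compose the only change to the file above is the image name (the ./conf directory then holding the Pro Custodibus files alongside wg0.conf):
# /srv/wg-hub/docker-compose.yml (Pro Custodibus Agent variant)
version: '3'
services:
  wireguard:
    image: procustodibus/agent
    cap_add:
      - NET_ADMIN
    ports:
      - 51823:51823/udp
    volumes:
      - ./conf:/etc/wireguard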
Use for Site Masquerading
To use the base WireGuard image on a host that provides connectivity to its local site from other remote WireGuard endpoints in a point-to-site topology (with masquerading), like the “Host β” described in the WireGuard Point to Site Configuration guide, save the WireGuard configuration for the site in its own directory somewhere convenient on the host, like in the /srv/wg-p2s/conf
directory:
# /srv/wg-p2s/conf/wg0.conf
# local settings for Host β
[Interface]
PrivateKey = ABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBFA=
Address = 10.0.0.2/32
ListenPort = 51822
# IP masquerading
PreUp = iptables -t nat -A POSTROUTING ! -o %i -j MASQUERADE
# remote settings for Endpoint A
[Peer]
PublicKey = /TOE4TKtAqVsePRVR+5AA43HkAK5DSntkOCO7nYq5xU=
AllowedIPs = 10.0.0.1/32
Note: Unlike the directions from the WireGuard Point to Site Configuration guide, do not attempt to set any sysctl settings (such as net.ipv4.ip_forward) from within the container’s WireGuard config: IP forwarding is already enabled inside Docker containers, and attempting to change sysctl settings from inside a non-privileged container will typically fail (see the Sysctl Errors section below).
Also, since the only thing this container does is forward packets between its WireGuard network and the site, we can simplify its iptables rules to a single line (to masquerade all forwarded packets except those sent out its WireGuard interface).
You can run a container for this WireGuard interface with the following docker run
command:
sudo docker run \
--cap-add NET_ADMIN \
--name wg-p2s \
--publish 51822:51822/udp \
--rm \
--volume /srv/wg-p2s/conf:/etc/wireguard \
procustodibus/wireguard
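Once the container is up, you can confirm that the PreUp rule was installed by listing the nat table inside the container; among the output you should see the masquerade rule, with %i expanded to the interface name (wg0 here):
$ sudo docker exec wg-p2s iptables -t nat -S POSTROUTING
...
-A POSTROUTING ! -o wg0 -j MASQUERADE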
Alternately, you can place the following docker-compose.yml
file in the directory above the WireGuard configuration file:
# /srv/wg-p2s/docker-compose.yml
version: '3'
services:
  wireguard:
    image: procustodibus/wireguard
    cap_add:
      - NET_ADMIN
    ports:
      - 51822:51822/udp
    volumes:
      - ./conf:/etc/wireguard
And then start up the container by running sudo docker-compose up
from the same directory as the docker-compose.yml
file.
To manage this WireGuard interface with Pro Custodibus, simply replace the procustodibus/wireguard
image with the procustodibus/agent
image; and after adding a host in the Pro Custodibus UI for the container, download the procustodibus.conf
and procustodibus-setup.conf
files for the host and place them in the /srv/wg-p2s/conf
directory.
Use for Site Port Forwarding
To use the base WireGuard image on a host that provides connectivity from its local site to remote services on a WireGuard network with port forwarding, like the “Host β” described in the WireGuard Point to Site With Port Forwarding guide, save the WireGuard configuration for the site in its own directory somewhere convenient on the host, like in the /srv/wg-fwd/conf
directory:
# /srv/wg-fwd/conf/wg0.conf
# local settings for Host β
[Interface]
PrivateKey = ABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBFA=
Address = 10.0.0.2/32
ListenPort = 51822
# port forwarding
PreUp = iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.1
# remote settings for Endpoint A
[Peer]
PublicKey = /TOE4TKtAqVsePRVR+5AA43HkAK5DSntkOCO7nYq5xU=
AllowedIPs = 10.0.0.1/32
Note: Unlike the directions from the WireGuard Point to Site With Port Forwarding guide, do not attempt to set any sysctl settings (such as net.ipv4.ip_forward) from within the container’s WireGuard config: IP forwarding is already enabled inside Docker containers, and attempting to change sysctl settings from inside a non-privileged container will typically fail (see the Sysctl Errors section below).
You can run a container for this WireGuard interface with the following docker run
command:
sudo docker run \
--cap-add NET_ADMIN \
--name wg-fwd \
--publish 80:80 \
--publish 51822:51822/udp \
--rm \
--volume /srv/wg-fwd/conf:/etc/wireguard \
procustodibus/wireguard
In this case, the container will forward TCP connections sent to it on port 80 (ie HTTP) to Endpoint A in its WireGuard network.
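To verify the forward, you can make an HTTP request to the Docker host from any machine on the site; 192.168.200.10 below is a hypothetical LAN address for the Docker host:
# request port 80 on the Docker host; the container DNATs it to Endpoint A (10.0.0.1)
curl -I http://192.168.200.10/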
Alternately, you can place the following docker-compose.yml
file in the directory above the WireGuard configuration file:
# /srv/wg-fwd/docker-compose.yml
version: '3'
services:
  wireguard:
    image: procustodibus/wireguard
    cap_add:
      - NET_ADMIN
    ports:
      - 80:80
      - 51822:51822/udp
    volumes:
      - ./conf:/etc/wireguard
And then start up the container by running sudo docker-compose up
from the same directory as the docker-compose.yml
file.
To manage this WireGuard interface with Pro Custodibus, simply replace the procustodibus/wireguard
image with the procustodibus/agent
image; and after adding a host in the Pro Custodibus UI for the container, download the procustodibus.conf
and procustodibus-setup.conf
files for the host and place them in the /srv/wg-fwd/conf
directory.
Use for Host Network
To allow the computer hosting a container with the base WireGuard image to have full access to the container’s WireGuard VPN (Virtual Private Network), run it like this. You may want to do this if you use the image for a point in a point-to-point topology or point-to-site topology, or a spoke in a hub-and-spoke topology (alternatively, for those cases where you want to limit access to the WireGuard VPN to just a few specific containers, see the Use for Container Network section below).
First, save the WireGuard configuration for the container in its own directory somewhere convenient on the host, like in the /srv/wg-host/conf
directory:
# /srv/wg-host/conf/wg0.conf
# local settings for Endpoint A
[Interface]
PrivateKey = AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEE=
Address = 10.0.0.1/32
ListenPort = 51821
# remote settings for Endpoint B
[Peer]
PublicKey = fE/wdxzl0klVp/IR8UcaoGUMjqaWi3jAd7KzHKFS6Ds=
Endpoint = 203.0.113.2:51822
AllowedIPs = 10.0.0.2/32
Note: In this example I’m using the WireGuard configuration for Endpoint A from the WireGuard Point to Point Configuration guide, but the exact same Docker configuration applies to Endpoint B from the same guide, as well as Endpoint A and B in the WireGuard Hub and Spoke Configuration guide, and Endpoint A in the WireGuard Point to Site Configuration guide.
Then you can run a container for this WireGuard interface with the following docker run
command:
sudo docker run \
--cap-add NET_ADMIN \
--name wg-host \
--network host \
--rm \
--volume /srv/wg-host/conf:/etc/wireguard \
procustodibus/wireguard
When you run the container this way, with the --network host
flag, it will expose the WireGuard VPN to all the rest of the processes on the host. So if, for example, you have an HTTP server running on Endpoint B (10.0.0.2) in the WireGuard VPN (like we do in the scenario for the WireGuard Point to Point Configuration guide), you’ll be able to access that webserver from Endpoint A (the host running the WireGuard container) using cURL (or any web browser) like the following:
$ curl 10.0.0.2
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
...
Alternately, you can run the WireGuard container with the docker-compose
command if you place the following docker-compose.yml
file in the directory above the WireGuard configuration file:
# /srv/wg-host/docker-compose.yml
version: '3'
services:
  wireguard:
    image: procustodibus/wireguard
    cap_add:
      - NET_ADMIN
    network_mode: host
    volumes:
      - ./conf:/etc/wireguard
And then start up the container by running sudo docker-compose up
from the same directory as the docker-compose.yml
file.
To manage this WireGuard interface with Pro Custodibus, simply replace the procustodibus/wireguard
image with the procustodibus/agent
image; and after adding a host in the Pro Custodibus UI for the container, download the procustodibus.conf
and procustodibus-setup.conf
files for the host and place them in the /srv/wg-host/conf
directory.
Use for Container Network
To allow other containers on the same host as a container with the base WireGuard image to access the container’s WireGuard VPN, run it like this. You may want to do this if you use the image for a point in a point-to-point topology or point-to-site topology, or a spoke in a hub-and-spoke topology (alternatively, for those cases where you want to allow all processes on the host full access to the WireGuard VPN, see the Use for Host Network section above).
First, save the WireGuard configuration for the spoke or point in its own directory somewhere convenient on the host, like in the /srv/wg-point/conf
directory:
# /srv/wg-point/conf/wg0.conf
# local settings for Endpoint A
[Interface]
PrivateKey = AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEE=
Address = 10.0.0.1/32
ListenPort = 51821
# remote settings for Endpoint B
[Peer]
PublicKey = fE/wdxzl0klVp/IR8UcaoGUMjqaWi3jAd7KzHKFS6Ds=
Endpoint = 203.0.113.2:51822
AllowedIPs = 10.0.0.2/32
Note: In this example I’m using the WireGuard configuration for Endpoint A from the WireGuard Point to Point Configuration guide, but the exact same Docker configuration applies to Endpoint B from the same guide, as well as Endpoint A and B in the WireGuard Hub and Spoke Configuration guide, and Endpoint A in the WireGuard Point to Site Configuration guide.
Then run a container for this WireGuard interface with the following docker run
command:
sudo docker run \
--cap-add NET_ADMIN \
--name wg-point \
--publish 51821:51821/udp \
--rm \
--volume /srv/wg-point/conf:/etc/wireguard \
procustodibus/wireguard
And with this container running (to which we’ve given the arbitrary name “wg-point”), use the --network container:wg-point
flag to run each sibling container that you want to be able to access the WireGuard VPN, like this:
sudo docker run \
--interactive --tty \
--name example-sibling \
--network container:wg-point \
--rm \
alpine \
sh
The above “example-sibling” container will start up an interactive shell in a blank Alpine Linux container. If you have an HTTP server running on Endpoint B (10.0.0.2) in the WireGuard VPN (like we do in the scenario for the WireGuard Point to Point Configuration guide), you’ll be able to access it from this “example-sibling” container using cURL like the following:
/ # apk add curl
fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
...
(5/5) Installing curl (7.79.1-r0)
...
OK: 8 MiB in 19 packages
/ # curl 10.0.0.2
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
...
Alternately, you can run the WireGuard container together with its siblings using the docker-compose
command if you place the following docker-compose.yml
file in the directory above the WireGuard configuration file:
# /srv/wg-point/docker-compose.yml
version: '3'
services:
  wireguard:
    image: procustodibus/wireguard
    cap_add:
      - NET_ADMIN
    ports:
      - 51821:51821/udp
    volumes:
      - ./conf:/etc/wireguard
  example-sibling:
    command: sh -c 'apk add curl; while true; do curl -I 10.0.0.2; sleep 10; done'
    image: alpine
    network_mode: 'service:wireguard'
Then start up the containers by running sudo docker-compose up
from the same directory as the docker-compose.yml
file.
To manage this WireGuard interface with Pro Custodibus, simply replace the procustodibus/wireguard
image with the procustodibus/agent
image; and after adding a host in the Pro Custodibus UI for the container, download the procustodibus.conf
and procustodibus-setup.conf
files for the host and place them in the /srv/wg-point/conf
directory.
Use for Inbound Port Forwarding
Sometimes you may want to set up a point in a point-to-point topology, or a spoke in a hub-and-spoke topology, that forwards a specific publicly-exposed port on the point or spoke to another peer connected to your WireGuard VPN. In this scenario, the point or spoke effectively serves as a public proxy for the peer.
For example, we might have a point-to-point VPN between Endpoint A and Endpoint B, similar to the one outlined by the WireGuard Point to Point Configuration guide, but where instead of Endpoint A being an end-user workstation, it’s actually a server in some datacenter with a publicly-exposed TCP port 80. Endpoint B is a webserver in a different datacenter, but with no public ports exposed, except for its WireGuard port (UDP port 51822 in this example).
To use the base WireGuard image on Endpoint A to forward HTTP traffic sent to Endpoint A’s publicly-exposed TCP port 80 over WireGuard to Endpoint B, do this: First, save the WireGuard configuration for Endpoint A in its own directory somewhere convenient on the host, like in the /srv/wg-ifw/conf
directory. Within Endpoint A’s WireGuard config, use some PreUp
settings to configure port forwarding with iptables:
# /srv/wg-ifw/conf/wg0.conf
# local settings for Endpoint A
[Interface]
PrivateKey = AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEE=
Address = 10.0.0.1/32
ListenPort = 51821
# port forwarding
PreUp = iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.2
PreUp = iptables -t nat -A POSTROUTING -p tcp --dport 80 -j MASQUERADE
# remote settings for Endpoint B
[Peer]
PublicKey = fE/wdxzl0klVp/IR8UcaoGUMjqaWi3jAd7KzHKFS6Ds=
Endpoint = 203.0.113.2:51822
AllowedIPs = 10.0.0.2/32
The first PreUp
command above will forward any packets that the container on Endpoint A receives at TCP port 80 on to Endpoint B (altering the destination IP address of these packets from Endpoint A’s own public IP address to Endpoint B’s private WireGuard IP address). The second PreUp
command will “masquerade” those forwarded packets to Endpoint B as if they had originated from Endpoint A itself (altering the source IP address of the packets to Endpoint A’s private WireGuard IP address), so that Endpoint B will send responses to them back through Endpoint A.
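The PreUp rules persist only as long as the container’s network namespace does, so cleanup isn’t strictly necessary; but if you prefer the interface to remove them when it goes down, you could add matching PostDown rules to the [Interface] section, for example:
# optional cleanup rules mirroring the PreUp rules above
PostDown = iptables -t nat -D PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.2
PostDown = iptables -t nat -D POSTROUTING -p tcp --dport 80 -j MASQUERADE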
Then run a container for this WireGuard interface with the following docker run
command:
sudo docker run \
--cap-add NET_ADMIN \
--name wg-ifw \
--publish 80:80 \
--publish 51821:51821/udp \
--rm \
--volume /srv/wg-ifw/conf:/etc/wireguard \
procustodibus/wireguard
Alternately, you can place the following docker-compose.yml
file in the directory above the WireGuard configuration file:
# /srv/wg-ifw/docker-compose.yml
version: '3'
services:
  wireguard:
    image: procustodibus/wireguard
    cap_add:
      - NET_ADMIN
    ports:
      - 80:80
      - 51821:51821/udp
    volumes:
      - ./conf:/etc/wireguard
And then start up the container by running sudo docker-compose up
from the same directory as the docker-compose.yml
file.
To manage this WireGuard interface with Pro Custodibus, simply replace the procustodibus/wireguard
image with the procustodibus/agent
image; and after adding a host in the Pro Custodibus UI for the container, download the procustodibus.conf
and procustodibus-setup.conf
files for the host and place them in the /srv/wg-ifw/conf
directory.
Use for Outbound Port Forwarding
Sometimes you may want to set up a point in a point-to-point topology, or a spoke in a hub-and-spoke topology, that forwards traffic from within your WireGuard VPN to a particular external service. In this scenario, the point or spoke effectively serves as a private proxy to the external public service.
For example, we might have a point-to-point VPN between Endpoint A and Endpoint B, similar to the one outlined by the WireGuard Point to Point Configuration guide, but where instead of Endpoint B being a webserver itself, it merely forwards traffic sent to it on TCP port 80 to some other external webserver.
To use the base WireGuard image on Endpoint B to forward HTTP traffic sent to it from Endpoint A on to some other server (say to one at IP address 192.0.2.3), do this: First, save the WireGuard configuration for Endpoint B in its own directory somewhere convenient on the host, like in the /srv/wg-ofw/conf
directory. Within Endpoint B’s WireGuard config, use some PreUp
settings to configure port forwarding with iptables:
# /srv/wg-ofw/conf/wg0.conf
# local settings for Endpoint B
[Interface]
PrivateKey = ABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBFA=
Address = 10.0.0.2/32
ListenPort = 51822
# port forwarding
PreUp = iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.0.2.3
PreUp = iptables -t nat -A POSTROUTING -p tcp --dport 80 -j MASQUERADE
# remote settings for Endpoint A
[Peer]
PublicKey = /TOE4TKtAqVsePRVR+5AA43HkAK5DSntkOCO7nYq5xU=
AllowedIPs = 10.0.0.1/32
The first PreUp
command above will forward any packets that the container on Endpoint B receives at TCP port 80 on to 192.0.2.3 (altering the destination IP address of the packets from Endpoint B’s own WireGuard IP address of 10.0.0.2 to 192.0.2.3). The second PreUp
command will “masquerade” those forwarded packets to the public network as if they had originated from Endpoint B itself (altering the source IP address of the packets to Endpoint B’s publicly-visible IP address, 203.0.113.2), so that the external server will send responses back through Endpoint B.
Then run a container for this WireGuard interface with the following docker run
command:
sudo docker run \
--cap-add NET_ADMIN \
--name wg-ofw \
--publish 51822:51822/udp \
--rm \
--volume /srv/wg-ofw/conf:/etc/wireguard \
procustodibus/wireguard
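With the container running on Endpoint B, you can verify the forward from Endpoint A; a request to Endpoint B’s WireGuard address should be answered by the external webserver at 192.0.2.3:
# run on Endpoint A; the response headers come from the server at 192.0.2.3
curl -I http://10.0.0.2/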
Alternately, you can place the following docker-compose.yml
file in the directory above the WireGuard configuration file:
# /srv/wg-ofw/docker-compose.yml
version: '3'
services:
  wireguard:
    image: procustodibus/wireguard
    cap_add:
      - NET_ADMIN
    ports:
      - 51822:51822/udp
    volumes:
      - ./conf:/etc/wireguard
And then start up the container by running sudo docker-compose up
from the same directory as the docker-compose.yml
file.
To manage this WireGuard interface with Pro Custodibus, simply replace the procustodibus/wireguard
image with the procustodibus/agent
image; and after adding a host in the Pro Custodibus UI for the container, download the procustodibus.conf
and procustodibus-setup.conf
files for the host and place them in the /srv/wg-ofw/conf
directory.
Troubleshooting
Sysctl Errors
If you see a sysctl: error setting key
when the WireGuard container starts up, like the following:
sysctl: error setting key 'net.ipv4.conf.all.src_valid_mark': Read-only file system
You can usually fix it by setting the offending sysctl parameter via the --sysctl
flag in your docker run
command, like the following:
sudo docker run \
--cap-add NET_ADMIN \
--name wg-point \
--publish 51821:51821/udp \
--rm \
--sysctl net.ipv4.conf.all.src_valid_mark=1 \
--volume /srv/wg-point/conf:/etc/wireguard \
procustodibus/wireguard
Or if you’re using the docker-compose
command, set the sysctl parameter via the sysctls
key in your docker-compose.yml
file:
# /srv/wg-point/docker-compose.yml
version: '3'
services:
  wireguard:
    image: procustodibus/wireguard
    cap_add:
      - NET_ADMIN
    ports:
      - 51821:51821/udp
    sysctls:
      net.ipv4.conf.all.src_valid_mark: 1
    volumes:
      - ./conf:/etc/wireguard
However, if you’re also using host networking (ie using the --network host
flag with docker run
, or the network_mode: host
setting in your docker-compose.yml
file), you’ll have to instead set the sysctl parameter on the host itself. You can do that temporarily with the following command:
sudo sysctl -w net.ipv4.conf.all.src_valid_mark=1
But you’ll probably also want to set this permanently on the host. The best way to do that is usually by setting it in a file in your /etc/sysctl.d
directory (for example, /etc/sysctl.d/local.conf
):
# /etc/sysctl.d/local.conf
net.ipv4.conf.all.src_valid_mark=1
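Files in /etc/sysctl.d are applied automatically at boot; to apply the new setting immediately without rebooting, you can reload them:
# re-apply all sysctl settings, including those from /etc/sysctl.d
sudo sysctl --system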
Not Supported Error
If you see a not supported
error when the WireGuard container starts up, like the following:
[#] ip link add wg0 type wireguard
RTNETLINK answers: Not supported
Unable to access interface: Protocol not supported
It probably means that the host does not have the WireGuard kernel module installed. The WireGuard kernel module is part of all Linux kernels of version 5.6 and newer.
If the host is running an older version of the Linux kernel, you can check to see if your distribution provides a WireGuard kernel module that you can install from the package manager; but often you will have to compile the WireGuard kernel module from source.
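A quick way to check whether the module is available is to try loading it on the host; if this fails, install or build the module for your distribution first:
# succeeds silently if the wireguard module is built in or can be loaded
sudo modprobe wireguard && echo "wireguard module available"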
SELinux Volume Access
If the WireGuard container starts up with no errors, but logs no WireGuard-related messages (such as a line like [#] ip link add wg0 type wireguard
), and the host has SELinux enabled, you may need to re-label the WireGuard configuration directory with an SELinux label that allows the container to access it. To do this automatically, add the :Z
option to the --volume
flag of your docker run
command:
sudo docker run \
--cap-add NET_ADMIN \
--name wg-point \
--publish 51821:51821/udp \
--rm \
--volume /srv/wg-point/conf:/etc/wireguard:Z \
procustodibus/wireguard
Or if you’re using the docker-compose
command, add the :Z
option to the volumes
entry in your docker-compose.yml
file:
# /srv/wg-point/docker-compose.yml
version: '3'
services:
  wireguard:
    image: procustodibus/wireguard
    cap_add:
      - NET_ADMIN
    ports:
      - 51821:51821/udp
    volumes:
      - ./conf:/etc/wireguard:Z
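If you’d rather not have Docker re-label the directory on every run, you can label it once yourself; the container_file_t type shown here is the label used by the common container-selinux policy, so adjust it if your system uses a different policy:
# label the config directory so containers may read it (container-selinux policy assumed)
sudo chcon -R -t container_file_t /srv/wg-point/conf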
Shell Into the WireGuard Container
If you launched the WireGuard container with a docker run
command like sudo docker run … --name wg-point …
, you can shell into the same container with the following command (where wg-point
is the name you gave to the container):
sudo docker exec -it wg-point sh
If you launched the WireGuard container with a docker-compose
command, you can shell into the same container with the following command from the same directory you launched the container (assuming you named the container’s service wireguard
in your docker-compose.yml
file):
sudo docker-compose exec wireguard sh
Once in, you can view the status of its WireGuard interfaces with the wg
command:
/ # wg
interface: wg0
public key: /TOE4TKtAqVsePRVR+5AA43HkAK5DSntkOCO7nYq5xU=
private key: (hidden)
listening port: 51821
peer: jUd41n3XYa3yXBzyBvWqlLhYgRef5RiBD7jwo70U+Rw=
endpoint: 203.0.113.2:51822
allowed ips: 10.0.0.2/32
latest handshake: 7 seconds ago
transfer: 5.93 KiB received, 21.76 KiB sent
And you can view its internal routing table with the ip route
command:
/ # ip route
default via 172.18.0.1 dev eth0
10.0.0.2/32 dev wg0 scope link
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.2
And view its iptables rules with the iptables-save
command:
/ # iptables-save
# Generated by iptables-save v1.8.7 on Fri Nov 5 02:53:27 2021
*filter
:INPUT ACCEPT [66:7920]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [461:35316]
COMMIT
# Completed on Fri Nov 5 02:53:27 2021
# Generated by iptables-save v1.8.7 on Fri Nov 5 02:53:27 2021
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [1:176]
:POSTROUTING ACCEPT [1:176]
:DOCKER_OUTPUT - [0:0]
:DOCKER_POSTROUTING - [0:0]
-A OUTPUT -d 127.0.0.11/32 -j DOCKER_OUTPUT
-A POSTROUTING -d 127.0.0.11/32 -j DOCKER_POSTROUTING
-A DOCKER_OUTPUT -d 127.0.0.11/32 -p tcp -m tcp --dport 53 -j DNAT --to-destination 127.0.0.11:46045
-A DOCKER_OUTPUT -d 127.0.0.11/32 -p udp -m udp --dport 53 -j DNAT --to-destination 127.0.0.11:40609
-A DOCKER_POSTROUTING -s 127.0.0.11/32 -p tcp -m tcp --sport 46045 -j SNAT --to-source :53
-A DOCKER_POSTROUTING -s 127.0.0.11/32 -p udp -m udp --sport 40609 -j SNAT --to-source :53
COMMIT
# Completed on Fri Nov 5 02:53:27 2021
And attempt ICMP pings with the following command:
/ # ping -nc1 10.0.0.2
PING 10.0.0.2 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=2.444 ms
--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 2.444/2.444/2.444 ms
If you are using the Pro Custodibus Agent image, you can check on the status of the agent service with the following command:
/ # rc-status
Runlevel: default
wg-quick [ started ]
procustodibus-agent [ started ]
Dynamic Runlevel: hotplugged
Dynamic Runlevel: needed/wanted
Dynamic Runlevel: manual
And you can verify that the agent is set up correctly and can access the Pro Custodibus API with the following command:
/ # procustodibus-agent --test
... 1 wireguard interfaces found ...
... 192.0.2.123 is pro custodibus ip address ...
... healthy pro custodibus api ...
... can access host record on api for My Test Host ...
All systems go :)
Run Diagnostic Tools in the WireGuard Namespace
You can also use the nsenter tool on the host to run any command-line diagnostic tools you have installed on the host inside the WireGuard container’s network namespace.
First identify the PID (process ID) of the container with the following command, where wg-point
is the container’s name:
$ sudo docker container inspect wg-point -f '{{.State.Pid}}'
12345
Then use nsenter with that PID to run tools installed on the host, like cURL:
$ sudo nsenter -t 12345 -n curl 10.0.0.2
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html>
...
Or tcpdump:
$ sudo nsenter -t 12345 -n tcpdump -niany tcp port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
03:49:52.374483 IP 198.51.100.1.48764 > 172.17.0.2.80: Flags [S], seq 836029496, win 62727, options [mss 8961,sackOK,TS val 427845568 ecr 0,nop,wscale 6], length 0
03:49:52.374508 IP 198.51.100.1.48764 > 10.0.0.2.80: Flags [S], seq 836029496, win 62727, options [mss 8961,sackOK,TS val 427845568 ecr 0,nop,wscale 6], length 0
...
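You can also combine the two steps into one command; for example, to run wg show from the host (assuming wireguard-tools is installed on the host) inside the container’s network namespace:
# look up the container's PID and enter just its network namespace in one step
sudo nsenter -t "$(sudo docker container inspect wg-point -f '{{.State.Pid}}')" -n wg show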