I prefer a remotely accessible Linux box for development and any work that requires a terminal, and I spend most of my time on it. I recently changed jobs, but my preference stays the same. At my previous employer, I ran a physical Linux workstation in the office and accessed it from a Mac laptop and a Windows desktop over the corporate network.
At my new employer, getting a Linux box was easy, but accessing it remotely posed a challenge. First, my new office has no external access capability. Second, I picked a Windows workstation for my desktop environment and placed it in my home; my Linux box now runs in Hyper-V instead of on physical hardware. This choice keeps expenses minimal (as a new employee) while satisfying my needs: a Windows desktop at my primary work location (home), a Mac laptop for remote work and the office, and a Linux box for development.
Fortunately, my employer was experiencing productivity issues accessing internal resources via AWS Client VPN. I recently introduced Twingate, which I love and which worked great at my previous company for accessing internal resources, including my Linux workstation. I can now do the same at my new employer by simply spinning up a Twingate connector on the Linux box. However, because a connector can technically send packets to arbitrary destinations, deploying one would give our Twingate admins free access to my home network. I didn't want to allow that for my admin colleagues, even though I trust them.
I achieved a restricted Twingate connector setup by combining netns, nftables, and systemd tricks. Read on for the deployment details.
Plan
The final goal is to restrict the Twingate connector's network exposure while still allowing it to reach the SSH daemon on my Linux box. As this post's title says, I used Linux network namespaces (netns) for isolation. Netns is a Linux kernel feature that creates isolated network environments: each namespace gets its own network configuration, separate from the host namespace. It is widely used by container runtimes nowadays.
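As a quick illustration of that isolation (a throwaway sketch, not part of the deployment): a freshly created network namespace contains nothing but a down loopback interface.

```shell
# A brand-new network namespace contains only an isolated loopback device.
# `unshare -r -n` enters a throwaway user+network namespace without root
# (assuming unprivileged user namespaces are enabled on your kernel).
unshare -r -n ip -o link show
# Prints a single line for "lo", in state DOWN.
```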
First, the Twingate connector works as a TCP/UDP proxy (rather than a traditional VPN tunnel) and needs access to the Internet (specifically, the Twingate control plane and clients) and to resources (the actual packet destinations). I like this simplicity of Twingate, as it removes the requirement for end-to-end L3 connectivity.
There are several methods to implement network restriction, such as systemd sandboxing directives or UID/GID-based netfilter rules. However, I used netns to isolate the connector completely. I chose this approach because my Linux box also runs a Twingate client to access other internal servers on our Twingate network, and I didn't want to run into issues from running the client and connector concurrently, such as DNS conflicts, though I believe it's possible.
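For comparison, the systemd-directive alternative I passed on would look roughly like this (a hypothetical sketch; the addresses are illustrative, not from my setup):

```ini
# Hypothetical drop-in: restrict a service with systemd's own
# network filtering instead of a netns (requires cgroup v2 + eBPF).
[Service]
IPAddressDeny=any
IPAddressAllow=localhost
IPAddressAllow=203.0.113.10/32
```

The UID/GID-based variant would instead match on the service's owner in an nftables output chain, e.g. a rule on `meta skuid` for the connector's user. Both keep the connector in the host namespace, which is exactly what I wanted to avoid.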
I am also a systemd enthusiast, so it was natural to utilize systemd capabilities as much as possible to keep things simple and manageable.
We need to create a netns, assign a network interface, configure IP, set up the firewall, then finally run the connector. Every step is implemented as a systemd unit.
1. Creating netns
Create a netns using systemd and bind-mount it to /var/run/netns for later use. PrivateMounts= must be explicitly set to no, as PrivateNetwork= implies it to be yes when unspecified.
# /etc/systemd/system/twingate-connector-netns.service
[Unit]
Description=Twingate Connector Netns
Wants=network-online.target
After=network-online.target
[Service]
PrivateNetwork=yes
PrivateMounts=no
ExecStartPre=/bin/bash -e -c 'mkdir -p /var/run/netns && touch /var/run/netns/twingate-connector && mount --bind /proc/self/ns/net /var/run/netns/twingate-connector'
ExecStart=/bin/bash -c 'echo twingate-netns; while true; do sleep 365d; done'
2. Assigning a network interface
Assign an interface via ip-link(8). For simplicity, I added an interface from Hyper-V that connects to the same L2 network as the primary interface. Alternatively, this could be a veth pair peered with the host netns, but I didn't want to set up NAT in the host netns beyond what I already maintain for Docker containers.
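For reference, the veth alternative I skipped would look roughly like this (a hand-run sketch with made-up interface names and an arbitrary example subnet; requires root and NAT in the host netns):

```shell
# Create a veth pair and move one end into the connector netns
ip link add veth-host type veth peer name veth-tg
ip link set veth-tg netns twingate-connector

# Address both ends (10.200.0.0/24 is an arbitrary example subnet)
ip addr add 10.200.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec twingate-connector ip addr add 10.200.0.2/24 dev veth-tg
ip netns exec twingate-connector ip link set veth-tg up
ip netns exec twingate-connector ip route add default via 10.200.0.1

# ...plus forwarding and masquerading in the host netns,
# which is exactly the extra moving part I wanted to avoid:
sysctl -w net.ipv4.ip_forward=1
nft add table ip tgnat
nft add chain ip tgnat postrouting '{ type nat hook postrouting priority srcnat; }'
nft add rule ip tgnat postrouting ip saddr 10.200.0.0/24 masquerade
```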
# /etc/systemd/system/twingate-connector-eth.service
[Unit]
Description=Twingate Connector Eth
After=twingate-connector-netns.service
Requires=twingate-connector-netns.service
[Service]
RemainAfterExit=yes
ExecStart=/usr/bin/ip link set name eth1 netns twingate-connector
3. Configure IP
Since we chose a logical network interface connected to the existing network, we need to configure its IP address. The connected network is just a bridge to my home network subnet, so the IP configuration comes via DHCP. Running a separate systemd-networkd instance in the netns was too complex, so I ended up installing and using dhcpcd instead.
Note: I used NetworkNamespacePath= instead of JoinsNamespaceOf=, as the latter didn't work in this context.
# /etc/systemd/system/twingate-connector-dhcp.service
[Unit]
Description=Twingate Connector DHCP
After=twingate-connector-netns.service twingate-connector-eth.service
Requires=twingate-connector-netns.service twingate-connector-eth.service
[Service]
NetworkNamespacePath=/var/run/netns/twingate-connector
ExecStart=/usr/bin/dhcpcd --nobackground -h %H-tg-connector --nohook test --nohook resolv.conf --nohook hostname --nohook timesyncd eth1
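Once these units are up, the lease can be checked from the host (a verification sketch using the netns and interface names defined above):

```shell
# Confirm the interface inside the netns obtained an address via DHCP
ip netns exec twingate-connector ip addr show eth1
# ...and that a default route was installed
ip netns exec twingate-connector ip route show
```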
4. Set up the firewall
This unit follows the stock unit (/usr/lib/systemd/system/nftables.service), but with different dependencies and the netns directive added.
# /etc/systemd/system/twingate-connector-nftables.service
[Unit]
Description=Twingate Connector Nftables
After=twingate-connector-netns.service twingate-connector-eth.service
Requires=twingate-connector-netns.service twingate-connector-eth.service
[Service]
NetworkNamespacePath=/var/run/netns/twingate-connector
Type=oneshot
ExecStart=/usr/bin/nft -f /etc/twingate/connector-nftables.conf
The nftables rules are as follows. Because the netns is isolated, the connector must reach sshd via the external private IP of the primary network interface. The ruleset allows packets to that primary IP while dropping other packets destined for my home network prefix.
# /etc/twingate/connector-nftables.conf
# vim: ft=nftables
table inet tgconnector {
  chain input {
    type filter hook input priority filter; policy drop;
    iif lo accept
    ct state established,related accept
    ip protocol icmp accept
    ip6 nexthdr icmpv6 accept
    ip protocol udp udp dport 68 accept
    drop
  }
  chain output {
    type filter hook output priority filter; policy drop;
    ct state established,related accept
    ip protocol udp udp dport 67 accept
    ip daddr $LINUX_BOX/32 accept
    ip daddr 10.0.0.0/8 drop
    ip6 daddr $HOME_IPV6_PREFIX/56 drop
    accept
  }
  chain forward {
    type filter hook forward priority filter; policy drop;
    drop
  }
}
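Before enabling the unit, the ruleset can be dry-run with nft's check flag, and the live rules inspected once the unit has started (a sketch using the paths and netns name from above):

```shell
# Syntax-check the ruleset without applying it (-c / --check)
nft -c -f /etc/twingate/connector-nftables.conf
# After the unit has run, inspect the live ruleset inside the netns
ip netns exec twingate-connector nft list ruleset
```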
5. Run twingate-connector in the netns
Our dedicated netns is now ready to run a connector. Create a drop-in configuration that requires the units we created earlier and runs the connector in the netns:
# /etc/systemd/system/twingate-connector.service.d/10-netns.conf
[Unit]
After=twingate-connector-netns.service twingate-connector-eth.service twingate-connector-dhcp.service twingate-connector-nftables.service
Wants=twingate-connector-dhcp.service
Requires=twingate-connector-netns.service twingate-connector-eth.service twingate-connector-nftables.service
[Service]
NetworkNamespacePath=/var/run/netns/twingate-connector
If we had multiple services running after the netns setup, we'd use a .target unit to aggregate the dependencies, but I omitted it this time to keep the number of files to a minimum.
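For reference, such an aggregating target would be a small unit like this (a hypothetical sketch; the unit name is made up and nothing in my deployment uses it):

```ini
# /etc/systemd/system/twingate-connector-netns-ready.target (hypothetical)
[Unit]
Description=Twingate Connector Netns Ready
Requires=twingate-connector-netns.service twingate-connector-eth.service twingate-connector-nftables.service
Wants=twingate-connector-dhcp.service
After=twingate-connector-netns.service twingate-connector-eth.service twingate-connector-dhcp.service twingate-connector-nftables.service
```

Consumers would then declare a single Requires=/After= on the target instead of listing every unit.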
6. Configure Twingate resources
Tips for configuring the Twingate network to use the isolated connector:
- The connector's TWINGATE_DNS might need to be configured: I pointed it at a public DNS resolver instead of my home one to avoid exposing my private DNS records. This might be overly paranoid; you can simply allow DNS in the nftables rules and use your default resolver.
- Use an IP address in the resource configuration and set the DNS name using an alias: Twingate recently introduced Resource Aliases, which let you add an alternative DNS name on the Twingate network; this means you can assign a DNS name to an IP address target. In my case, I didn't have an appropriate DNS record for our Twingate network, so I used an alias to assign a name.
Outro
It'd be better if we could restrict exposure via the connector's own options, but this worked well with a simple setup. Using netns keeps all the isolation configuration in one place, instead of mixing everything into the primary namespace, which would be less maintainable.