I would really, really like to have one device on my tailnet act as the exit node for all other devices on the tailnet, with that device also connected to my VPN provider. However, most VPN clients make this really difficult. Is there any way to do it? I’ve read it’s possible with split tunnelling, but ProtonVPN (which I use) doesn’t support that. I just installed Alpine Linux on my Raspberry Pi 4B and would like to use it as my exit node. Does anyone have tips on how this could be done?
I’ve been trying to accomplish the exact same thing. In the same vein, I’ve also been trying to set up a Tailscale exit node with mitmproxy so that I can inspect mobile app traffic without having to fiddle with proxy configs on my phone each time. On that topic I found this: https://www.aapelivuorinen.com/blog/2022/09/12/transparent-mitmproxy-tailscale-vm/
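In case it’s useful, this is the rough shape of the transparent-proxy piece as I understand it, not the exact steps from that post: a minimal sketch assuming mitmproxy’s standard transparent mode, that client traffic arrives on the tailscale0 interface, and that mitmproxy listens on its default port 8080 (interface name and port are assumptions to adjust for your setup).

  # Allow the exit node to forward packets from tailnet clients
  sysctl -w net.ipv4.ip_forward=1

  # Redirect HTTP/HTTPS arriving from the tailnet into mitmproxy
  iptables -t nat -A PREROUTING -i tailscale0 -p tcp --dport 80  -j REDIRECT --to-port 8080
  iptables -t nat -A PREROUTING -i tailscale0 -p tcp --dport 443 -j REDIRECT --to-port 8080

  # Run mitmproxy in transparent mode; --showhost displays the Host header in the UI
  mitmproxy --mode transparent --showhost

You’d still need mitmproxy’s CA certificate installed on the phone, and the phone set to use this node as its exit node.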
Let me know how you get on as I’m super keen on having both a VPN and mitmproxy setup as exit nodes.
I have solved this problem! The trick is to use two Docker containers:
- Gluetun (https://github.com/qdm12/gluetun): set this up to connect to your VPN.
- Tailscale (https://tailscale.com/kb/1282/docker/): set this to use the Gluetun network.
Here is an example docker-compose.yml:
version: "3" services: gluetun: image: qmcgaw/gluetun container_name: gluetun # line above must be uncommented to allow external containers to connect. # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/connect-a-container-to-gluetun.md#external-container-to-gluetun restart: unless-stopped cap_add: - NET_ADMIN devices: - /dev/net/tun:/dev/net/tun volumes: - ./gluetun:/gluetun environment: - VPN_SERVICE_PROVIDER=airvpn - VPN_TYPE=wireguard - WIREGUARD_PRIVATE_KEY=xxx - WIREGUARD_PRESHARED_KEY=xxx - WIREGUARD_ADDRESSES=xxx - WIREGUARD_MTU=1320 - SERVER_COUNTRIES=United States # See https://github.com/qdm12/gluetun-wiki/tree/main/setup#setup # Timezone for accurate log times - TZ=America/New_York # Server list updater # See https://github.com/qdm12/gluetun-wiki/blob/main/setup/servers.md#update-the-vpn-servers-list - UPDATER_PERIOD=24h tailscale: container_name: tailscale cap_add: - NET_ADMIN - NET_RAW volumes: - ./tailscale/var/lib:/var/lib - ./tailscale/state:/state - /dev/net/tun:/dev/net/tun network_mode: "service:gluetun" restart: unless-stopped environment: - TS_HOSTNAME=airvpn-exit-node - TS_AUTHKEY=xxxxxxxx - TS_EXTRA_ARGS=--login-server=https://example.com --advertise-exit-node - TS_NO_LOGS_NO_SUPPORT=true - TS_STATE_DIR=/state image: tailscale/tailscale
Wow! You know what, I was just thinking about using Gluetun for this before I went to bed last night, and then I wake up to this gem of a message!! 😅 Well done sir, I’ll be cooking this up ASAP!
For anyone trying this, make sure you do not have “- TS_USERSPACE=false” left in your YAML from previous experimentation. After removing it, this works for me too.
The documentation says to add sysctl entries, which is possible in Docker Compose like so:
  tailscale:
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv6.conf.all.forwarding=1
But it does not seem to make a difference for me. Does anyone know why these would not be required in this specific setup?