I am not sure I understand why Headscale was excluded. As far as I know, it is made by people not related to Tailscale.
It would be like complaining Vaultwarden is bad because the Bitwarden project is not fully open source, even though Vaultwarden is fully open source and implements most of the features.
And Headscale kind of ticks all the other boxes mentioned, except "not headscale", because:
* p2p mesh network - it is a mesh network. And even when the mesh is blocked, you can use multiple relay servers (DERP) which relay traffic from the closest location. You can also host your own DERP servers.
* Open source and selfhosted - check
* Not Wireguard (signature-based blocking) - in cases where WireGuard is blocked, the DERP relay servers run over HTTPS and are usually not blocked based on signatures. For example, I use it with the Traefik proxy in TCP mode so I can run DERP and other HTTP services on the same port 443, and it works great. So - check?
* Packaged in nixpkgs - check
On top of that, if you add the Headplane admin UI you get nice graphical management, very similar to Tailscale's.
I did it mostly for religious reasons, but also because everyone writes about Headscale and I didn't want to cover things that everyone already knows.
Religious or political ones? :) I did not know, but a quick search online suggests that Tailscale blocked Russian IPs at some point in the past.
I guess it doesn't really matter, but it would have been nice to give the reader some transparency that the exclusion was not due to actual technical limitations.
Religious. Open source versions of products that are created/supported by companies providing proprietary versions of the same product have caused me so many problems in the past that I no longer approve of this practice.
It should be the exact same copy, or the situation turns into "try the demo version, buy the full product". At least from my point of view.
Once you have worked with the large cloud providers (AWS, Azure, GCP), you start to realize that solutions built on those platforms frequently differ a lot from what can realistically be self-hosted, especially when it comes to large SaaS platforms. They probably could not provide the same thing even if they wanted to. Or it would only be viable for large enterprises that are direct competitors, not for a homelab.
I think it makes no sense for them to put much focus on developing a separate small open source version of their server. So it is good that they actually support its development.
How did you manage to put the derper behind Traefik? It seems not to be supported, at least officially:
> The DERP protocol does a protocol switch inside TLS from HTTP to a custom bidirectional binary protocol. It is thus incompatible with many HTTP proxies. Do not put derper behind another HTTP proxy.
I learned this from hosting SmallStep CA behind Traefik. You can use TCP mode for specific subdomains and match them using HostSNI; as long as the client sends SNI, it works. So you can run both TCP and HTTP routers on the same port.
https://github.com/Janhouse/tailscaled-derper/blob/main/dock...
Since we use TLS passthrough and Traefik just proxies TCP, you have to pass certificates to derper, so you either use the Traefik certificate extractor or some other tool to get them.
And initially I thought that I would have to integrate libproxyproto into derper in order to handle client IP addresses correctly behind Traefik, but it looks like it doesn't really need it.
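For reference, the Traefik side looks roughly like this - a minimal sketch of the dynamic config, where derp.example.com and the derper:443 backend address are placeholders:

    tcp:
      routers:
        derp:
          # match on the TLS SNI the client sends, without terminating TLS
          rule: "HostSNI(`derp.example.com`)"
          entryPoints:
            - websecure
          service: derp
          tls:
            passthrough: true   # hand the raw TLS stream to derper untouched
      services:
        derp:
          loadBalancer:
            servers:
              - address: "derper:443"

HTTP routers on the same entrypoint are unaffected, since Traefik only peels off connections whose SNI matches a TCP rule.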
> As far as I know, it is made by people not related to Tailscale.
I thought the Headscale dev had been hired by Tailscale, didn’t he?
Can’t find references right now but I have a distinct memory of reading about it.
I think you are correct (https://news.ycombinator.com/item?id=33990413). From the Headscale commit log, it seems it was https://github.com/kradalby, not the owner of the Headscale repo. And he is making a lot of commits.
So it looks like Tailscale is paying one of the developers to work on Headscale as part of his job.
This is one of the best ways a company can support an open source project though.
It's also not hindered. It works completely fine; it just doesn't have the users (it is the self-hosted competitor, after all).
It uses the same plain WireGuard, which doesn't pass across borders under the rules of this experiment.
FWIW, Tinc has been my workhorse between my various cloud providers, some on-prem remote access for a few offices I consult with, and my home + workstations.
I keep telling myself I should switch it to a wireguard mesh, but the configuration of tinc + the "right defaults" make it pretty neat. It's fun to watch as you roll out a configuration to one node in the mesh, and ping times drop suddenly once it can make a direct connection with the optimal route.
I keep my provisioning script + the public keys in git, so deploying it to a new machine is a git pull, generate key, push public key, and pull on the other nodes. I have 17+ hosts on it; not using it for very high bandwidth, but I couldn't do what I do without it.
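The flow is roughly this - a sketch of the idea rather than my exact script; "mesh" is a placeholder netname, the commands are tinc 1.0-style, and the systemd unit name varies by distro:

    # new node: fetch the shared config repo, generate a keypair
    git clone git@example.com:tinc-mesh.git /etc/tinc/mesh
    tincd -n mesh -K4096   # writes rsa_key.priv and appends the pubkey to our host file

    # publish our public host file to the mesh
    cd /etc/tinc/mesh
    git add hosts/newnode && git commit -m "add newnode" && git push

    # on every existing node: pull and reload
    git -C /etc/tinc/mesh pull && systemctl reload tinc@mesh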
Nailed it. Long-time Tinc user here as well for all my personal stuff. Besides the fact that onboarding individual machines is a garbage process, it's my second best "set it and forget it" thing; the first being Syncthing, which came later (and additionally, the two work very nicely together through weird edge cases like work VPNs and such).
Tinc sounds pretty awesome, but based on the repo activity and a post from the author[1], it looks to be unmaintained.
[1]: https://github.com/gsliepen/tinc/issues/443#issuecomment-184...
That is, sadly, the biggest worry about it. Once I collect a few round tuits, I want to see if I could build a tool that builds up the same mesh + routing, perhaps confined to a Linux namespace so as to keep it contained, and try to reproduce the ease of configuration on top of much better maintained core Linux tools.
But the key has to be -- the configuration has to be just as simple as tinc to be effective. Like, almost just parse the configuration files and build it up with WG tunnels.
https://github.com/m13253/VxWireguard-Generator <-- this is what I use for a more serious "internal business" use case, so I know WireGuard is what sits in the sensitive security/encryption loop. It uses WireGuard tunnels to make a broadcast/mesh Layer 2 network with VXLAN connections, and you then run OSPF/Babel/your-favorite-routing-protocol over that.
Sometimes you don't need a P2P VPN, but rather a P2P stream manager (e.g. BitTorrent).
A somewhat nice solution for that is Iroh (QUIC P2P w/ hole punching): https://www.iroh.computer
They also provide a solution to discoverability: https://www.iroh.computer/docs/concepts/discovery
Which boils down to storing ECC signed arbitrary data on the mainline DHT.
Two showcase Iroh utilities that are actually useful in practice:
https://github.com/n0-computer/dumbpipe
https://github.com/n0-computer/sendme
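dumbpipe is basically netcat over Iroh; roughly, per its README (the ticket is an opaque string printed by the listener):

    # side A: listen, prints a ticket encoding node id + addresses
    dumbpipe listen

    # side B: connect with that ticket; stdin/stdout of both sides are now joined
    dumbpipe connect <ticket-from-side-A>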
Hey, amazing post. I have a really interesting project I want to share, one I really obsess about, called Piping Server.
https://github.com/nwtgck/piping-server
I keep thinking of ways I can use this; it seems like amazing, almost uncensorable-ish tech for connecting two PCs. Sometimes my brain just thinks "pipes (piping server) just for email". It's a bit of an obsession...
Something like a VPN could theoretically be created where the piping server, well, "pipes" traffic in an encrypted manner over the internet.
I would love to create something like this just for the funzies, but what I am more interested in is the transport layer.
I want something that can be independent of UDP or whatever, where the only thing I have to worry about is how to get the packets to the other PC; then I could send them over a piping server, or over Matrix or Signal if need be.
Are there any FOSS projects that can help me hook into things in a manner similar to what I am describing?
I want an implementation-independent-ish transport layer so that I can experiment with things I can just pipe, to be really honest.
I also want more people to look into this. I sometimes use piping-server to transport files between Podman containers, even though it's a bit slow, just to try it out. Honestly, just the fun of installing curl and being ready to go makes transporting files so much easier out of the box. I want to experiment more with it; it's been an on-and-off obsession for almost a year, thinking about piping servers and how elegant they are. I once used them, of sorts, to break an Intel NAT, though since then we've gotten better options for breaking NATs without root and without emulation. Maybe I'll write a blog post about it someday, but I am lazy.
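For anyone who hasn't seen it, the whole trick is two curl invocations against the same path (per the piping-server README; ppng.io is the public demo instance and the path is arbitrary):

    # receiver (order doesn't matter, whoever arrives first blocks and waits):
    curl https://ppng.io/my-unique-path > received.bin

    # sender:
    curl -T file.bin https://ppng.io/my-unique-path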
> And Amnezia VPN has made their own fork of Wireguard, specifically for breaking through government censorship. But the main problem with obfuscation is the reduction of effective packet MTU
The "obfuscation" in Amnezia's fork does not shrink the available MTU (important for QUIC as it requires a minimum MTU of 1280 while WireGuard itself needs +80 bytes or so for route encapsulation). Amnezia's fork modifies the 4 WireGuard header values (which must be pre-agreed between peers) & occassionally appends (to handshake packets) or sends randomly generated "junk" data.
Yes, this is just bad writing on my part. I wanted to say something about Amnezia but didn't find a good place for it.
There used to be many P2P VPN (full mesh) solutions that have since disappeared into obscurity.
Social VPN, Remobo, NeoRouter, GBridge, Wippien, PeerVPN. Remember any of these?
Just checked — none of the domains are working.
Yep, lots of VPN options: https://gist.github.com/mrbluecoat/e725474483dbd81b6195bd3b9...
I'll need to add EasyTier - https://github.com/EasyTier/EasyTier
A yggdrasil private mesh [0] might be worth evaluating too.
- [0] https://www.complete.org/using-yggdrasil-as-an-automatic-mes...
Yggdrasil is great, but AFAIK it doesn't have built-in NAT traversal, so you need at least some publicly reachable nodes.
Yes, from what I understand this is true.
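In practice that just means at least one node opens a listener in its yggdrasil.conf and the NATed nodes dial out to it; a rough sketch with placeholder addresses:

    # publicly reachable node:
    Listen: ["tls://0.0.0.0:12345"]

    # NATed nodes peer outward to it:
    Peers: ["tls://public.example.com:12345"]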
Regarding peer-to-peer VPNs:
I want to access home servers and play LAN video games.
I was testing zrok [1] until they went paid, then moved on to ongoing experiments with Lanemu [2] (a BitTorrent-based P2P VPN) and Anywhere Lan (AWL) [3].
So far, the best is AWL - it actually works, peer discovery is fast, and it gives you mDNS-style domains for connected machines; using it is very similar to Syncthing. I wish the peer discovery in Lanemu worked better, as it works all the way back to WinXP. I made a custom build of AWL that works on Win7 (https://github.com/anywherelan/awl/issues/174)
[1]: https://zrok.io/ [2]: https://gitlab.com/Monsterovich/lanemu [3]: https://github.com/anywherelan/awl
I am not sure what the author found so pompous about Nebula. It's unfortunate, because it seems to have prevented them from reading the documentation. The SSH interface is a debug tool, not a configuration tool: https://nebula.defined.net/docs/guides/debug-ssh-commands/
But I need a debug tool. How can I debug without proper tools?
> Because we wrap one packet in another packet, and this overhead takes up space. And that's not good.
This is totally fine, because you have to set the interface MTU anyway. WireGuard already runs at 1420 by default and there is no problem with SMB over it, even though all the other network participants use the default 1500. I get full 100 Mbps SMB transfers over WireGuard, and that ceiling is only because my UTP cable has just two pairs.
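The arithmetic: 1500 on the wire minus up to ~80 bytes of WireGuard encapsulation (worst case, IPv6 outer packet) gives the 1420 that wg-quick defaults to. You can also pin it explicitly; an illustrative config with placeholder keys and addresses:

    [Interface]
    Address = 10.0.0.2/24
    PrivateKey = <private-key>
    MTU = 1420                 # 1500 - 80 (worst-case WireGuard overhead)

    [Peer]
    PublicKey = <peer-public-key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 10.0.0.0/24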
> And also Nebula's interface is absolutely shit. Instead of a normal CLI, you need to configure an internal sshd and connect via SSH to localhost. Maybe it's more secure, but it's utterly disgusting.
This seems to be a strong misunderstanding? The SSH interface is for debugging only. You can disable it, and configuration is handled solely by the daemon configuration file.
I operate a small (few dozen hosts) network on Nebula with mostly NixOS hosts, so I have some applicable experience. Nebula was primarily chosen because it allows me to, among other things, assign fixed prefixes to hosts and have a full declarative config.
You don't configure a host via a CLI; instead you provide it a cert signed for its private key, plus the CA cert, the private key, and the lighthouses, and that's it. The daemon listens on the IPs from the cert, and the lighthouses offer a public exchange where peers advertise their IPs (and the associated endpoints for p2p).
The CA approach is also an important part here, as all your peers effectively have the CA cert in their config file and use it to verify other peers. The CA signs a cert for each host that contains the IP prefixes that the peer may handle packets for. The only CLI you regularly use is the one for creating keys and signing certs.
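That CLI is nebula-cert, and the whole workflow is basically two commands (names, IPs, and groups below are placeholders, following Nebula's quick start):

    # once: create the CA that every peer will trust
    nebula-cert ca -name "my-mesh"

    # per host: sign a cert binding the host to its mesh IP (and optional groups)
    nebula-cert sign -name "host1" -ip "192.168.100.1/24" -groups "servers"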
I'm wondering if a VPN in layer 2 mode would work better for such cases.
- Make all participants keep a connection with each other, each connection equivalent to an ethernet cable, and each participant a network switch.
- Just like real switches, use spanning-tree to decide which connection is used for data and which is kept as redundancy.
- Bring your own router/dhcp/etc
I think that would be quite robust.
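A rough sketch of what each participant would run with stock Linux tools, assuming the tunnels already exist as tap interfaces (names are hypothetical):

    # one bridge per participant, acting as the "switch", with STP enabled
    ip link add br-mesh type bridge stp_state 1

    # enslave each peer tunnel as if it were a patch cable
    ip link set tap-peer1 master br-mesh
    ip link set tap-peer2 master br-mesh

    ip link set br-mesh up

STP then blocks the redundant links, exactly like physical switches would, and re-converges when a link dies.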
This is interesting! I've been using mosh, Xpra, WireGuard, ZeroTier, and Git (which I know is a stretch), but mostly plain old ssh -L.
tinc is probably the best alternative on a cheap OpenVZ VPS with a tun/tap device, if you don't want a userspace WireGuard one...
(said the lazy guy that checks if he only needs TCP and, in that case, uses sshuttle via SSH)
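For the curious, that is a one-liner; sshuttle routes a whole subnet over a plain SSH connection, TCP only (host and subnet are placeholders):

    # "poor man's VPN": all traffic to 10.0.0.0/8 goes through the SSH host
    sshuttle -r user@gateway.example.com 10.0.0.0/8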
Another one to look at is vpncloud: https://github.com/dswd/vpncloud. It's written in Rust. I've used it in production for several years now.
It seems abandoned.
All I want is a daemon that exchanges public keys and IPs between the main nodes.