It's funny that layer 7 remains in the vernacular. Nobody talks about layer 6 proxies. Or occasionally somebody will mention a layer 3 proxy. But never layer 5.
For folks in the networking space, differentiating between L4 and L7 proxies is pretty important. And while you could call it an HTTP proxy in many circumstances, some proxies support other protocols, e.g. a MySQL proxy.
Yes, IMHO calling it a Layer 7 proxy is quite misleading. I was expecting something closer to an ALG.
Calling a reverse HTTP proxy a Layer 7 proxy is misleading? Why?
Since no one else has posted it, I will: https://docs.google.com/document/u/0/d/1iL0fYmMmariFoSvLd9U5...
The OSI Deprogrammer
We think in TCP/IP but use ISO layer names.
https://en.wikipedia.org/wiki/Internet_protocol_suite
Layers, P's… blimey, leave them all out of my PSTN connections and bring X.25 back!
To rectify this most grievous transgression, I now unveil a device of eternal ingenuity and enchanting craftsmanship, a veritable marvel, which shall restore order to the realm of networking with unparalleled precision and grace: «Whispering X.Gate», a X.25 API Gateway – https://pastebin.com/S11LRJNS
We used to at least think about it, but no one seems to be running DECnet, GOSIP or the rest any more.
Aren't SSH tunnels layer 6 proxies, in essence?
I don't have much to add other than to compliment the README. At least it shows some concern about documenting the higher level architecture... I get discouraged of contributing to open source due to the laziness of basically having to reverse engineer the code
Just like every single proxy written in Go, it just uses the standard library's httputil package with a shit ton of custom code on top of it.
Anyone who writes Go does not need any of this. And those who do not write Go can still write their own in no time, because it is literally a couple of lines of code. No harder than running a webserver in Go (two lines of code).
https://github.com/andrearaponi/dito/blob/a57d396476cc618678...
It's good that this exists, but new projects that come into a well established space should make it clear how they differentiate themselves from existing solutions.
For example, it's not clear to me why anyone would choose to use this instead of Caddy, which is a robust server that already has all these features, whether OOB or via plugins.
This space may be well established, but it still does not fulfill all needs. For my own:
- NGINX does not support ACME, and I'm fed up with dealing with Lego and other external ACME clients. Also, the interactions between locations have never been clear to me.
- Caddy plugins mean I have to download xcaddy and rebuild the server. I really do not want to rebuild services on my servers just because I need a simple layer 4 reverse proxy (e.g. so that I can terminate TLS connections in front of my IRC server).
So I'm building my own server/reverse proxy (https://github.com/galdor/boulevard). Competition is good for everyone!
HAProxy already exists and will do both. You can even redirect layer 4 HTTP/HTTPS ports to another reverse proxy server if you want to get inception levels of crazy.
Sure, but c'mon, HAProxy is the 800lb gorilla in this case when you just need something simple.
> Competition is good for everyone!
Definitely!
But see how in your project the very first paragraph explains why it exists, and what it does differently. This is what I think is missing from Dito. It doesn't have to be super in depth.
I do disagree with your argument against Caddy, though. How often do you realistically rebuild your services? If it's only whenever you upgrade, that seems manageable. xcaddy makes this trivial, anyway, though you don't really need to use it: there's a convenient pro-tip[1] about doing this with a static main.go file instead.
Good luck with your project!
[1]: https://github.com/caddyserver/xcaddy#warning-pro-tip
FWIW, it doesn't handle your use case of Layer 4, but for the people at Layer 7, another option is good ol' Apache: it is so flexible and extensible it is almost a problem. People tend not to know that it long ago went event-oriented with mpm_event, and it supports ACME internally (as a module, but a first-party module that is seamlessly integrated). (I do appreciate, though, that it is critically NOT written in a memory-safe language, lol, but neither is nginx.)
Layer 4 and 7? HAProxy will do that no problem.
Boulevard has to be compiled at some point. Wouldn't it be extremely simple to set up a GitHub Action to build Caddy the way you desire?
Yes, it can be compiled and packaged so that I can one day install it as any other package, in my case on FreeBSD.
And of course it's not just about avoiding recompilation; there are a lot of features I want to add.
> Caddy plugins mean I have to download xcaddy and rebuild the server. I really do not want to rebuild services on my servers just because I need a simple layer 4 reverse proxy
That's why containers exist.
You would be surprised by how many infrastructures have software running without any container :) I'm running FreeBSD on my servers, so containers are out, but even if I were on Linux, why would I use containers for base services?
> why would I use containers for base services?
This is a supported feature of podman, which can generate systemd units to turn containers into system services.
But, as for advantages (systemd provides some of them too): sandboxing, resource constraints, ease of distribution, and not being broken by system updates (a glibc update on RHEL broke some Go binaries, IIRC).
My rule of thumb is that only system software (e.g. DE, firewall, drivers, etc.) belongs on the base system; everything else goes in a container. This is very easy to do now on modern Linux distros.
DE?
I think they meant use a container to build caddy with xcaddy.
It is essentially a one-liner to cross-compile caddy for all your use cases, as long as you have access to a container runtime to build it.
> - Caddy plugins mean I have to download xcaddy and rebuild the server. I really do not want to rebuild services on my servers just because I need a simple layer 4 reverse proxy (e.g. so that I can terminate TLS connections in front of my IRC server).
I mean, you don't have to "rebuild services" -- if you need the plugin, just deploy the binary with the plugin. It's not like it changes (other than upgrades, which you'd do regardless of the plugin).
Which Caddy plugins are you using?
> new projects that come into a well established space should make it clear how they differentiate themselves from existing solutions.
You sound like a helpful person. Maybe you're volunteering to create a site to do just that?
It's written in Go. /s
So is Caddy, FWIW.
I wish there was an alternative to the Kong API gateway where I didn't need to write my plugins in Lua (the Go and JS SDKs seem abandoned and are incomplete).
Have you tried KrakenD? https://www.krakend.io/ - plugins are written in Go
Have you seen https://github.com/zalando/skipper? You can implement custom filters in Golang.
Lua has some pretty neat sandbox-type features. I wonder if choosing it is related.
Assuming it is built on top of OpenResty https://github.com/openresty/openresty
I used Kong on-premise. Is there a use case for it when AWS and the like offer API gateway solutions?
We're using the Kong gateway controller on AWS EKS; it's pretty neat. I prefer to manage it via ArgoCD/GitOps over Terraform.
What would you prefer to write plugins in?
Already doing this kind of stuff with Caddy. Unclear why this would be any better.
Is this compatible with covering your tracks behind 7 reverse proxies in IRC? Asking for a friend.
Having an API gateway between the internet and your service(s) is a great idea, and one I've implemented no less than 3 times. But you should really just roll your own. It's a few dozen lines of code with Go's standard library reverse proxy, and it gives you way more flexibility than trying to yaml-configure someone else's.
Unless you have a wild use-case that hasn't been tackled by what's out there, why on earth would rolling your own be a good idea? Building a proper, secure, and performant API gateway is NOT a few dozen lines of code.
There are some super robust (and fast) Go API gateways that take care of all the things you didn't think about when trying to roll your own.
I can absolutely assure you that building a fast and secure gateway is not as hard as you seem to be implying. This is, again, based on my real world experience.
But maybe your experience just means you’ve never had the same problems as the authors of the libraries?
That’s the point, I’m solving my problems not using the kitchen sink solution
> But you should really just roll your own
As someone who already did this (because no other solution with our needs was available), I strongly disagree.
Most of the time, NGINX, Caddy, Traefik or APISIX are enough. The only time I felt the need to implement an API Gateway from scratch was to support a very specific use case with a specific set of constraints. No matter how robust the Go standard library is, implementing an API Gateway from scratch is rarely a good idea.
In my experience those specific sets of constraints come sooner or later. Someone is going to ask for some complex auth or routing rules, and it's easier to just write it in Go than it is to learn a whole new DSL or Lua to implement it.
Complex Auth: https://doc.traefik.io/traefik/middlewares/http/forwardauth/
Complex routing rules: https://doc.traefik.io/traefik/routing/routers/
If you need something more than this, you're either in a very specific situation (where an API gateway written from scratch might be a good idea), or that someone is doing something wrong.
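For comparison, the linked Traefik features are driven by declarative config rather than code. An illustrative dynamic-configuration fragment (hostnames, service names, and the auth endpoint are placeholders):

```yaml
# Traefik dynamic configuration (file provider); names are illustrative.
http:
  routers:
    api:
      rule: "Host(`api.example.com`) && PathPrefix(`/v1`)"
      middlewares:
        - check-auth        # delegate auth decisions to an external service
      service: backend
  middlewares:
    check-auth:
      forwardAuth:
        address: "http://auth-service:8080/verify"
  services:
    backend:
      loadBalancer:
        servers:
          - url: "http://app:3000"
```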
Why would I want to write another service to handle auth? That’s somehow less complex than just writing it into my gateway?
I sort of agree with you, but only if you aren’t in a position to say ‘the software doesn’t support it’.
No idea why you would roll your own; the easiest thing is to run nginx in Docker. No way writing your own is the first thing you should do.
Until you want your gateway to handle some complex auth or routing rules and don’t want to learn a whole new programming language to implement that.
this is such a wild take to me. why on earth are there complicated routing rules happening at the API gateway at all?
In MY real world experience, the API gateway does some sort of very simple routing to various services and any complex auth or routing rules would be the service's responsibility.
If the API gateway has your application logic in it it's not a separate component at all.
How complex can you really get with HTTP requests anyway?
Authn is gateway’s responsibility. Authz is subservice’s.
think about two product people with opposing goals. that’s how you get a mess. nothing technical
Completely disagree.
Use something off the shelf that’s mature and tested until you encounter such complexity that it’s no longer feasible or practical.
Two sides of the same coin!
Use something that solves 1000 use cases, of which yours is one. Some would say that's simplicity, while others would say that's complexity. When it breaks, do you know why? Can you fix it properly, or are you just layering band-aids on a bigger problem inside the component?
Or... build something that solves exactly your use-case but probably doesn't handle the other 1000 use-cases and needs to be put through trial-by-fire to fix all the little edge-cases you forgot about?
Early in my career I opted for #1 but nowadays I generally reach for #2 and really try to nail the core problem I'm tackling and work around the gotchas I encounter.
I love the focus on flexibility & integration with Redis.
We use a mix of Traefik and Envoy for complex + dynamic LB configurations. Doing anything related to custom middleware, dynamic configuration, and caching feels archaic on Traefik and requires a non-trivial amount of code on Envoy. I hope Dito becomes the next gold standard for load balancing.
One caveat — one of my biggest complaints with Traefik is the memory usage, which makes it difficult to run as an mTLS proxy between services. We use Envoy for these use cases instead. I’m curious to see how Dito compares on memory usage, despite also being written in Go.