I wonder if this is related to the first-party cookie security model. That is supposedly why Google switched maps from maps.google.com to www.google.com/maps. Running everything off a single subdomain of a single root domain should allow better pooling of data.
Subdomains were chosen historically because it was the sane way to run different infrastructure for each service. Nowadays, with the globally distributed frontends that Google's Cloud offers, path routing and subdomain routing are mostly equivalent. Subdomains are archaic; they expose the architecture (separate services) to users out of necessity. I don't think cookies were the motivation but it's probably a nice benefit.
https://cloud.google.com/load-balancing/docs/url-map-concept...
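To make "mostly equivalent" concrete, here's a rough sketch (hostnames and backend addresses are made up) of what an L7 frontend does: once the request is parsed, a Host rule and a path rule are just two match conditions over the same data.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // proxyTo returns a handler that forwards requests to the given backend.
    func proxyTo(raw string) http.Handler {
        u, err := url.Parse(raw)
        if err != nil {
            log.Fatal(err)
        }
        return httputil.NewSingleHostReverseProxy(u)
    }

    func main() {
        mux := http.NewServeMux()
        // Subdomain-style rule: match on the Host header.
        mux.Handle("maps.example.com/", proxyTo("http://10.0.0.2:8080"))
        // Path-style rule: match on the request path; same backend, same decision point.
        mux.Handle("/maps/", proxyTo("http://10.0.0.2:8080"))
        // Everything else goes to the default service.
        mux.Handle("/", proxyTo("http://10.0.0.1:8080"))
        log.Fatal(http.ListenAndServe(":8080", mux))
    }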
L4 vs L7 load balancing and the ability to send clients to specific addresses seem like major differences to me? I'm not seeing how subdomains are "archaic". It's also obviously desirable on the user side, in a world where everything uses TLS, to allow better network management (e.g. blocking some services like youtube and ads on a child's device while allowing e.g. maps; or if reddit used subdomains, you could allow e.g. minecraft.reddit.com without allowing everything else) without needing to install CAs on every device to MITM everything.
Subdomains are archaic in the context of high availability. 15 years ago, it was impractical to expect a single system (google.com) to reliably handle hundreds of millions of requests per second and so distributing services across subdomains was important because it distributed the traffic.
Today, hundreds of millions of requests per second can be handled by L4 systems like Google Cloud and Cloudflare. Today traffic to a subdomain is almost certainly being routed through the same infrastructure anyway so there is no benefit to using a subdomain. That's why I describe subdomains as archaic in the context of building a highly available system like Google's.
If you're Google in 2010, maps.google.com is a necessity. If you're Google in 2025, maps.google.com is a choice. Subdomains are great for many reasons but are no longer a necessity for high availability.
It has nothing to do with high availability. It's useful to separate out different traffic patterns: e.g. if you're serving gazillions of small requests over short-lived connections, you want a hardware-accelerated device for TLS termination. You'd be wasting that device by also using it for something like video (large data transfers on a long-lived connection). An L7 load balancer (i.e. one that's looking at paths) needs to terminate TLS to make its decision.
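Rough sketch of the L4 side of that trade-off (the backend address is made up): a TCP-level forwarder holds no certificate, so the path stays invisible to it and the only routing signal is which hostname/address the client connected to. The moment you want to pick a backend per path, you're terminating TLS.

    package main

    import (
        "io"
        "log"
        "net"
    )

    func main() {
        // L4: accept TCP connections and shuffle encrypted bytes to a backend.
        ln, err := net.Listen("tcp", ":443")
        if err != nil {
            log.Fatal(err)
        }
        for {
            client, err := ln.Accept()
            if err != nil {
                continue
            }
            go func(c net.Conn) {
                defer c.Close()
                // No certificate, no decryption: the request path is inside the
                // TLS stream and invisible here, so the routing decision was
                // already made by which hostname/address the client dialed.
                backend, err := net.Dial("tcp", "10.0.0.2:443") // hypothetical TLS-terminating fleet
                if err != nil {
                    return
                }
                defer backend.Close()
                go io.Copy(backend, c)
                io.Copy(c, backend)
            }(client)
        }
    }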
You're making a different point. Of course there are use cases for subdomains; I'm talking specifically about the transition of maps.google.com to google.com/maps. google.com/maps always made sense but wasn't technically viable when Google Maps launched, and that's why they've transitioned to it now. I'm arguing that Google Maps being on a subdomain was an infrastructure choice, not a product choice.
I'm not trying to be argumentative, but by saying:
>Subdomains are archaic
You presented a somewhat different argument. Also I disagree - maps.google.com is a fundamentally different service, so why should it share a domain with google.com? The only reason it's not googlemaps.com is because being a subdomain of google.com implies trust.
But I guess it's pretty subjective. Personally I always try to separate services by domain, because it makes sense to me, but maybe, had the internet gone down a different path, I would swear path routing makes sense.
> allow better network management
Yeah, this would definitely block that.
DNS-based (hostname) allowlisting is just starting to hit the market (see: Microsoft's "Zero Trust DNS" [1]) and this would kill that. Even traditional proxy-based access control is neutered by this, and the nice thing about that approach is that it can be done without TLS interception.
If you're left with only path-based rules you're back to TLS interception if you want to control network access.
[1] https://techcommunity.microsoft.com/blog/networkingblog/anno...
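To illustrate the proxy case (the allowlist entry is just an example): an HTTP CONNECT proxy is told the destination hostname in cleartext, so it can allow or deny per host without ever touching the TLS inside the tunnel. The path never becomes visible, which is exactly the control that path-based URLs take away.

    package main

    import (
        "io"
        "log"
        "net"
        "net/http"
    )

    // Hostname allowlist: the only signal the proxy needs, no TLS interception.
    var allowed = map[string]bool{
        "maps.google.com:443": true,
    }

    func main() {
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.Method != http.MethodConnect || !allowed[r.Host] {
                http.Error(w, "blocked by hostname policy", http.StatusForbidden)
                return
            }
            backend, err := net.Dial("tcp", r.Host)
            if err != nil {
                http.Error(w, err.Error(), http.StatusBadGateway)
                return
            }
            hj, ok := w.(http.Hijacker)
            if !ok {
                backend.Close()
                http.Error(w, "hijacking unsupported", http.StatusInternalServerError)
                return
            }
            client, _, err := hj.Hijack()
            if err != nil {
                backend.Close()
                return
            }
            client.Write([]byte("HTTP/1.1 200 Connection Established\r\n\r\n"))
            // From here the bytes are opaque TLS: the hostname was enough to
            // decide, the path inside the tunnel is never seen.
            go func() { io.Copy(backend, client); backend.Close() }()
            io.Copy(client, backend)
            client.Close()
        })
        log.Fatal(http.ListenAndServe(":3128", handler))
    }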
Yes, it’s easy to route paths; I’ve been using Fastly to do it for years.
But the vast majority of users don’t care about URL structure. If a company goes through the effort to change them, it’s because the company expects to benefit somehow.
Has anything changed about the risks of running everything with the same key, on the apex domain?
Why doesn't Google have DNSSEC?
Subdomains can be on the same architecture
Google owns the .google TLD, could they theoretically use https://google or is that not allowed?
Not allowed for gTLDs. Some ccTLDs do it; http://ai/ resolved as recently as a year ago, though I can't get it to resolve right now.
https://xn--l1acc./ and https://uz./ connect, though there are cert issues in both cases.
You need a dot at the end for it to resolve correctly
https://ai.
It’s unreachable anyway
That is cursed
It is not; it's there so your local resolver can distinguish a top-level domain from a subdomain of a search domain (i.e. without the dot, `foo` gets rewritten to `foo.mydomain.com` or `foo.local`)
man resolv.conf, read up on search domains and the ndots option
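Rough sketch of what the resolver does with those two settings (the search domain and ndots value below are made up): a name with at least ndots dots gets tried as-is first, otherwise the search suffixes are tried first, and a trailing dot skips the search list entirely, which is why `ai.` and `ai` can resolve differently.

    package main

    import (
        "fmt"
        "strings"
    )

    // candidates mimics the lookup order that resolv.conf's "search" and "ndots"
    // options produce for a given name.
    func candidates(name string, search []string, ndots int) []string {
        if strings.HasSuffix(name, ".") {
            return []string{name} // trailing dot: fully qualified, search list skipped
        }
        var out []string
        asIs := name + "."
        if strings.Count(name, ".") >= ndots {
            out = append(out, asIs) // enough dots: try the name itself first
        }
        for _, suffix := range search {
            out = append(out, name+"."+suffix+".")
        }
        if strings.Count(name, ".") < ndots {
            out = append(out, asIs) // too few dots: the bare name is the last resort
        }
        return out
    }

    func main() {
        search := []string{"mydomain.com"} // e.g. "search mydomain.com" in resolv.conf
        fmt.Println(candidates("foo", search, 1)) // [foo.mydomain.com. foo.]
        fmt.Println(candidates("ai", search, 1))  // [ai.mydomain.com. ai.]
        fmt.Println(candidates("ai.", search, 1)) // [ai.]
    }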
You're currently browsing `news.ycombinator.com.`[0].
[0]: https://jvns.ca/blog/2022/09/12/why-do-domain-names-end-with...
Going by the Wayback Machine it looks like it used to redirect to http://www.ai, which still works, but only over HTTP.
Well of course http://www.ai would work. That's no different from http://foo.ai .
http://uz./ serves a 500 error.
Not sure if that’s allowed, but that sure feels like a throwback to AOL keywords if it was — just at the DNS level.
This has been working the other way up until now, right?
At Google scale, redirecting requests to ccTLD versions uses up plenty of resources and bandwidth:
1. GET request to .com (like from urlbar searches)
2. GeoIP lookup or cookie check
3. Redirect to ccTLD
Much of this is then repeated on the ccTLD.
This change should decrease latency for users (no redirect, no extra DNS lookups, no extra TLS handshake) and enhance caching of resources.
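A toy sketch of that front-door hop (domain names and the countryFor stand-in are made up), just to show where the extra work goes: every .com hit pays for a lookup plus a redirect that forces the client through another DNS query, TLS handshake and request against the ccTLD.

    package main

    import (
        "log"
        "net/http"
    )

    // ccTLDFor maps a country to the local domain (made-up mapping).
    func ccTLDFor(country string) string {
        switch country {
        case "GB":
            return "example.co.uk"
        case "DE":
            return "example.de"
        default:
            return "" // no local version: serve from .com directly
        }
    }

    func redirector(countryFor func(remoteAddr string) string) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Step 2: GeoIP lookup (or cookie check) on every .com request.
            country := countryFor(r.RemoteAddr)
            if host := ccTLDFor(country); host != "" {
                // Step 3: redirect, which costs the client another DNS lookup,
                // TLS handshake and request against the ccTLD host.
                http.Redirect(w, r, "https://"+host+r.URL.RequestURI(), http.StatusFound)
                return
            }
            w.Write([]byte("served directly from .com\n"))
        })
    }

    func main() {
        // Stand-in for GeoIP: pretend every client is in the UK.
        log.Fatal(http.ListenAndServe(":8080", redirector(func(string) string { return "GB" })))
    }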
This is so they can use the same tracking cookies across all their products.
Seems like a pretty big SPOF
I think they probably know what they are doing.