> Really it's hard to point to any popular open-source tools that fully support HTTP/3: rollout has barely even started.
> This seems contradictory. What's going on?
IT administrators and DevOps engineers such as myself typically terminate HTTP/3 connections at the load balancer, terminate SSL, then pass back HTTP 1.1 (_maybe_ 2 if the service is GRPC or GraphQL) to the backing service. This is way easier to administer and debug, and is supported by most reverse proxies. As such, there's not much need for HTTP/3 in server side languages like Golang and Python, as HTTP/1.1 is almost always available (and faster and easier to debug!) in the datacenter anyways.
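To illustrate the pattern, here's a minimal sketch of terminate-at-the-edge using Go's standard library. The HTTP/3/QUIC listener itself would need a third-party library and is left out; the backend address and cert paths are made-up placeholders.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Hypothetical backend address; in practice a pool member behind the LB.
        backend, err := url.Parse("http://10.0.0.5:8080")
        if err != nil {
            log.Fatal(err)
        }

        // httputil.ReverseProxy speaks plain HTTP/1.1 to the backend by default,
        // regardless of which protocol version the client used to reach the edge.
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // TLS (and, with a QUIC library, HTTP/3) terminates here at the edge;
        // this sketch only shows the TLS front. Cert paths are placeholders.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
    }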
HTTP/3 and IPv6 are mobile centric technologies that are not well suited for the datacenter. They really shine on ephemeral spotty connections, but add a lot of overhead in a scenario where most connections between machines are static, gigabit, low-latency connections.
I'm not an expert on HTTP/3, but vehemently disagree about IPv6. It removes tons of overhead and cruft, making it delightful for datacenter work. That, and basically guaranteeing you don't have to deal with the company you just acquired having deployed their accounts with the same 10/16 subnet your own company uses.
A major reason for that is BSD Sockets and their leaky abstraction, which results in hardcoding protocol details in application code.
For a good decade a lot of software had to be slowly patched in every place that made a socket to add v6 support, and sometimes multiple times because getaddrinfo didn't reach everyone early enough.
> results in hardcoding protocol details in application code
Are you suggesting that this could have been implemented a different way? Example: IP could be negotiated to upgrade from v4 to v6? I am curious about your ideas.
I think in principle, an application didn't need to know the exact format of an IP address, even if connecting directly to an IP. A simple idea that could have made application code much more IP-agnostic would have been for sockaddr_in to take the IP in string format, not as a four-byte value. That way, lots of application code would not need to even be recompiled to move from a 4-byte IPv4 address to a 16-byte IPv6 address, whereas today they not only need to be recompiled, they need to be changed at the source level to use a new type that allows for both.
Of course, code that operates on packets, in the TCP/IP stack of the OS would have still needed to be rewritten. But that is far less code than "every application that opens a socket".
Of course, this only applies to code that uses IPs only to open connections. There's lots of application code that does more things with IPs, such as parsing, displaying, validating etc. All of this code would still need to be rewritten to accept IPv6 addresses (and its much more complex string representations), that part is inevitable.
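For what it's worth, that's roughly how Go's net package ended up behaving: the dial call takes the address as a plain string, so the same calling code handles a 4-byte or a 16-byte address. A minimal sketch (the addresses are documentation-range placeholders that won't actually answer):

    package main

    import (
        "fmt"
        "net"
    )

    // dialAndClose neither knows nor cares which address family it is using:
    // the address is just a string, and the family-specific details stay inside net.
    func dialAndClose(addr string) {
        conn, err := net.Dial("tcp", addr)
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected to", conn.RemoteAddr())
    }

    func main() {
        // Same code path for a 4-byte IPv4 address and a 16-byte IPv6 address.
        dialAndClose("192.0.2.10:80")
        dialAndClose("[2001:db8::10]:80")
    }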
Yeah, the big issue is that any code that took addresses from user input had to do validation to make sure addresses were valid, in allowed ranges, etc.
While the sockaddr struct allowed you to abstractly handle v4/v6 socket connections, there wasn’t a clean way to do all of that additional stuff, and IP address logic leaked into all kinds of software where you wouldn’t first expect it.
Something as simple as a web app that needs to inspect proxy headers would even have it.
It also didn’t help that it became practice to explicitly not trust the addr resolution offered by the sockets API because it would do unexpected things like resolving something that looked like an integer to a uint32 and then a 4 byte V4 addr.
This is vastly oversimplifying the problem, the difference between IPv4 and IPv6 is not just the format of the address. Different protocols have different features, which is why the sockaddr_in and sockaddr_in6 types don't just differ in the address field. Plus the vast majority of network programs are using higher level abstractions, for example even in C or C++ a lot of people would be using a network library like libevent or asio to handle a lot of these details (especially if you want to write code that easily works with TLS).
There isn't much need for many applications to know or care what IP protocol they are speaking, they are all just writing bytes to a TCP stream. I think the parent is saying that existing socket abstractions meant that these applications still had to be "upgraded" to support IPv6 whereas it could/should have been handled entirely by the OS with better socket APIs.
The simplest case would have been using a variant of Happy Eyeballs protocol.
Resolve the A and AAAA records, and try to connect to them at the same time. The first successful connection wins (maaaaybe with a slight bias for IPv6).
This would have required an API that uses the host name and folds the DNS resolution and connection into one call. Instead, the BSD socket API remained at the "network assembly" level with the `sockaddr_in/sockaddr_in6` structures used for address information.
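As far as I know this is roughly what Go's net.Dialer does by default today (the RFC 6555/8305 fast fallback), but a hand-rolled version makes the idea concrete. A rough sketch, with the 250 ms head start standing in for the "slight bias" toward IPv6:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    // happyDial races an IPv6 and an IPv4 connection attempt to host:port,
    // giving IPv6 a small head start, and returns whichever succeeds first.
    func happyDial(ctx context.Context, host, port string) (net.Conn, error) {
        type result struct {
            conn net.Conn
            err  error
        }
        results := make(chan result, 2)

        attempt := func(network string, delay time.Duration) {
            time.Sleep(delay) // the "slight bias" toward IPv6
            var d net.Dialer
            conn, err := d.DialContext(ctx, network, net.JoinHostPort(host, port))
            results <- result{conn, err}
        }

        go attempt("tcp6", 0)                    // AAAA path goes first...
        go attempt("tcp4", 250*time.Millisecond) // ...A path starts shortly after.

        var lastErr error
        for i := 0; i < 2; i++ {
            r := <-results
            if r.err == nil {
                // A real implementation would also close the losing connection
                // if it completes later; omitted here for brevity.
                return r.conn, nil
            }
            lastErr = r.err
        }
        return nil, lastErr
    }

    func main() {
        conn, err := happyDial(context.Background(), "example.com", "80")
        if err != nil {
            fmt.Println("both attempts failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected to", conn.RemoteAddr())
    }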
For the examples I am going to use the typical "HTTP to example.com" case.
Some OSI-focused stacks provided high level abstraction that gave you a socket already set for listening or connected to another service, based on combination of "host name", "service name", and "service type".
You'd use something like
connect("example.com", "http", SVC_STREAM_GRACEFUL_CLOSE) // using OSI-like name for the kind of service TCP provides
and as far as the application is concerned, it does not need to know if it's ipv4, ipv6, X.25, or a direct serial connection (the OSI concept of separating "service" from "protocol" is really a great idea that got lost)
A similar approach was taken in Plan 9 (and thus everyone who uses Go is going to see something similar) with the dial API:
dial("example.com!http",0,0,0)
As part of the IPv6 effort an attempt at providing something similar with BSD Sockets was made, namely getaddrinfo, which gives back information to be fed to socket/bind/connect calls - but for a long time people still learnt from old material which had them manually fill in socket parameters without GAI, so adoption was slowed down.
No; on Android and iOS, for example, the APIs you use to connect to a server take the hostname as a string. This hostname can be either an ipv4 address, an ipv6 address, or a domain. The BSD sockets API on the other hand forces each application to implement this themselves, and a lot of them took the shortcut of only supporting ipv4.
It isn't about upgrading one protocol to another but about having the operating system abstract away the different protocols from the application.
Yep, it's tragic because it all stems from unforced differences vs ipv4. The design was reasonable, but with perfect hindsight, it needed to be different. They needed to keep the existing /32s and just make the address field bigger, despite the disadvantages.
"Everywhere but nowhere" is sorta how I'd describe ipv6. Most hardware and lower-level software supports it, so obviously it wasn't impossible to support a new protocol, but it's not being used.
And it would have failed for exactly the same reasons, because just changing the address field size is enough to force everyone who uses BSD Sockets to rewrite every part of their code that creates sockets.
Especially since getaddrinfo was ported over from more streams/OSI-oriented stacks pretty late, precisely because BSD Sockets required a separate path for every protocol.
On the hw side, by the mid-1990s even changing one routing-important field would possibly mean a new generation of ASICs with more capabilities.
Essentially, once you agree to break one field, the costs are so big, why not try fixing other parts? Especially given that the IETF had rejected an already-implemented solution of just going with OSI for layer 3.
All that code using BSD sockets is rewritten by now to support v6, right? If so, that can't be the reason, cause v4 is still dominant.
And btw, what I suggested would actually work without userspace code changes until you want to start subdividing the /32s. Cause v4 addresses would've still been valid in v6.
The IPv6 packet format was needed either way, but only with the 32-bit address space at first (the other 96 bits set to 0). You simply tell your system to start using v6 instead, and everything else stays the same. No dual-stack.
Next step would be upgrading all those parts like DNS, DHCP, etc to accept the 128-bit addrs, which again can be done in isolation. Then finally, ISPs or home routers can start handing out longer addresses like 1.1.1.1.2.
There are two ways for me to interpret "simply tell your system to start using v6".
If it means upgrading every program, then your plan works but it's the same as how things work today. You're telling people to do a thing, and they aren't bothering. The "simple" step isn't simple at all.
If it doesn't mean upgrading every program, then your rollout fails on the last step. You start handing out longer addresses and legacy programs can't access them.
It's the second one. But legacy programs did get upgraded, so I don't see why they wouldn't under this other plan. If anything, it's easier because you're only making the address field bigger and it's not a separate case. Some routers struggled with 128-bit addrs due to memory, and could've gotten away with like 48 or 64 bits if they're using DHCP.
Lots of legacy programs, and current programs, and other things that could have been upgraded did not get upgraded. Getting to the situation where you can just flick a switch is not a realistic dream. There's not enough motivation for the average business to add support for a version that isn't in use yet.
Disconnect your phone from Wi-Fi and visit https://ifconfig.co/ . If you're a Verizon customer, it's probably going to show you an IPv6 address. It's huge, right now, today.
Fair. I bet that'll change soon though. My prediction is that it'll be a mobile-first game, like the next Pokemon Go sort of thing, that'll be IPv6-only.
Plenty of mobile users use wifi at home/work. Telling them to disable their ipv4-only wifi just to play your game is going to be a non-starter, especially when an ipv4 address adds negligible cost to infrastructure. Is your CTO really going to massively increase user friction ("turn off your wifi to play!") just to try to save a few cents (comparatively speaking) on infra?
This isn't true. I know because at some point XFinity started dropping ipv6 connections for me, and I noticed because a number of sites (I forget which) were broken.
What do you mean by dropping ipv6 connections, like dropping ipv6 packets? That's only an issue if you're using v6. I disabled ipv6 on my router years ago and have never had a problem just using v4.
True, but irrelevant to my point. Whether a particular ISP supports it doesn’t matter: it is being widely used by the rest of the world, to the point that it’s half of Google’s traffic.
Vodafone's network is reported to handle around 20% of the world's traffic. It's not a random ISP. Its network does not support IPv6. It is how a big chunk of all internet users experience the internet. Claiming it doesn't matter in a discussion over IPv6 adoption rate is ludicrous.
> Yep, it's tragic because it all stems from unforced differences vs ipv4. The design was reasonable, but with perfect hindsight, it needed to be different. They needed to keep the existing /32s and just make the address field bigger, despite the disadvantages.
Exactly. I would love to have seen the world in which that happened, and where all the other parts of IPv6 were independently proposed (and likely many of them rejected as unwanted).
The main problem wasn't all the smaller features but one big one in particular that can't be split into smaller pieces, the new addressing scheme. They wanted to replace all the existing addresses, which meant replacing all the routes. Besides the difficulty of that by itself, it automatically meant that the v6 versions of DNS, DHCP, NAT, etc wouldn't support v4, rather it'd be a totally separate stack.
There were also some smaller things, like routers often having bad defaults for v6, which, btw, would not even be a concern if they had left the big thing alone.
> Besides the difficulty of that by itself, it automatically meant that the v6 versions of DNS, DHCP, NAT, etc wouldn't support v4, rather it'd be a totally separate stack.
Sure, "make the addresses bigger" would have required providing DHCPv6, DNS AAAA records, and various other protocol updates for protocols that embedded IP addresses. And making changes to the protocol header at the same time (e.g. removing the redundant checksum) were also a good idea.
It didn't require pushing SLAAC instead of DHCP.
It didn't require recommending (though fortunately not requiring) IPsec for all IPv6 stacks.
It didn’t require changing the address syntax to use colons, causing pain for all protocols that used `IP:port` or similar (see the sketch after this list).
It didn't require mandating link-local addresses for every interface.
It didn't require adding a mandatory address-collision-detection mechanism.
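On the colon-syntax point above: the usual fix is to require brackets around v6 literals, and every host:port splitter had to learn about them. A small Go illustration, with documentation-range addresses:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // With IPv4, "host:port" splits unambiguously on the last colon.
        // An IPv6 literal contains colons itself, so it has to be bracketed,
        // and every naive host:port splitter had to learn about that.
        for _, addr := range []string{"192.0.2.1:8080", "[2001:db8::1]:8080"} {
            host, port, err := net.SplitHostPort(addr)
            if err != nil {
                fmt.Println("bad address:", err)
                continue
            }
            fmt.Printf("host=%s port=%s rejoined=%s\n",
                host, port, net.JoinHostPort(host, port))
        }
    }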
I wonder if something like an HTTP connection upgrade would have been possible between ipv4 and ipv6. Imagine machine 1 with IPs 1.1.1.1 and 11::11, and machine 2 with IPs 2.2.2.2 and 22::22.
When machine 2 receives a packet from 1.1.1.1 at 2.2.2.2, it sends an ipv6 ping-like packet to the ipv4-mapped address ::ffff:1.1.1.1 saying something like "hey, you can also contact me at 22::22", and if machine 1 understands, then it can try to use the new address for the following packets.
I can see how it would be hard to secure this operation.
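For what it's worth, the ::ffff:1.1.1.1 form used above is the standard IPv4-mapped IPv6 notation, and libraries can already translate between the two representations. A small sketch of just that mapping (Go's net/netip here), not of the hypothetical upgrade handshake itself:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // An IPv4 address and its IPv4-mapped IPv6 form name the same host.
        v4 := netip.MustParseAddr("1.1.1.1")
        mapped := netip.AddrFrom16(v4.As16())

        fmt.Println(mapped)          // ::ffff:1.1.1.1
        fmt.Println(mapped.Is4In6()) // true
        fmt.Println(mapped.Unmap())  // 1.1.1.1
    }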
For those building on AWS with VPC per service and using PrivateLink for connections between services, the whole IP conflict problem just evaporates. Admittedly, you’re paying some premiums to Amazon for that convenience.
>That, and basically guaranteeing you don't have to deal with the company you just acquired having deployed their accounts with the same 10/16 subnet your own company uses.
I always found that to be a desperate talking point. 'Prepare your network for the incredibly rare event where you intend to integrate directly' (didn't anyone hear of network segmentation?). It makes a lot more sense to worry about the ISP unilaterally changing your prefix - something that can only happen in IPv6.
> It makes a lot more sense to worry about the ISP unilaterally changing your prefix - something that can only happen in IPv6.
ISPs unilaterally change your DHCP address on IPv4 all the time. And in any situation where you would have a static address for IPv4, your ISP should have no problem giving you a static v6 prefix. This argument makes no sense at all.
IPv6 packets can still be fragmented, but only at the source. IPv4 fragmentation has only worked this way in practice for a long time.
Private addressing is still needed with IPv6, it's a crucial part of how address allocation works, and it's the only way to reliably connect to a client-like IPv6 device on the local network, since its public IP address will change all the time for privacy reasons, assuming it respects best practices.
Routing is only simpler if the ISPs actually hand out the large prefixes they are supposed to. Not all of them do.
DHCP is still required for many use cases. So now you have two solutions for handing out addresses, and you need to figure out when to use SLAAC and when to use DHCP. This is strictly more complex than IPv4, not simpler. SLAAC is mostly just unnecessary cruft, a cute little simple path for limited use cases, but it can never replace DHCPv6 for all use cases (e.g. for subnets smaller than a /64, for communicating additional information like a local DNS server or NTP server, for complex network topologies, for server machines etc).
* First three points matter more on bad connections, but are less of a problem on good ones.
* Private addressing is a feature, not a bug, in the datacenter.
* NAT is a feature, not a bug, in the datacenter.
* Simpler routing matters more on bad connections, but is less of a problem on good ones.
* DHCP is a feature, not a bug, in the datacenter.
Overall, it adds features that I don't need in my datacenter, and takes away others that I do and now need to add back. Like I said: it's great outside the datacenter, not so great inside it.
> * No more private addressing (unless you're a glutton for punishment).
The question of whether or not you use private addressing is, AFAICT, independent of the protocol. I mean, there's no material difference between private and public addressing.
> * No more NAT (see above).
Ditto. You don't have to NAT over IPv4, and you can NAT over IPv6; and - you may want to or need to, depending on restrictions on your connection.
I really have to agree with the "easier to debug" part. I once had to debug a particularly nasty networking issue that was causing HTTP connections to just "stop" midway through sending data. Turned out to be a mismatch between routers over allowed packet sizes. It would have been so much worse with a non-plaintext protocol.
Totally agree. Most of the benefit of HTTP 2/3 comes from minimizing TCP connections between app->lb. Once you are past the lb the benefits are dubious at best.
Most application frameworks that I've dealt with have limited capabilities to handle concurrent requests, so it becomes a minor issue to have 100+ connections between the app and the lb.
On the flipside, apps talking to the LB can create all sorts of headaches if they have even modest sized pools. 20 TCP connections from 100 different apps and you are already looking at hard to handle TCP flooding.
HTTP/3 is not a mobile-centric technology. Yes, there was a lot of discussion of packet pacing and its implications for mobile in early presentations on QUIC, but that's not the same as "centric"; that's one application of the behavior. Improved congestion control, reduced control plane cost and removal of head-of-line blocking behaviors have significant value in data center networks as well. How often do you work with services where they have absolutely atrocious tail latencies and wide gaps between median and mean latencies? How often is that a side effect of http/tcp semantics?
IPv6 is the same deal. I sort of understand where the confusion comes from around QUIC, because so much was discussed about mobile early on and it just got parroted heavily in the rumor mill, but IPv6? That long predates the mobile explosion, and again, mobile is one application it helps, but ascribing that as the only application because of its applicability somewhere else doesn't hold up to basic scrutiny. The largest data centers these days are pushing up against a whole v4 IP class (I know, classes are dead, sorta) in hardware-addressable compute units - a trend that is not slowing.
We did this with quic data center side: https://tailscale.com/blog/living-in-the-future#the-c10k-pro... and while it might be slightly crazy in and of itself, it's far more practical with multiplexing than with a million excessively sized buffers competing over pools and so on.
There is absolutely value to quic and ipv6 in the data center, perhaps it's not so useful for traditionally shaped and sized LAMP stacks, but you can absolutely make great use of these at scale and in modern architectures, and they open a lot of doors/relax constraints in the design space. This also doesn't mean everyone needs to reach for them, but I don't think they should be discarded or ascribed limited purpose so blithely.
I will acknowledge that truly massive datacenter deployments can and do use these technologies to good effect, but I haven't worked at any of these kinds of places in the last fifteen years and I suspect many (most?) of my colleagues haven't either. Anything smaller than a /8, they usually don't add much and just get in the way more often than not.
HTTP3 is a patch that unfucked some stupid design choices from HTTP2 [1]
However, IPv6 is perfectly suited to the datacentre. So long as you have proper infrastructure set up (i.e. properly functioning DNS), IPv6 is a godsend for simplifying medium-scale infra.
In fact, if you want to get close to a million hosts, you need ipv6.
[1] Me and http/2 have beef: TCP multiplexing was always going to be a bad idea, but idealism got in the way of testing
Sure, but now you've lost some of the benefits of HTTP/3, such as the header compression and less head-of-line blocking. To some degree the load balancer can solve this by using multiple parallel HTTP 1.1 streams, but in practice I've seen pretty bad results in many common scenarios.
No one cares about those "benefits" _on a gigabit line_. The head of your line is not blocked at such speeds, believe you me. Same thing with compression. Like, why. Other than to make it harder to debug?
I had head-of-line-blocking issues recently on a 10 Gbps data centre link!
HTTP client packages often use a small, fixed number of connections per domain. So if you have two servers talking to each other and there's slow requests mixed in with short RPCs, the latter can sit in a queue for tens of seconds.
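Go's net/http client is a concrete example: the default Transport keeps only two idle connections per host, and MaxConnsPerHost can cap the total, so a couple of slow requests can make short RPCs queue behind them. A sketch of the knobs involved (the values are illustrative, not recommendations):

    package main

    import (
        "net/http"
        "time"
    )

    func main() {
        // With a small per-host connection cap, one slow request can hold a
        // connection for seconds while short RPCs wait in line behind it.
        transport := &http.Transport{
            MaxConnsPerHost:     2, // hard cap on connections to a single host
            MaxIdleConnsPerHost: 2, // matches net/http's default idle cap
            IdleConnTimeout:     90 * time.Second,
        }
        client := &http.Client{Transport: transport, Timeout: 30 * time.Second}
        _ = client // use client.Get / client.Do as usual
    }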
QUIC/HTTP3 relies on TLS. If you already have some encrypted transport, like an Istio/Envoy service mesh with mutual TLS, or a Zerotier/Tailscale/Wireguard-style encrypted overlay network, then there are no benefits to using HTTP3. Moreover, native crypto libraries tend to do a better job handling encryption anyway, so rather than wasting cycles doing crypto in Go or Node it makes more sense to let the service mesh or the overlay handle encryption and let your app just respond to cleartext requests.
Sure, I was responding to the context as I understood it here, which was listening on HTTP/3 as an application rather than at a service mesh layer. HTTP/3 can definitely be a choice for a service mesh or some sort of overlay. Personally, if I were setting up a new cloud/DC today I'd probably just use ZeroTier (or Tailscale) and let the overlay deal with encryption while I just have my sources and destinations do IP-based filtering.
The protocol isn’t, but its deployment is: real-world deployment is predominantly mobile. That’s nothing to do with the inherent technical features of the protocol; it is a consequence of market history.
HTTP/2 is still mostly implemented only over TLS, and that can mean significant and completely useless overhead if the server-LB connection is already encrypted using some VPN solution like WireGuard.
Speaking of gRPC, it's unfortunate that they went all-in on HTTP/2. Should have made it work over HTTP/1.1. I know others made it work, but it wasn't first-party. Maybe it could've been more popular than JSON-over-HTTP by now.