UDP is playing exactly the role it was intended to play. Apart from saving 8 bytes of header and a checksum, what's the advantage to running the protocol directly on top of IP? How is this "ossification"? Again: this was David Reed's original design purpose for UDP! It was exactly for doing shit on top of IP without having to run through TCP's mechanism.
One thing to note: using HTTP2.0 as anything other than an example of "this is not how to design high-throughput protocols" is unfair.
At the time, HTTP2.0's multiplexing was known to be bad for anything other than perfect, low-latency networks. I hope this was because people had faith in better connectivity, rather than ignorance of how mobile and non-LAN traffic worked.
You should probably at least try QUIC now, but you can get past HOL blocking by having multiple TCP streams. It's super cheap, cheaper than QUIC.
> you can get past HOL blocking by having multiple TCP streams. It's super cheap, cheaper than QUIC
And also super inefficient, since it duplicates the TLS handshake across streams and uses more resources in the OS and middleboxes (like them or hate them, they're a thing that might throttle you if you go too crazy with connection-level parallelism).
That's on top of very poor fairness at bottlenecks (which is per TCP stream unless there's separate traffic policing).
Even on perfect networks HoL blocking is an issue. If the receiving end of a set of streams blocks on one stream, it ends up blocking the whole connection. One stream stops due to backpressure, all streams stop.
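A toy model of that failure mode, under the assumption of strictly in-order delivery on one shared connection (as with streams multiplexed over a single TCP socket):

```python
from collections import deque

# Frames for two streams multiplexed over ONE ordered connection.
# Delivery is strictly in order, as on a single TCP socket.
pipe = deque([("s1", "a"), ("s2", "x"), ("s1", "b"), ("s2", "y")])

delivered = {"s1": [], "s2": []}
s1_stalled = True  # stream 1's consumer stops reading (backpressure)

while pipe:
    stream, data = pipe[0]
    if stream == "s1" and s1_stalled:
        break  # head-of-line: s2's frames are stuck behind s1's
    delivered[stream].append(data)
    pipe.popleft()

# Stream 2 received nothing, even though its consumer was ready.
assert delivered["s2"] == []
```

With independent connections (or QUIC streams), the stalled stream would only back-pressure itself.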
I used QUIC extensively to implement https://github.com/connet-dev/connet and while I'm super happy with how it turned out, I think QUIC currently suffers from some immaturity - most implementations are still in progress/not production-ready (for example in Java), and in many cases it is only viewed as a way to power HTTP/3, instead of a self-standing protocol/API that people can use (for example, trying to use QUIC on Android).
In any case, I'm optimistic that QUIC has a bright future. I don't expect it to replace TCP, but to give us another tool we can use when it is called for.
> QUIC’s design intentionally separates the wire protocol from the congestion control algorithm
Is that not the case for TCP as well? Most congestion control algorithms just assign new meanings to existing wire-level flags (e.g. duplicate ACKs), or even only change sender-side behavior.
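Right - on Linux, for example, the congestion control algorithm is even swappable per socket without any change to the wire format, via the TCP_CONGESTION socket option (a sketch; which algorithms are available depends on the kernel modules loaded):

```python
import socket

def get_set_congestion_control(algo: bytes = b"cubic"):
    """Try to set, then read back, the per-socket TCP congestion
    control algorithm. Linux-only; returns the algorithm in use,
    or None if the platform doesn't expose TCP_CONGESTION."""
    if not hasattr(socket, "TCP_CONGESTION"):
        return None
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        try:
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algo)
        except OSError:
            pass  # requested algorithm not available on this kernel
        raw = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
        return raw.split(b"\x00", 1)[0].decode()
    finally:
        s.close()

algo_in_use = get_set_congestion_control(b"cubic")
```

None of this changes what the peer sees on the wire, which is the point being made above.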
> QUIC gives control back to application developers to tailor congestion control to their use case
That's what it actually does: It moves the congestion control implementation from the OS to user space.
In that sense, it's the same tradeoff as containers vs. regular binaries linking to shared libraries: Great if your applications are updated more often than your OS; not so great if it's the other way around.
> QUIC gives control back to application developers to tailor congestion control to their use case
If I understood modern application development correctly, this translates to "the developers will import another library which they don't understand, and will wreak havoc on other applications' data streams by only optimizing for themselves".
Again, if I remember correctly, an OS is "the layer which manages the sharing of limited resources among the many processes that request/need them", and the OS can do system-wide, per-socket congestion control without any effort because of the vantage point it has over the networking layer.
Assuming that every application will do congestion control correctly while not choking everyone else even unintentionally with user space's limited visibility is absurd at worst, and wishful thinking at best.
The whole ordeal is a direct violation of the application separation that comes with protected mode.
> Assuming that every application will do congestion control correctly while not choking everyone else even unintentionally with user space's limited visibility is absurd at worst, and wishful thinking at best.
Why? All practically used variants of TCP achieve fairness at the bottleneck without any central arbiter or explicit view of the upstream congestion situation. Besides, UDP has never had congestion control and has been around for decades.
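A toy illustration of why that works: two idealized AIMD senders sharing a bottleneck converge toward an equal split with no coordination and no view of each other (a simplified model, not real TCP):

```python
def aimd_fair_share(c1=100.0, c2=10.0, capacity=120.0, rounds=2000):
    """Idealized AIMD (additive-increase, multiplicative-decrease):
    each flow adds 1 unit of rate per round; when the bottleneck is
    congested, every flow halves. Starting rates don't matter."""
    for _ in range(rounds):
        if c1 + c2 > capacity:       # congestion signal (loss)
            c1, c2 = c1 / 2, c2 / 2  # multiplicative decrease
        else:
            c1, c2 = c1 + 1, c2 + 1  # additive increase
    return c1, c2

r1, r2 = aimd_fair_share()
# Rates started 10x apart but end up oscillating around an equal share:
assert abs(r1 - r2) < 5
```

The difference between the two rates halves at every multiplicative-decrease event while additive increase leaves it unchanged, which is the classic Chiu-Jain fairness argument.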
> The whole ordeal is a direct violation of the application separation that comes with protected mode.
Are you concerned about fairness between applications on the same OS or fairness at the bottleneck, usually upstream from both application and OS/host?
In the former case, the OS always gets the last word, whether you're using TCP or your own homebrew congestion control via UDP, and you can make sure that each application gets the exact same fair egress rate if required. In the latter case, nothing prevents anyone from patching their kernel and running "unfair TCP" today.
> Running in user space offers more flexibility for resource management and experimentation.
I stopped reading here. This isn't really an essential property of QUIC; there are a lot of good reasons to eventually try to implement this in the kernel.
Maybe not an essential property of QUIC, but definitely one of not using TCP.
Most OSes don't let you send raw TCP segments without superuser privileges, so you can't just bring your own TCP congestion control algorithm in the userspace, unless you also wrap your custom TCP segments in UDP.
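This is easy to check: on most Unix-likes, opening a raw socket for IP protocol 6 (TCP) fails without elevated privileges such as CAP_NET_RAW (a sketch):

```python
import socket

def can_open_raw_tcp_socket() -> bool:
    """Attempt to open a raw socket for IP protocol 6 (TCP).
    Unprivileged processes typically get a permission error,
    which is one reason userspace transports ride on UDP instead."""
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                          socket.IPPROTO_TCP)
        s.close()
        return True   # running privileged (root / CAP_NET_RAW)
    except OSError:
        return False  # the common unprivileged case

allowed = can_open_raw_tcp_socket()
```

An ordinary UDP socket, by contrast, needs no special privileges at all.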
> How would you swap out the TCP congestion control algorithm in an OS, or even hardware, you don't control?
On the contrary, introducing their own novel/mysterious/poorly implemented congestion control algorithms is not a thing I want userspace applications doing.
Fortunately you don't get any say in what my userspace applications do on my own hardware.
And if you worry about hostile applications on your own hardware, the OS is an excellent point to limit what they can do – including overwhelming your network interface.
No, congestion control in the network sense is about congestion at choke points in packet networks due to higher inflow than outflow rates.
While you're still on your own host, your OS has complete visibility into which applications are currently intending to send data and can schedule them much more directly.
"Drown other applications" is unfortunately exactly what happens when you let the Linux kernel run your TCP stack. Profile your application and you may discover that your CPUs are being spent running the protocol stack on behalf of other applications.
I mean when your application sends on a socket, the kernel may also send and receive traffic for another task while it's in the syscall, just for funsies, and this is true even if your applications are containerized and you believe their CPU cores are dedicated to the container.
Sure, but the ready appeal of QUIC is that it is in user space by nature, while Linux ties TCP to the kernel. You either need special privileges to run user space TCP on Linux, or you need a different operating system kernel altogether.
Even if you have nothing to hide and don't care about accidental or intentional data modification, the benefit of largely cutting out "clever" middleboxes alone is almost always worth it.
QUIC would be the end of the free internet if it ever "took over" but luckily it won't. It's not built to do so, it's only built for corporate use cases.
QUIC implementations do not allow for anyone to connect to anyone else. Instead, because it was built entirely with corporate for-profit use cases in mind and open-washed through the IETF, the idea of a third-party corporation having to authenticate the identity of all connections is baked in. And 99.999% of QUIC libs, and the way they're shipped in clients, cannot even connect to a server without a third-party corp first saying they know the endpoint and allow it. Fine for corporate/profit use cases where security of the monetary transactions is all that matters. Much less fine for human use cases, where it forces centralization and easy control by our rapidly enshittifying authoritarian governments. QUIC is the antithesis of the concept of the internet and its robustness and routing around damage.
Huh, I never knew, I've been using QUIC on my Raspberry Pi's web server for years... Did I unknowingly go corporate!?
Even if you don't want to get a Letsencrypt certificate, you can always use a self-signed one and configure your clients to trust it on first use or entirely ignore it.
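For example (hypothetical names; any TLS/QUIC server that accepts a PEM certificate and key will do):

```shell
# Generate a self-signed certificate (no CA involved) that a QUIC/TLS
# server can present; clients can then pin it or trust on first use.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 365 \
  -subj "/CN=my-selfhosted-server"

# For pinning: record the certificate's fingerprint so clients can
# verify they're talking to this exact server.
openssl x509 -in cert.pem -noout -fingerprint -sha256
```

No third party is involved at any point; the trust decision lives entirely in your client configuration.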
SSH also uses "mandatory host keys", if you think about it. It's really not a question of the protocols but rather of common client libraries and tooling.
I guess you are referring to the TLS requirement? I guess I could see how on a more restrictive platform like a phone you could conceivably be prevented from accepting alternate CAs or self signed certificates.
There's a fairly far-along draft for replacing WebRTC's SCTP with QUIC for doing p2p work. It doesn't seem to have any of these challenges and seems perfectly viable for connecting peers. https://github.com/w3c/p2p-webtransport
Alas alas, basically stalled out, afaik no implementation. I wish Microsoft (the spec author) or someone would pick this back up.
WebRTC wraps SCTP in DTLS, so the "great challenge of encryption" has never been a problem there.
It just uses self-signed certificates, which is maybe conceptually slightly clunky compared to "pure" anonymous TOFU, but allows reusing existing stacks.
Note well: the claims about TCP come with some evidence, in the form of a graph. The claims for QUIC do not.
Many of the claims are dubious. TCP has "no notion of multiple streams"? What are two sockets, then? What is poll(2)? The onus is on QUIC to explain why it’s better for the application to multiplex the socket than for the kernel to multiplex the device. AFAICT that question is assumed away in a deluge of words.
If the author thinks it’s the "end of TCP sockets", show us the research, the published papers and meticulous detail. Then tell me again why I should eschew the services of TCP and absorb its complexity into my application.
Manipulative tone of the article title says it all. "The end of".
Even the TCP graph is dubious. Cubic being systematically above the link capacity makes me chuckle. Yes bufferbloat can have cubic "hug" a somewhat higher limit, but it still needs to start under the link capacity.
That's easily explained, in the parts of the x axis missing in the plot it actually goes negative and pays back the borrowed bytes.
The obnoxious thing is that overly aggressive firewalls have killed any IP protocols that are not TCP or UDP. Even ICMP is often blocked or partially blocked.
In the mean time we could have had nice things: https://en.wikipedia.org/wiki/Stream_Control_Transmission_Pr...
SCTP would be a fantastic protocol for HTTP/HTTPS. Pipelining, multi-homing, multi-streaming, oh my.
Why would you need another IP protocol besides UDP? Anything you can do directly under an IP header, you can do under a UDP header as well, and the UDP header itself is tiny.
Going back to David Reed, this is specifically why UDP exists: as the extension interface to build more non-TCP transport protocols.
UDP introduced ports. Ports are not always the best abstraction for specifying which application is talking to which other application. They are finite.
I am very aware of what you can do with UDP, I have done some very fun work trying to minimize bandwidth usage on crappy mobile connections by using and abusing it. But I think at the end of the day it is an engineering crutch.
If we insisted on properly supporting a diverse set of L4 protocols years ago we wouldn’t have wound up with NAT and slow adoption of IPv6. Address exhaustion would have been a real pressing issue. Instead you can’t even ping a server behind a NAT and firewalls run out of memory trying to manage stateful connections.
UDP is a pretty elegant design for what it is but it is barely good enough to allow us some room to make things work. Ultimately it did limit us more than it enabled us.
Ports are just a multiplexing device, the same as the IP protocol number. Besides the tiny number of bytes in the UDP header, what's the practical difference?
Negligible to none as of now. But take a look at this comment below: https://news.ycombinator.com/item?id=45528837
And I agree: it stifled what could have been a much nicer to work with set of protocols and who knows what could have been created had we not just said "well there is always UDP if you want to do your own thing".
IPSec originally ran on raw IP. These days it has to be tunneled in UDP due to TCP/UDP-only ossification.
PMTUD breaks when ICMP is blocked.
The same argument can be made that everything but HTTP being blocked is not a problem because everything can be transported on top of HTTP.
The same argument is made about HTTP. But at least in the HTTP case, you can point to protocol behavior the middle-layer protocol is enforcing on you. You can't do that with UDP; UDP is just IP, with some ports, and a checksum.
> the UDP header itself is tiny.
We're fighting for bits here! If you want to be 100% safe you need a 576-byte packet size; the UDP header is 1.4% of that.
Because it's an extra header, making data transfer that much less efficient, plus the work of making sure that clients can decode it properly.
An extra header of 8 bytes. To put that into perspective, IPv6 added 20 bytes to the IP header compared to v4.
Yeah? It's an eight byte header. The OS needs something to tag IP packets to get them delivered to the correct application. So you're thinking maybe a four byte header for 50% savings here?
Good point on there needing to be some application-level addressing anyway.
On top of that, I believe the UDP checksum can be omitted as well at least on some OSes (and is arguably not necessary for fully encrypted/authenticated payloads) – leaving really just the two bytes for the "length field".
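For reference, the entire header under discussion is just four 16-bit fields (RFC 768), and over IPv4 a checksum of zero means "no checksum"; a sketch of packing one:

```python
import struct

def udp_header(src_port: int, dst_port: int, payload_len: int,
               checksum: int = 0) -> bytes:
    """Pack the full 8-byte UDP header (RFC 768): source port,
    destination port, length (header + payload), checksum.
    A checksum of 0 means "no checksum" over IPv4."""
    return struct.pack("!HHHH", src_port, dst_port, 8 + payload_len, checksum)

hdr = udp_header(12345, 53, 32)
assert len(hdr) == 8  # the whole overhead being debated
```

Everything beyond those eight bytes is whatever the protocol layered on top chooses to add.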
So we have a checksum of the IP header; a checksum of the UDP header and a port number; an application-level stream ID or message ID or whatever the application transport protocol is using; and finally, almost certainly, an even higher-level message ID such as a URI. And that’s before you introduce encryption into it, with all that overhead. A level 4 protocol providing full integrity verification, encryption, multi-homing, multiplexing, out-of-band control, and control over transmission reliability would be amazing. But the only way you can experiment with these things is if you use UDP and ports. We take the concept of ports for granted, but if you think of ICMP or some other L4 protocols, ports aren’t the only way to identify the sending and receiving application.
If we just allowed all L4 protocol numbers through and ditched NAT we could have nice things. Or we could kick it up two layers to use QUIC for what SCTP could have been.
SRT is hot stuff over UDP: https://www.haivision.com/products/srt-secure-reliable-trans...
But even UDP is heavily restricted in most cases.
Is it really? It's been a while since I've been on a network that actually filters UDP.
Too many things that people actually need to work run over it these days, including VPNs, videoconferencing etc.
SCTP had some other nails in its coffin too. For example, it came from the telco world and so had no end-user app buy-in and a poor dev experience.
^^^^ this. I work for a big company (15k engineers). Trying to use anything that is not TCP or UDP simply doesn't work here. For years, even UDP was blocked, and the answer we got was always "why are you using UDP, use TCP instead". Yep, you read that right. Most of these folks are very short-sighted or narrow-minded. We tried to use SCTP for one project; major blunder. Zero support from network teams. SCTP is blocked everywhere. All their custom software and scripts for network deployments only work with TCP and UDP. And they will not change that. And that comes from the higher-ups, the people in charge. They are set in their ways and will not budge. As for QUIC support? Never gonna happen here.
> Trying to use anything that is not TCP or UDP simply doesn't work here.
Good news, then: QUIC uses UDP.
> As for QUIC support? Never gonna happen here.
So generic UDP is allowed, but QUIC is specifically detected and filtered? That would be bizarre.
I have also had to deal with UDP getting blocked by a middle box before. It is rare now across the internet but you never know what intranet grey beards can do.
And UDP hole punching is a crutch for the fact that we still use IPv4.
If you like SCTP, nobody's stopping you from using it today! Just put it in a minimal UDP wrapper.
WebRTC has been doing just that for peer to peer data connections (which need TCP-like semantics while traversing NATs via UDP hole punching).
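The core of the hole-punching trick, sketched on loopback (a hypothetical setup; real hole punching also needs a rendezvous server so each peer learns the other's public address and port):

```python
import socket

# Two "peers" on loopback standing in for hosts behind NATs.
# The trick: each side SENDS FIRST to the other's public endpoint,
# so its own NAT creates a mapping that admits the peer's packets.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a.settimeout(2)
b.settimeout(2)
a_addr, b_addr = a.getsockname(), b.getsockname()

# Simultaneous outbound "punches" (these would open the NAT mappings).
a.sendto(b"punch from A", b_addr)
b.sendto(b"punch from B", a_addr)

# After the punch, both directions are open.
msg_at_b, _ = b.recvfrom(1024)
msg_at_a, _ = a.recvfrom(1024)
assert msg_at_b == b"punch from A" and msg_at_a == b"punch from B"
a.close(); b.close()
```

On loopback there is no NAT to traverse, so this only illustrates the simultaneous-send exchange; symmetric NATs can still defeat the real thing.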
I get today’s realities of it but I would have preferred a world where IPv6 killed NAT and middleboxes properly supported more protocols than TCP and UDP. The original intent of IP was to have more than two protocols built on top of it. Many were built and deployed but then killed by IPv4 address exhaustion and NAT, as well as poorly configured firewalls and middleboxes that specifically wanted to mess with level 4 traffic.
UDP is a good solution but all it does is provide an 8 byte overhead and nothing that IP itself doesn’t provide for something like SCTP.
IPv6 doesn't provide a length header, so that's already 2 bytes arguably necessary for all protocols layered on top of that.
Source and destination port just seem like a reasonable baseline for alternate protocols, that's 4 more – leaving just the checksum. (If you're really desperate for space and have integrity provided by your protocol, you can even cram two more bytes in there!)
Sure, it would be conceptually nice to be able to skip UDP, but I think in terms of performance it absolutely does not matter.
QUIC doesn’t use the UDP length header to designate message length, does it?
But my point isn’t even about performance. It is about the fact that NAT, IPv4 address exhaustion, and bad firewall practices have killed any innovation in level 4 protocols. Imagine if instead of TCP, SCTP had won the protocol wars of the 1980s-1990s. Or, even better, if we had realized that we were going to run out of IPv4 addresses much earlier, when the cost of switching was smaller. It would have been so much better if firewalls didn’t drop everything except protocols 6 and 17. We could have had the opportunity to experiment with different types of transports, baked encryption in at a lower level, etc.
Basically, where we are is that we have 6- and 8-dot LEGO bricks to play with and are told that we can build anything with those, but aren’t allowed to play with any other shapes.
> QUIC doesn’t use the UDP length header to designate message length, does it?
Does it not? Not sure if it's really mandatory, but I believe one rationale for IPv6 getting rid of both its checksum and length fields was that both TCP and UDP duplicate both fields.
Given that QUIC doesn't have its own length field, I would imagine it relies on that of UDP, at least in situations where the lower layer does not provide reliable framing?
> Imagine if instead of TCP, SCTP had won the protocol wars of the 1980s-1990s. [...] We could have had the opportunity to experiment with different types of transports, baked encryption in at a lower level, etc.
Would we? Instead of TCP, SCTP would have become ossified in the stack all other things being equal (in particular the need for something like NAT due to IPv6 exhaustion), no?
Not sure I follow. QUIC is just UDP with an extra header in the opaque payload. To a firewall, it looks just like UDP.
That's the point he is making. QUIC has to be based on UDP because the networking stack is ossified enough to not allow the addition of any new Layer 4 protocol. It's not a huge drawback though.
I think they’re saying that due to how firewalls are deployed, everything end up either being built on tcp or udp, instead of using existing (or building new) layer four protocols more suited to solving the problem like sctp, et al.
I’m not sure I agree though, because many firewalls already pass other protocols today, like GRE, IPSEC, etc.
That is exactly my point, thank you for clarifying. And yes IPSEC had started forcing people to open up their firewalls. If I had my way though it would be the other way around: all IP protocol numbers except those specifically deemed obsolete or insecure should be allowed, including a range for user defined custom protocols. We really painted ourselves into a corner of 6s and 17s.
It exists partially because of ossification of protocols that killed SCTP.
As in, we wouldn't have to implement it on top of UDP
Their point is that it's a sad design choice, forced by firewalls: QUIC takes a first-class TCP-like internet protocol and wraps it with encrypted headers inside UDP for the sole purpose of preventing firewalls from blocking it or making it break in myriad subtle ways. Even QUIC's unencrypted header parts are designed to be difficult for intermediate network equipment to track and modify, because of middleboxes' history of making other protocols over TCP unreliable by doing exactly that.
SCTP is another high-performance protocol that provides many of the same network and performance features as QUIC, but it has been around for ~25 years. It was widely implemented. It was even in Linux 2.4 and available for Windows XP and MacOS. WebRTC in browsers can use SCTP, which makes sense (it predates QUIC) but is actually a bit of a problem if you want wide interoperability with WebRTC.
But despite SCTP being around and widely implemented, and doing many of the good things QUIC does, SCTP was never widely adopted, nor seriously considered for HTTP, mainly because of the problem of firewalls. (And to be fair, some operating system API limitations).
Basically, if TCP were invented today, it could not be deployed successfully over much of the internet unless it was wrapped in UDP and encrypted.
UDP-with-encryption is now playing the role that the IP protocol is supposed to play, just because too many firewalls block nearly every IP packet that doesn't have type TCP or UDP. If UDP is used but without encryption for a popular protocol, eventually too many firewalls deep-packet-inspect that UDP protocol too, and do the same kinds of blocking, tampering or making protocols break.
This sad problem is called protocol ossification. As Google's AI puts it: "QUIC is the first IETF transport protocol to deliberately minimise its wire image to avoid ossification." See also https://en.wikipedia.org/wiki/Protocol_ossification and https://http3-explained.haxx.se/en/why-quic/why-ossification
There are some downsides to QUIC's ossification resistance. I did some work for a mobile equipment manufacturer that adaptively manages the queues for multiple TCP streams from different users, and for different applications from the same user, to help ensure a better, fairer and lower-latency experience. QUIC's properties made it difficult to measure what's going on at the protocol level, forcing the queue management to use crude statistical estimators instead, which don't work well on bursty protocols.
UDP is playing exactly the role it was intended to play. Apart from saving 8 bytes of header and a checksum, what's the advantage to running the protocol directly on top of IP? How is this "ossification"? Again: this was David Reed's original design purpose for UDP! It was exactly for doing shit on top of IP without having to run through TCP's mechanism.
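For concreteness, those 8 bytes are the whole story. A hypothetical sketch packing the entire UDP header as defined in RFC 768:

```python
import struct

# The complete UDP header: source port, destination port, length,
# checksum -- four 16-bit fields, 8 bytes total. That is the full cost
# of layering a new transport over UDP instead of directly on IP.
def udp_header(src_port: int, dst_port: int, payload_len: int,
               checksum: int = 0) -> bytes:
    # Length field covers the header itself plus the payload.
    return struct.pack("!HHHH", src_port, dst_port, 8 + payload_len, checksum)

hdr = udp_header(12345, 443, payload_len=1200)
print(len(hdr))  # 8 bytes of overhead, nothing more
```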
Couldn’t have said it better myself.
One thing to note is that using HTTP/2 as anything other than an example of "this is not how to design high-throughput protocols" is unfair.
At the time, HTTP/2's multiplexing was already known to be bad for anything other than perfect, low-latency networks. I hope this was because people had faith in better connectivity, rather than ignorance of how mobile and non-LAN traffic worked.
You should probably at least try QUIC now, but you can get past HOL blocking by having multiple TCP streams. It's super cheap, cheaper than QUIC.
> you can get past HOL blocking by having multiple TCP streams. It's super cheap, cheaper than QUIC
And also super inefficient, since it duplicates the TLS handshake across streams and uses more resources in the OS and middleboxes (like them or hate them, they're a thing that might throttle you if you go too crazy with connection-level parallelism).
That's on top of very poor fairness at bottlenecks (which is per TCP stream unless there's separate traffic policing).
Even on perfect networks, HoL blocking is an issue. If the receiving end of a set of streams blocks on one stream, it ends up blocking the whole connection. One stream stops due to backpressure, all streams stop.
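A toy model of that failure mode (stream names and framing invented for illustration): frames from all streams share one in-order queue, so a stalled consumer on one stream strands everything queued behind it.

```python
from collections import deque

# Multiplexing streams over one ordered connection, HTTP/2-style:
# all frames must be delivered in order, regardless of which stream
# they belong to.
connection = deque([("s1", b"a"), ("s2", b"b"), ("s1", b"c"), ("s2", b"d")])

def deliver(blocked_streams: set) -> list:
    delivered = []
    while connection:
        stream, data = connection[0]
        if stream in blocked_streams:
            break  # in-order delivery: everything behind this frame waits
        delivered.append((stream, data))
        connection.popleft()
    return delivered

# Stream s1's consumer applies backpressure; s2's frames are stuck too,
# even though s2's consumer is ready.
print(deliver(blocked_streams={"s1"}))  # [] -- nothing gets through
```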
I used QUIC extensively to implement https://github.com/connet-dev/connet and, while I'm super happy with how it turned out, I think QUIC currently suffers from some immaturity - most implementations are still ongoing/not production-ready (for example in Java), and in many cases it is only viewed as a way to power HTTP/3, instead of as a self-standing protocol/API that people can use (for example, trying to use QUIC on Android).
In any case, I'm optimistic that QUIC has a bright future. I don't expect it to replace TCP, but give us another tool we can use when it is called for.
> QUIC’s design intentionally separates the wire protocol from the congestion control algorithm
Is that not the case for TCP as well? Most congestion control algorithms just assign new meanings to existing wire-level flags (e.g. duplicate ACKs), or even only change sender-side behavior.
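Right - here's a Reno-style sketch (toy numbers, sender-side only, not a faithful implementation) of the parent's point: loss is inferred from duplicate ACKs that already exist on the wire, with no new header fields.

```python
# Classic TCP congestion control reinterprets existing signals: three
# duplicate ACKs mean "a segment was lost, halve the window". All of
# this logic lives on the sender; the wire format is unchanged.
class RenoSender:
    def __init__(self):
        self.cwnd = 10.0   # congestion window, in segments
        self.dup_acks = 0

    def on_ack(self, ack_no: int, last_ack: int) -> None:
        if ack_no == last_ack:
            self.dup_acks += 1
            if self.dup_acks == 3:               # fast-retransmit trigger
                self.cwnd = max(self.cwnd / 2, 1.0)  # multiplicative decrease
        else:
            self.dup_acks = 0
            self.cwnd += 1.0 / self.cwnd         # additive increase per ACK

sender = RenoSender()
for _ in range(3):
    sender.on_ack(ack_no=100, last_ack=100)  # three duplicate ACKs arrive
print(sender.cwnd)  # window halved from 10 to 5
```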
> QUIC gives control back to application developers to tailor congestion control to their use case
That's what it actually does: It moves the congestion control implementation from the OS to user space.
In that sense, it's the same tradeoff as containers vs. regular binaries linking to shared libraries: Great if your applications are updated more often than your OS; not so great if it's the other way around.
> QUIC gives control back to application developers to tailor congestion control to their use case
If I understand modern application development correctly, this translates to "the developers will import another library which they don't understand, and will wreak havoc on other applications' data streams by optimizing only for themselves".
Again, if I remember correctly, an OS is "the layer which manages the sharing of limited resources among many processes which requests/needs it", and the OS can do system-wide, per socket congestion control without any effort because of the vantage point it has over networking layer.
Assuming that every application will do congestion control correctly while not choking everyone else even unintentionally with user space's limited visibility is absurd at worst, and wishful thinking at best.
The whole ordeal is a direct violation of the application separation that comes with protected mode.
> Assuming that every application will do congestion control correctly while not choking everyone else even unintentionally with user space's limited visibility is absurd at worst, and wishful thinking at best.
Why? All practically used variants of TCP achieve fairness at the bottleneck without any central arbiter or explicit view of the upstream congestion situation. Besides, UDP has never had congestion control and has been around for decades.
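The classic Chiu-Jain argument for why no central arbiter is needed fits in a few lines. A toy simulation (capacity, step sizes and synchronized loss are all simplifying assumptions): two AIMD flows sharing a bottleneck converge toward an even split, each reacting only to the shared congestion signal.

```python
# Two flows share a 100-unit bottleneck. Each flow only observes
# "congested or not" and applies additive-increase/multiplicative-
# decrease. Additive increase preserves the gap between flows, while
# halving shrinks it, so the rates converge toward fairness.
CAPACITY = 100.0

def aimd_step(rates):
    congested = sum(rates) > CAPACITY  # shared loss signal at the bottleneck
    return [r / 2 if congested else r + 1.0 for r in rates]

rates = [90.0, 10.0]   # start wildly unfair
for _ in range(500):
    rates = aimd_step(rates)

print(rates)  # roughly equal shares, with no coordination between flows
```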
> The whole ordeal is a direct violation of the application separation that comes with protected mode.
Are you concerned about fairness between applications on the same OS or fairness at the bottleneck, usually upstream from both application and OS/host?
In the former case, the OS always gets the last word, whether you're using TCP or your own homebrew congestion control via UDP, and you can make sure that each application gets the exact same fair egress rate if required. In the latter case, nothing prevents anyone from patching their kernel and running "unfair TCP" today.
> Running in user space offers more flexibility for resource management and experimentation.
I stopped reading here. This isn’t really an essential property of QUIC, there’s a lot of good reasons to eventually try to implement this in the kernel.
https://lwn.net/Articles/1029851/
Maybe not an essential property of QUIC, but definitely one of not using TCP.
Most OSes don't let you send raw TCP segments without superuser privileges, so you can't just bring your own TCP congestion control algorithm in userspace, unless you also wrap your custom TCP segments in UDP.
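A quick illustration of that asymmetry (the outcome depends on your OS and privileges, so this is just a sketch):

```python
import socket

# An unprivileged process can open a UDP socket freely, but hand-
# crafting TCP segments requires SOCK_RAW, which most OSes reserve
# for root / CAP_NET_RAW.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # always allowed
udp.close()

try:
    raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
    raw.close()
    can_send_raw_tcp = True   # we must be running privileged
except PermissionError:
    can_send_raw_tcp = False  # the common case: bringing your own
                              # congestion control means wrapping your
                              # transport in UDP

print("raw TCP sockets allowed:", can_send_raw_tcp)
```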
Why do the hard work if the same thing can be done by the kernel, or even by the card itself?
Drown other applications for your own benefit?
> Why do the hard work if the same thing can be done by the kernel, or even by the card itself?
How would you swap out the TCP congestion control algorithm in an OS, or even hardware, you don't control?
> Drown other applications for your own benefit?
Fairness equivalent to classic TCP is a design goal of practically all alternative algorithms, so I'm not sure what you're implying.
It's entirely possible to improve responsiveness without compromising on fairness, as e.g. BBR has shown.
> How would you swap out the TCP congestion control algorithm in an OS, or even hardware, you don't control?
On the contrary, introducing their own novel/mysterious/poorly implemented congestion control algorithms is not a thing I want userspace applications doing.
Fortunately you don't get any say in what my userspace applications do on my own hardware.
And if you worry about hostile applications on your own hardware, the OS is an excellent point to limit what they can do – including overwhelming your network interface.
> the OS is an excellent point to limit what they can do – including overwhelming your network interface
One might even call this "congestion control"!
No, congestion control in the network sense is about congestion at choke points in packet networks due to higher inflow than outflow rates.
While you're still on your own host, your OS has complete visibility into which applications are currently intending to send data and can schedule them much more directly.
"Drown other applications" is unfortunately exactly what happens when you let the Linux kernel run your TCP stack. Profile your application and you may discover that your CPUs are being spent running the protocol stack on behalf of other applications.
What do you mean by "other applications"?
I mean when your application sends on a socket, the kernel may also send and receive traffic for another task while it's in the syscall, just for funsies, and this is true even if your applications are containerized and you believe their CPU cores are dedicated to the container.
Ah, but that's an OS implementation problem, not one with TCP or QUIC, no?
Sure, but the ready appeal of QUIC is that it is in user space by nature, while Linux ties TCP to the kernel. You either need special privileges to run user space TCP on Linux, or you need a different operating system kernel altogether.
The most obnoxious thing about QUIC is I don't need encryption all of the time, actually the majority of the time. Useless overhead.
Even if you have nothing to hide and don't care about accidental or intentional data modification, the benefit of largely cutting out "clever" middleboxes alone is almost always worth it.
QUIC would be the end of the free internet if it ever "took over" but luckily it won't. It's not built to do so, it's only built for corporate use cases.
QUIC implementations do not allow just anyone to connect to anyone else. Instead, because it was built entirely with corporate for-profit use cases in mind and open-washed through the IETF, the idea of a third-party corporation having to authenticate the identity of all connections is baked in. And 99.999% of QUIC libs, and the way they're shipped in clients, cannot even connect to a server without a third-party corp first saying they know the endpoint and allow it. Fine for corporate/profit use cases where security of the monetary transactions is all that matters. Much less fine for human use cases, where it forces centralization and easy control by our rapidly enshittifying authoritarian governments. QUIC is the antithesis of the concept of the internet and its robustness and routing around damage.
Huh, I never knew, I've been using QUIC on my Raspberry Pi's web server for years... Did I unknowingly go corporate!?
Even if you don't want to get a Letsencrypt certificate, you can always use a self-signed one and configure your clients to trust it on first use or entirely ignore it.
SSH also uses "mandatory host keys", if you think about it. It's really not a question of the protocols but rather of common client libraries and tooling.
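A trust-on-first-use pin in the SSH known_hosts style can be sketched in a few lines (the storage dict and cert bytes below are stand-ins for persistent storage and real DER certificates, not any library's API):

```python
import hashlib

# Pin the SHA-256 fingerprint of the peer's certificate the first time
# we see it, and reject any later connection presenting a different one.
known_hosts = {}  # host -> pinned fingerprint; a real client would persist this

def check_tofu(host: str, cert_der: bytes) -> bool:
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    pinned = known_hosts.setdefault(host, fingerprint)  # pin on first use
    return pinned == fingerprint

assert check_tofu("example.test", b"fake-cert-A")      # first use: trusted
assert check_tofu("example.test", b"fake-cert-A")      # same cert: ok
assert not check_tofu("example.test", b"fake-cert-B")  # changed cert: reject
```

No CA is involved at any point, which is the parent's point: the centralization lives in common client tooling, not in the protocol.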
I guess you are referring to the TLS requirement? I guess I could see how on a more restrictive platform like a phone you could conceivably be prevented from accepting alternate CAs or self signed certificates.
There's a fairly far-along draft for replacing WebRTC's SCTP with QUIC for doing p2p work. It doesn't seem to have any of these challenges, and seems perfectly viable there for connecting peers. https://github.com/w3c/p2p-webtransport
Alas alas, basically stalled out, afaik no implementation. I wish Microsoft (the spec author) or someone would pick this back up.
WebRTC wraps SCTP in DTLS, so the "great challenge of encryption" has never been a problem there.
It just uses self-signed certificates, which is maybe conceptually slightly clunky compared to "pure" anonymous TOFU, but allows reusing existing stacks.