The difference is that BBR does not use loss as a signal of congestion. Most TCP stacks will cut their send windows in half (or otherwise greatly reduce them) at the first sign of loss. So if you're on a lossy VPN, or sending a huge burst at 1Gb/s on a 10Mb/s VPN uplink, TCP will normally see loss, and back way off.
BBR tries to find the bottleneck bandwidth rate, i.e. the bandwidth of the narrowest or most congested link. It does this by measuring the round-trip time and increasing the transmit rate until the RTT increases. When the RTT increases, the assumption is that a queue is building at the narrowest portion of the path, and that the increase in RTT is proportional to the queue depth. It then drops its rate until the RTT normalizes as the queue drains. It sends at that rate for a period of time, and then slightly increases the rate to see if the RTT increases again (if not, it means the queuing it saw before was due to competing traffic that has since cleared).
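Here's a toy sketch of that probing idea, not real BBR: the path model and all constants are made up for illustration, and a hidden BOTTLENECK_BPS stands in for the link the sender is trying to discover.

```python
# Toy model of the probing loop described above: raise the send rate until
# RTT climbs (a queue is forming at the bottleneck), back off until RTT
# returns toward baseline, cruise there, and periodically probe up again.

BOTTLENECK_BPS = 40e6   # hidden from the sender; what it is trying to find
BASE_RTT = 0.030        # propagation delay in seconds
queue_bytes = 0.0

def path_rtt(send_bps: float, dt: float = 0.1) -> float:
    """Simulated path: a queue grows when we send faster than the bottleneck."""
    global queue_bytes
    queue_bytes = max(0.0, queue_bytes + (send_bps - BOTTLENECK_BPS) / 8 * dt)
    return BASE_RTT + queue_bytes * 8 / BOTTLENECK_BPS  # add queueing delay

rate = 5e6                      # start well below the bottleneck
min_rtt = path_rtt(rate)

for step in range(200):
    rtt = path_rtt(rate)
    if rtt > min_rtt * 1.25:    # RTT inflated -> queue building -> back off
        rate *= 0.9
    elif step % 20 == 0:        # periodically probe for more bandwidth
        rate *= 1.25
    min_rtt = min(min_rtt, rtt)

print(f"estimated bottleneck ~ {rate/1e6:.1f} Mb/s (actual {BOTTLENECK_BPS/1e6:.0f} Mb/s)")
```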
I upgraded from a 10Mb/s cable uplink to 1Gb/s symmetrical fiber a few years ago. When I did so, I was ticked that my upload speed on my corp. VPN remained at 5Mb/s or so. When I switched to RACK TCP (or BBR) on FreeBSD, my upload went up by a factor of 8 or so, to about 40Mb/s, which is the limit of the VPN.
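For what it's worth, on Linux the algorithm can also be picked per socket with the TCP_CONGESTION option (on FreeBSD, as I understand it, the RACK/BBR stacks are selected via sysctl rather than per socket). A minimal sketch, assuming a Linux kernel with the bbr module available:

```python
# Linux-specific sketch: select the congestion control algorithm for one socket.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
except OSError:
    print("bbr not available on this kernel; keeping the default")

# Read back what the socket ended up with (returned as NUL-padded bytes).
algo = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print("congestion control:", algo.split(b"\x00", 1)[0].decode())
s.close()
```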
You seem quite knowledgeable in this domain. Have you authored any blog posts to expand on this topic? I would welcome the chance to learn more from you.
No, fast retransmit basically does what it says -- retransmits things quicker. However, it is orthogonal to what the congestion control (CC) algorithm decides to do with the send window in the face of loss. Older CC like Reno halves the send window. Newer ones like CUBIC are more aggressive, and cut the window less (and grow it faster). However, RACK and BBR are still superior in the face of a lossy link.
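To put rough numbers on that difference, here's a toy calculation with the constants as I recall them from the Reno and CUBIC RFCs (5681 and 8312); it ignores slow start, pacing, and everything else:

```python
# Toy comparison of how Reno and CUBIC react to a single loss.
W_MAX = 100.0          # cwnd (in segments) just before the loss
BETA_RENO = 0.5        # Reno halves the window
BETA_CUBIC = 0.7       # CUBIC keeps 70% of it
C = 0.4                # CUBIC scaling constant

def cubic_window(t: float) -> float:
    """CUBIC window t seconds after the loss: W(t) = C*(t-K)^3 + W_max."""
    k = (W_MAX * (1 - BETA_CUBIC) / C) ** (1 / 3)
    return C * (t - k) ** 3 + W_MAX

print(f"after loss: reno={W_MAX*BETA_RENO:.0f}, cubic={W_MAX*BETA_CUBIC:.0f} segments")
for t in (0.0, 2.0, 4.0):
    print(f"cubic window {t:.0f}s after loss: {cubic_window(t):.0f} segments")
```

The point of the curve is that CUBIC not only cuts less on loss, it also climbs back toward the old window quickly, which is why it fares better than Reno on a lossy link (though still worse than RACK/BBR).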
Depending on the particular situation, maybe vegas would work as well?
In particular, since Wireguard runs over UDP, using vegas over Wireguard seems to me like it should be a good fit (based on a very limited understanding, though :/ ). It is just a question of how well it would work on the other side of the reverse proxy, since I don't think the congestion control can be set per link?
Er, I was confused; of course being over UDP won't make the kind of difference I was thinking, since the congestion control only governs when packets are sent. Although I heard a while back that UDP packets can be dropped more readily during congestion. If that is the case, and the congestion isn't too severe but still leads to dropped packets because the tunnel is UDP, then possibly vegas would help.