April 7th, 2014: So the last comic about how to tune TCP parameters gave what I thought was good advice, but another Ryan disagrees! And since I feel bad for people who are actually looking to tune their TCP parameters, and also since I respect all Ryans, here is what Ryan wrote. If you come to Dinosaur Comics for the comics but stay for the network infrastructure discussion, I have some good news:
I'm going to have to disagree with T-Rex's advice about TCP buffers. Increasing the buffer sizes contributes to the problem of "bufferbloat": making the buffers bigger is generally only going to increase latency.
When you're sending traffic to a remote router, you want to match the rate at which you send packets to the rate at which they can be received. Going too fast will only clog things up. TCP has an algorithm to determine the right rate: keep increasing the rate at which you send packets until some of them get dropped. Dropping packets is okay, because TCP has methods for retransmitting. This should let you find the optimum rate.
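If it helps to see the shape of that, here's a toy Python sketch of the idea (the capacity number is made up and this is nothing like a real TCP stack, it's just the probe-until-drops-then-back-off loop):

    # Toy sketch of TCP-style rate probing: speed up until you see a drop,
    # then back off. LINK_CAPACITY is an invented number for illustration.

    LINK_CAPACITY = 100.0   # pretend the path can carry 100 packets per round trip

    def saw_a_drop(rate):
        # In this toy world, drops show up as soon as you exceed capacity.
        return rate > LINK_CAPACITY

    rate = 10.0
    for rtt in range(30):
        if saw_a_drop(rate):
            rate = rate / 2     # drop seen: back off hard
        else:
            rate = rate + 5     # no drops: nudge the rate up
        print(f"round {rtt:2d}: sending at {rate:6.1f} packets per round trip")

With prompt drop feedback, the rate just hovers around what the link can actually carry.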
However, what happens is that people put these huge buffers on their receiving routers. So even if you're sending out packets too quickly, you won't find out until that buffer is full, because only when the buffer is full will you start to get notifications that your packets are being dropped.
So what ends up happening is that you send packets more and more quickly, hit the limit, all of a sudden receive a crap-ton of drop notices, and conclude that the other end can only receive packets very slowly, so you drop back to a super-slow speed. Then you don't see any drops, so you start to dial the speed back up, and the cycle starts all over again.
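Here's the same toy sketch, but with a giant buffer parked at the bottleneck, so the sender only hears about drops long after it has blown past the link's capacity (again, every number here is invented for illustration):

    # Same toy setup, but with a huge buffer: drops are only reported once the
    # buffer overflows, by which point a big queue (i.e. latency) has built up.

    CAPACITY = 100.0    # packets per round the link can actually drain
    BUFFER   = 1000.0   # packets the router will happily queue before dropping

    queue, rate = 0.0, 50.0
    for rtt in range(60):
        queue += rate                     # packets arriving this round
        queue -= min(queue, CAPACITY)     # packets the link drains this round
        if queue > BUFFER:                # drops only once the buffer overflows
            queue = BUFFER
            rate = rate / 2               # the sender finally backs off, hard
        else:
            rate = rate + 5               # no drops seen, so keep speeding up
        print(f"round {rtt:2d}: rate {rate:6.1f}, "
              f"{queue:6.1f} packets sitting in the buffer")

You get the big sawtooth in the send rate, and on top of that every packet spends ages sitting in that queue before it goes anywhere.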
So you want the receiving router to send drop notices promptly, to give the sender a good idea of how quickly to send. So why even have a buffer at all? Well, you want it there to absorb bursty traffic. But if the buffer is consistently full, all you're doing is adding latency.
What should happen is that the receiving router actively manages its queue, and sometimes drops packets before it strictly has to, so as to give the people sending stuff to it a better idea of how quickly it's actually capable of receiving.
The new CoDel ("controlled delay") module is the bufferbloat community's answer to this problem. It watches the queue and makes sure that stuff isn't staying in there for too long, and if it is, it starts dropping it.
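Very roughly, the core check looks something like this. The 5 ms / 100 ms constants are CoDel's published defaults; everything else (including leaving out CoDel's actual drop-scheduling control law) is my simplification:

    # A very stripped-down sketch of the CoDel idea: track how long packets sit
    # in the queue (their "sojourn time"), and once that delay has stayed above
    # a small target for a whole interval, start dropping.

    import time

    TARGET   = 0.005   # 5 ms of queueing delay is tolerable
    INTERVAL = 0.100   # 100 ms is how long it has to persist before we drop

    class TinyCodel:
        def __init__(self):
            self.first_above = None   # when delay first went above TARGET

        def should_drop(self, enqueue_time):
            sojourn = time.monotonic() - enqueue_time
            if sojourn < TARGET:
                self.first_above = None   # delay is fine again; reset
                return False
            if self.first_above is None:
                self.first_above = time.monotonic()
            # only start dropping once delay has been bad for a full interval
            return time.monotonic() - self.first_above >= INTERVAL

    # usage: stamp each packet when it's queued, check it when it's dequeued
    codel = TinyCodel()
    stamped_at = time.monotonic()
    print(codel.should_drop(stamped_at))   # False: no queueing delay here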
You can add it to an interface using the tc (traffic control) program, with something like "tc qdisc replace dev eth0 root fq_codel" (swap in your actual interface for eth0). The qdisc is called "fq_codel" because it also performs fair queueing, similar to what the "sfq" (stochastic fairness queueing) qdisc does.
So, if your goal was to troll buffer nerds into sending long-winded techno-ideological rants about active queue management, well played, sir.
– Ryan

One year ago today: this summer... take it to the edge... ONE more time