Bufferbloat (nonfiction)


Bufferbloat is high latency in packet-switched networks caused by the excessive buffering of packets. Bufferbloat can also cause packet delay variation (also known as jitter), as well as reduce overall network throughput. When a router or switch is configured to use excessively large buffers, even very high-speed networks can become practically unusable for many interactive applications such as voice over IP (VoIP), online gaming, and even ordinary web surfing.

Some communications equipment manufacturers designed unnecessarily large buffers into their network products. In such equipment, bufferbloat occurs when a network link becomes congested and packets sit queued for long periods in these oversized buffers. In a first-in first-out (FIFO) queuing system, overly large buffers result in longer queues and higher latency without improving network throughput.
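
The harm is straightforward to quantify: a packet arriving at a full FIFO buffer must wait for the entire buffer to drain, so the worst-case added delay is simply the buffer size divided by the link rate. A minimal sketch in Python (the figures are illustrative, not from this article):

    def max_queue_delay_ms(buffer_bytes, link_rate_bps):
        """Worst-case delay (ms) a packet waits behind a full FIFO buffer."""
        return buffer_bytes * 8 / link_rate_bps * 1000

    # A 1 MiB buffer ahead of an 8 Mbit/s uplink adds over a second of latency:
    print(max_queue_delay_ms(1024 * 1024, 8_000_000))      # ~1048.6 ms
    # The same buffer drains in under 10 ms on a 1 Gbit/s link:
    print(max_queue_delay_ms(1024 * 1024, 1_000_000_000))  # ~8.4 ms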

The bufferbloat phenomenon was described as early as 1985. It gained more widespread attention starting in 2009.

Several technical solutions exist, which can be broadly grouped into two categories:

  • Solutions that target the network, such as active queue management (AQM) schemes that keep standing queues short at the bottleneck (a sketch of this idea follows the list)
  • Solutions that target endpoints, such as TCP congestion-control algorithms that back off before the bottleneck buffer fills
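
As an illustration of the first category, modern AQM algorithms such as CoDel drop packets based on how long each packet has waited in the queue (its sojourn time) rather than on how full the queue is. Below is a highly simplified, hypothetical sketch of that idea; the real algorithm (RFC 8289) additionally ramps up its drop rate over time rather than dropping at a fixed threshold:

    import time
    from collections import deque

    TARGET = 0.005    # 5 ms: tolerable standing queue delay (CoDel's default)
    INTERVAL = 0.100  # 100 ms: how long delay may exceed TARGET before dropping

    class SojournQueue:
        """Simplified CoDel-style queue: drop packets once their waiting
        time has stayed above TARGET for at least INTERVAL (sketch only)."""

        def __init__(self):
            self.q = deque()          # entries are (enqueue_time, packet)
            self.first_above = None   # when sojourn time first exceeded TARGET

        def enqueue(self, packet):
            self.q.append((time.monotonic(), packet))

        def dequeue(self):
            while self.q:
                enq_time, packet = self.q.popleft()
                sojourn = time.monotonic() - enq_time
                if sojourn < TARGET:
                    self.first_above = None   # queue has drained; all is well
                    return packet
                now = time.monotonic()
                if self.first_above is None:
                    self.first_above = now
                if now - self.first_above < INTERVAL:
                    return packet             # tolerate a short burst
                # Persistent standing queue: drop this packet, try the next.
            return None

Because the test is on delay rather than queue length, the same logic behaves sensibly at any link speed, which is exactly what fixed-size buffer thresholds fail to do.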

In the News

Fiction cross-reference

Nonfiction cross-reference

  • Congestion window (nonfiction) - In TCP, the congestion window is one of the factors that determines how many bytes can be outstanding at any time. It is maintained by the sender, and is not to be confused with the sliding window size, which is maintained by the receiver. The congestion window is a means of keeping the link between sender and receiver from being overloaded with too much traffic; its size is set by estimating how much congestion there is on the link. When a connection is set up, the congestion window, a value maintained independently at each host, is initialized to a small multiple of the maximum segment size (MSS) allowed on that connection. The window then evolves according to an additive increase/multiplicative decrease (AIMD) approach: as long as all segments are received and their acknowledgments reach the sender on time, the window grows by one MSS per acknowledgment, doubling roughly every round trip (slow start); once the window reaches the slow-start threshold (ssthresh), it increases linearly, at a rate of 1/(congestion window) segments per new acknowledgment, i.e. roughly one MSS per round trip. The window keeps growing until a timeout occurs, at which point: (1) ssthresh is set to half the congestion window size before the timeout; (2) the congestion window is reset to 1 MSS; (3) slow start is initiated again. See TCP congestion control (nonfiction). A short simulation of this growth pattern follows below.
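
The growth pattern described above can be simulated in a few lines. A minimal, hypothetical sketch (the window is counted in MSS units; real TCP counts bytes and also reacts to duplicate acknowledgments, not just timeouts):

    def simulate_cwnd(rtts, ssthresh=16.0, timeout_at=None):
        """Congestion window (in MSS) over successive round trips:
        exponential growth in slow start until ssthresh, then linear
        growth in congestion avoidance; a timeout halves ssthresh and
        restarts slow start from 1 MSS. Illustration only."""
        cwnd, history = 1.0, []
        for rtt in range(rtts):
            history.append(cwnd)
            if rtt == timeout_at:
                ssthresh = max(cwnd / 2, 1.0)  # half the window at the loss
                cwnd = 1.0                     # reset and re-enter slow start
            elif cwnd < ssthresh:
                cwnd *= 2                      # slow start: doubles each RTT
            else:
                cwnd += 1                      # additive increase: +1 MSS/RTT
        return history

    print(simulate_cwnd(12, ssthresh=16, timeout_at=7))
    # [1.0, 2.0, 4.0, 8.0, 16.0, 17.0, 18.0, 19.0, 1.0, 2.0, 4.0, 8.0]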

External links: