Revisiting TCP Congestion Control Using Delay Gradients
Abstract
Traditional loss-based TCP congestion control (CC) tends to induce high queuing delays and perform badly across paths containing links that exhibit packet losses unrelated to congestion. Delay-based TCP CC algorithms infer congestion from delay measurements and tend to keep queue lengths low. To date, most delay-based CC algorithms do not coexist well with loss-based TCP, and require knowledge of a network path’s RTT characteristics to establish delay thresholds indicative of congestion. We propose and implement a delay-gradient CC algorithm (CDG) that no longer requires knowledge of a path-specific minimum RTT or delay thresholds. Our FreeBSD implementation is shown to coexist reasonably with loss-based TCP (NewReno) in lightly multiplexed environments, share capacity fairly between instances of itself and NewReno, and exhibit improved tolerance of non-congestion-related losses (86% better goodput than NewReno in the presence of 1% packet losses).
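The sketch below illustrates the core delay-gradient idea the abstract describes: congestion is inferred from the change in RTT between successive intervals rather than from an absolute delay threshold, so no path-specific minimum RTT needs to be known. It is a minimal illustration in C, not the paper's FreeBSD code; the moving-average window size (WINDOW), the backoff scaling parameter (G), and the exponential backoff-probability form are assumptions chosen for clarity rather than the authors' tuned design.

```c
#include <math.h>
#include <stdlib.h>

#define WINDOW 8   /* moving-average smoother length (assumed value) */
#define G      3.0 /* backoff scaling parameter (assumed value) */

struct cdg_state {
	double samples[WINDOW]; /* recent per-RTT gradient samples */
	int    idx;             /* circular-buffer index */
	double sum;             /* running sum over the window */
	double prev_rtt_min;    /* minimum RTT seen in the previous interval */
};

/* Smooth the latest gradient with a simple moving average. */
static double
smooth_gradient(struct cdg_state *s, double g)
{
	s->sum += g - s->samples[s->idx];
	s->samples[s->idx] = g;
	s->idx = (s->idx + 1) % WINDOW;
	return (s->sum / WINDOW);
}

/*
 * Called once per RTT with that interval's minimum RTT measurement.
 * Returns 1 if the sender should back off its congestion window.
 */
int
cdg_should_backoff(struct cdg_state *s, double rtt_min)
{
	/* The delay gradient needs no path-specific threshold. */
	double g = rtt_min - s->prev_rtt_min;
	s->prev_rtt_min = rtt_min;

	double gbar = smooth_gradient(s, g);
	if (gbar <= 0.0)
		return (0); /* delay flat or falling: no congestion signal */

	/* Probabilistic backoff grows with the positive smoothed gradient. */
	double p = 1.0 - exp(-gbar / G);
	return (((double)rand() / RAND_MAX) < p);
}
```

Because the decision depends only on the gradient's sign and magnitude, a rising queue triggers probabilistic backoff before loss occurs, while a flat or falling delay lets the window keep growing; this is one plausible reading of how a gradient-based signal can coexist with loss-based senders such as NewReno.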