duality of credits and drops

Tom Anderson (tom@emigrant)
Tue, 9 Jun 1998 16:52:41 -0700 (PDT)

On Friday, I drew some stuff on a piece of paper; this note
transcribes that discussion into email so it doesn't escape
into the ether.

I would be really interested in writing a Patterson-style
paper, Detour: A Case for a Virtual Internet, quantifying
the intuition that wide-area bandwidth is becoming
more expensive relative to CPU performance, and outlining
our plans.

I would also be really interested in trying to write a paper, The
Duality of Credits, ECN, Rates, and Drops for Congestion Control.
The idea here is to try to give an academic resolution to
the credits-versus-drops debate.

Usually, drops are seen as inefficient -- packets that
will eventually be dropped take up link bandwidth (a scarce
resource!) that could be put to better use with more careful
control of buffers, as with credits. And credits are slammed
because they don't do well when there are too few buffers.
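
To make the credit side concrete, here is a minimal sketch in
Python (the names and numbers are mine, purely for illustration):
the upstream node may transmit only while it holds a credit, and
the downstream node returns a credit each time it frees a buffer,
so nothing is ever dropped -- but the sender stalls whenever
credits run out.

    from collections import deque

    class CreditLink:
        # Hop-by-hop credit flow control across one link.
        def __init__(self, buffers):
            self.credits = buffers    # one credit per downstream buffer
            self.queue = deque()      # downstream buffer occupancy

        def send(self, pkt):
            if self.credits == 0:
                return False          # sender stalls; nothing is dropped
            self.credits -= 1
            self.queue.append(pkt)
            return True

        def drain(self):
            if self.queue:
                self.credits += 1     # freed buffer -> credit goes back
                return self.queue.popleft()
            return None

    link = CreditLink(buffers=4)
    print([link.send(i) for i in range(6)])  # 4 x True, then 2 x False
    link.drain()
    print(link.send(99))                     # True: a credit came back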

One way to look at this is to imagine a 2x2 matrix: many short
flows vs. few long flows, crossed with small buffers vs. large
buffers. Credits do well when there are lots of buffers; drops
do well when there are long flows. So both do well in the case
that gets simulated (few long flows and lots of buffers); but
neither does well in the situation as it exists today (many
short flows and small buffers). And this is where things
are really interesting, since end2end solutions don't work --
you can't deal with lots of short flows by throttling them
back individually, unless you have gobs of buffer space.
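
Spelled out, my reading of the four cells:

                       small buffers       large buffers
    few long flows     drops do well       both do well (the simulated case)
    many short flows   neither does well   credits do well

And to put a rough number on "gobs of buffer space": with
per-flow control, every active flow pins at least one packet of
buffering at each hop, so 10,000 short flows at 1500 bytes apiece
is already 15 MB per router. Round numbers, but they give the
flavor.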

Along these same lines, RED suffers from the same kind of problem
as credits if the router shares buffers between output links:
buffers accumulate for the bottleneck link,
preventing other links from getting traffic through.
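
A toy simulation makes the effect easy to see. This is my own toy
setup -- a shared pool with plain tail drop, which is what the
queue degrades to once the pool is pinned: the slow link's queue
soaks up the pool, and the fast link's traffic takes drops even
though its output is sitting idle.

    from collections import deque
    import random

    POOL = 20                          # buffers shared by all output links
    queues = {"slow": deque(), "fast": deque()}
    drops = {"slow": 0, "fast": 0}

    random.seed(1)
    for t in range(500):
        for out in ("slow", "fast"):   # one arrival per output per tick
            if sum(len(q) for q in queues.values()) < POOL:
                queues[out].append(t)
            else:
                drops[out] += 1        # shared pool exhausted
        if queues["fast"]:
            queues["fast"].popleft()   # fast link drains every tick
        if queues["slow"] and random.random() < 0.1:
            queues["slow"].popleft()   # bottleneck link rarely drains

    print(drops)  # the fast link takes drops despite idle capacity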

Part of the paper might be to propose a hypothetical new
router drop policy, Hierarchical RED, allowing drops or ECN to
signal aggregates of traffic to back off. (Currently,
if there are lots of short flows, RED degrades to drop-tail.)
This is motivated by a 3x4 matrix: per-flow hop-by-hop /
aggregated hop-by-hop / end2end, crossed with credits / ECN /
measurements / drops. Most of these boxes have been explored,
but some -- congestion control via hop-by-hop drops, for
example -- have not.
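
Laid out as a grid (rows are where the control loop runs, columns
are the congestion signal; the filled-in cells are my guesses at
the familiar instances, and a "?" just means I can't name a
well-known scheme there):

                            credits     ECN        measurements  drops
    per-flow hop-by-hop     ATM credit  ?          ?             ? (open)
    aggregated hop-by-hop   ?           H-RED?     ?             ? (open)
    end2end                 ?           TCP + ECN  ?             TCP today

The drops column in the two hop-by-hop rows is exactly the
unexplored territory mentioned above.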

It also seems to me that the RED papers make the absurd claim
that the approach is relatively robust to whichever parameters you pick.
But I believe we could demonstrate that no fixed parameter
choices work well in all environments -- e.g., what works
well in a WAN, with large RTTs and high drop rates, won't work
as well in a LAN. In other words, you need self-tuning RED,
just as you need self-tuning file systems, and self-tuning
everything. I think this would make a nice paper.
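
One way to cash out "self-tuning", sketched in Python (the
adaptation rule and every constant here are mine, just to show
the shape of the idea): keep RED's averaging, but adapt max_p so
the average queue holds between the thresholds, instead of
hand-picking max_p per environment.

    class SelfTuningRED:
        def __init__(self, min_th=5, max_th=15, wq=0.002):
            self.min_th, self.max_th, self.wq = min_th, max_th, wq
            self.avg = 0.0
            self.max_p = 0.02          # initial guess, not a magic constant

        def on_packet(self, qlen):
            # RED's EWMA of the instantaneous queue length
            self.avg = (1 - self.wq) * self.avg + self.wq * qlen
            # adaptation: average drifting above max_th means the drop
            # curve is too gentle; sitting below min_th, too harsh
            if self.avg > self.max_th:
                self.max_p = min(0.5, self.max_p * 2)
            elif self.avg < self.min_th:
                self.max_p = max(0.001, self.max_p / 2)
            # standard RED drop/mark probability
            if self.avg <= self.min_th:
                return 0.0
            if self.avg >= self.max_th:
                return 1.0
            frac = (self.avg - self.min_th) / (self.max_th - self.min_th)
            return frac * self.max_p

The point isn't this particular rule; it's that the controller,
not the operator, chases the operating point as RTTs and load
change.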

And as a final thought, one of the papers in my queue suggested
that you don't need per-flow credits, that you can get by with
per-destination credits, since you can multiplex together the
packets for the same destination. This seems right, and could
substantially reduce the buffers needed to make a credit scheme work.
One way to look at this is to imagine that we reserve buffers for a
"reverse multicast tree" rooted at the destination; if we want to
handle multiple paths, we can have multiple reverse multicast
trees (or, perhaps more simply, a directed acyclic graph)
rooted at the destination.
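
The bookkeeping change is small; a sketch (hypothetical names
throughout): key the credit table by destination rather than by
flow, so all flows to the same destination draw on one pool, and
the buffer reservation scales with the number of active
destinations (the nodes of that reverse tree) rather than with
the number of flows.

    from collections import defaultdict

    class PerDestCredits:
        def __init__(self, credits_per_dest):
            self.credits = defaultdict(lambda: credits_per_dest)

        def send(self, flow_id, dest):
            # flow_id plays no part in admission: packets for the
            # same destination are multiplexed onto one credit pool
            if self.credits[dest] == 0:
                return False
            self.credits[dest] -= 1
            return True

        def credit_returned(self, dest):
            self.credits[dest] += 1

    hop = PerDestCredits(credits_per_dest=2)
    print(hop.send("flow-a", "D"))   # True
    print(hop.send("flow-b", "D"))   # True  (different flow, same pool)
    print(hop.send("flow-c", "D"))   # False (pool for D exhausted)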

tom