FW: Seeking info/collaborators re: 802.mumble strangeness

Stefan Savage (savage@cs.washington.edu)
Fri, 6 Nov 1998 08:59:52 -0800

-----Original Message-----
From: Matt Mathis [mailto:mathis@psc.edu]
Sent: Friday, November 06, 1998 7:55 AM
To: end2end-interest@ISI.EDU
Cc: mathis@psc.edu; lappa@psc.edu; huntoon@psc.edu; peterb@psc.edu
Subject: Seeking info/collaborators re: 802.mumble strangeness

We have recently observed a number of performance anomalies in a
couple of different 802.mumble LANs. Further poking around suggests
that these "features" are quite pervasive. I am concerned that they
are either bugs in certain widely distributed chip sets, or perhaps
in the 802 standards themselves.

I am seeking prior documentation, information on current efforts,
or people wishing to collaborate in investigating these behaviors.

Some observations:

Under some easily reproduced conditions, the available capacity through
10BaseT DROPS by about 10% if the window is 15 kBytes (10 packets)
larger than needed to just fill the link. Put another way, even under
lossless conditions the 10BaseT service rate drops by about 1% per
queued packet. At larger windows the service rate fluctuates rather
chaotically by about 10%. (This measurement is done with a non-TCP
diagnostic tool.)
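
To put rough numbers on that, here is a toy back-of-the-envelope model
of the observation. The ~1%-per-packet slope is just the measured
figure, not a claim about mechanism, and the script and its names are
purely illustrative:

    # Toy model of the observed 10BaseT behavior: service rate as a
    # function of packets queued beyond the window that just fills the
    # link. The ~1%/packet slope is the empirical figure above.

    LINE_RATE_MBPS = 10.0        # nominal 10BaseT rate
    LOSS_PER_QUEUED_PKT = 0.01   # observed ~1% per queued packet

    def observed_rate_mbps(excess_packets):
        """Service rate when the window exceeds the pipe by excess_packets."""
        return LINE_RATE_MBPS * (1.0 - LOSS_PER_QUEUED_PKT * excess_packets)

    for q in (0, 5, 10):
        print("%2d queued packets -> ~%.1f Mb/s" % (q, observed_rate_mbps(q)))
    # 10 queued packets (15 kBytes at 1500 B/packet) -> ~9.0 Mb/s,
    # i.e. the ~10% drop described above.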

My interpretation of this is that 10BaseT is batching packets
"for efficiency", but in fact the batched packets have more overhead
than unbatched packets.

In the lab where we have been testing SACK, we see persistent,
massive "ACK compression" clearly caused by the LAN itself. Even
though our FACK code works very hard to produce smooth, even
packet flows, all TCP flows appear as line-rate data bursts
separated by line-rate ACK bursts.
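
If anyone wants to look for the same signature in their own traces,
the check is simple. A minimal sketch (the function name, gap
threshold, and minimum run length are all invented for illustration),
given a sorted list of pure-ACK arrival timestamps parsed from a
tcpdump-style trace:

    # Sketch: find runs of ACKs arriving back-to-back at (roughly)
    # line rate instead of being spread across an RTT.

    def ack_burst_runs(ack_times, line_gap=0.0002, min_run=4):
        """Return runs of ACKs whose inter-arrival gap is <= line_gap.

        ack_times: sorted arrival timestamps (seconds) of pure ACKs.
        line_gap:  max gap (s) to count as back-to-back (illustrative).
        min_run:   minimum burst length worth reporting.
        """
        runs, start = [], 0
        for i in range(1, len(ack_times)):
            if ack_times[i] - ack_times[i - 1] > line_gap:
                if i - start >= min_run:
                    runs.append(ack_times[start:i])
                start = i
        if len(ack_times) - start >= min_run:
            runs.append(ack_times[start:])
        return runs

A flow pacing its ACKs over the RTT yields no runs; what we see in the
lab would come back as a few long runs per window.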

In another situation, where a wide-area OC-3 ATM path is connected via
a 100BaseT LAN, we see what appears to be a conflict between the
burstiness introduced by the 100BaseT and the ATM rate shaper.
Unfortunately, the interesting part of this path is remote and we
cannot easily instrument it.
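
Since we cannot instrument the remote end, the best I can offer is a
toy token-bucket model of the conflict we suspect: the same average
load, delivered smoothly versus in 100BaseT-style line-rate bursts,
against a shaper sized for the mean rate. All parameters here are
invented for illustration; nothing below is measured:

    # Toy token-bucket shaper. Same mean offered load, smooth vs.
    # bursty arrivals; the shaper is provisioned for the mean rate.

    def shaper_drops(arrivals, rate_pps=1000.0, bucket=5.0):
        """Count packets a token bucket would drop or mark.

        arrivals: sorted packet arrival timestamps (seconds).
        rate_pps: token refill rate (packets/second).
        bucket:   bucket depth (packets).
        """
        tokens, last, drops = bucket, 0.0, 0
        for t in arrivals:
            tokens = min(bucket, tokens + (t - last) * rate_pps)
            last = t
            if tokens >= 1.0:
                tokens -= 1.0
            else:
                drops += 1
        return drops

    smooth = [i / 1000.0 for i in range(1000)]       # evenly paced, 1000 pps
    bursty = [(i // 20) * 0.02 + (i % 20) * 0.0001   # same mean rate, but in
              for i in range(1000)]                  # 20-packet line-rate bursts
    print(shaper_drops(smooth), "drops when smooth")
    print(shaper_drops(bursty), "drops when bursty")

The smooth flow passes untouched; the bursty one loses a large
fraction of each burst, which is qualitatively what the path seems
to be doing to us.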

My fear is that these are all artifacts of some
"son-of-capture-effect" present in newer 802.mumble technologies. I
am particularly worried that the negative control gain (longer queues
yield lower throughput, even without loss) will induce casual
transport implementors to tune their protocols for false optima, at
the expense of the Internet as a whole.

Joe Lappa and I are interested in working on this problem if it is
still unexplored. It would be best if we could find a collaborator who
is familiar with 100BaseT internals.

Has anybody run into this before? Any takers for the collaboration?

Thanks,
--MM--