fyi

Tom Anderson (tom@emigrant)
Fri, 11 Sep 1998 14:03:05 -0700 (PDT)

From: Ed Lazowska <lazowska@cs.washington.edu>
To: John Zahorjan <zahorjan@cs.washington.edu>,
    Tom Anderson <tom@cs.washington.edu>
Subject: FW: network research problem
Date: Thu, 10 Sep 1998 22:21:24 -0700
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.5.1960.3)
Content-Type: text/plain
Status: R

From Terry Gray

-----Original Message-----
From: Terry Gray [mailto:gray@cac.washington.edu]
Sent: Thursday, September 10, 1998 10:14 PM
To: Ed Lazowska
Cc: Steve Corbato
Subject: network research problem

Ed,
Steve mentioned that John has some money burning a hole in his pocket
and you plan to deploy some GE gear.

This would put you in a position to investigate a problem that I think
is fairly interesting, maybe even important, since it would inform a
current IETF debate about a fundamental property of IP. It concerns
whether or not there is a need for explicit flow control mechanisms in
IP. (I actually had a conversation with Jon Postel and others about
this last week at SIGCOMM.)

Problem:

While there is network-level flow control inherent in TCP, there is
none for UDP-based protocols. Streaming media servers use UDP, and
often serve low-speed clients. Some streaming protocols have some rate
control provisions; others depend on the user to choose the correct
speed for his/her client. But in the general case, there seems to be a
serious risk of buffer overrun, and consequent packet loss, when a
streaming server is attached to a very high-speed network connection.

Research question: is this a real or hypothetical problem?
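The suspected failure mode can be reproduced in miniature on a single
host before anyone touches the GE gear. The sketch below (my own
illustration, not part of Terry's proposal; buffer and packet sizes
are arbitrary) blasts UDP datagrams over loopback into a socket with a
deliberately tiny receive buffer. With no flow control, datagrams that
arrive while the buffer is full are silently dropped, which is exactly
the fast-server/slow-client effect described above.

```python
import socket

# Hypothetical loopback demo of UDP's lack of flow control.
# The receiver's kernel buffer is made deliberately small; the sender
# transmits far more data than the buffer can hold before the receiver
# ever reads, so most datagrams are dropped without any error signal.

N_SENT = 2000
PAYLOAD = b"x" * 1024

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)  # tiny buffer
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(0.5)
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = 0
for _ in range(N_SENT):
    try:
        send_sock.sendto(PAYLOAD, addr)
        sent += 1
    except OSError:
        pass  # some BSD-derived stacks report ENOBUFS instead of dropping

# Drain whatever survived in the receive buffer.
received = 0
try:
    while True:
        recv_sock.recv(2048)
        received += 1
except socket.timeout:
    pass

print(f"sent {sent}, received {received}, dropped {sent - received}")
```

On a real network the drops would happen in the client's NIC and
kernel queues rather than a loopback buffer, but the mechanism (and
the silence of the loss) is the same.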

Measurements:

o Packet loss levels in various configurations with different
  application-level UDP-based streaming protocols. Impact of multiple
  streams.
o Comparison with servers on GE vs. Fast Enet vs. old Enet when
  serving slower/much slower clients.

Issues:

o How big a problem is this?
o How effective are application-level flow-control mechanisms?
o Does 802.3x (Ethernet PAUSE-frame) flow control help or hurt?
o Is putting streaming servers on GE interfaces a good or bad idea?
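For the second issue above, one common shape of application-level flow
control is sender-side pacing. The sketch below (my illustration, not
any particular streaming product's mechanism; the rates are arbitrary)
is a token bucket that limits a server's send rate to a target
bitrate, so a host on a fast interface does not overrun a slow client
regardless of link speed.

```python
import time

# Minimal token-bucket pacer: tokens accrue at `rate` bytes/second up
# to a `burst` ceiling; each send must first consume tokens equal to
# its size, sleeping when the bucket is empty.

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def consume(self, n):
        """Block until n bytes' worth of tokens are available, then take them."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= n:
                self.tokens -= n
                return
            time.sleep((n - self.tokens) / self.rate)

# Pace 100 packets of 1 KB at 200 KB/s with a 20 KB burst allowance:
# the first 20 KB go out immediately, the remaining 80 KB take ~0.4 s.
bucket = TokenBucket(rate_bytes_per_s=200_000, burst_bytes=20_000)
start = time.monotonic()
for _ in range(100):
    bucket.consume(1_000)   # in a real server: sock.sendto(payload, client)
elapsed = time.monotonic() - start
print(f"paced 100 KB in {elapsed:.2f} s")
```

Whether pacing like this is actually deployed and effective in the
streaming protocols of interest is precisely the measurement question.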

-teg