My Le
Perspective from a high-speed network development project, working with network operators, and research at Berkeley
High quality presentations!
People always want: Save the world and get your own apartment
start with the first as a goal, get the second as a byproduct
Realize we are competing with hungry Internet startups
worry about reaching a moving target
Ex: It took 12 months to get layer 3 switching in hardware
The big challenge for Cisco
Moore's Law for hardware
Software lag -- software engineers don't get twice as smart or
twice as productive every 18 months.
Maybe WDM is path out of the quagmire (by simplifying to
application-specific virtual networks?)
Make sure to talk to hardware people
stay up with hardware
don't tie your research to the current Internet
Greg Minshall
Overall: some terrible ideas and some great ideas.
Anecdote: backbones are 90% loaded and no packet loss
=> strong economic incentive to full loading. Is there
room for improvement there?
Where is the bottleneck?
Backbone? probably not.
Customer side (tail circuit)? This is where we're planning to deploy.
NAP (PacBell)
10-minute averages may be too long to get good feedback;
but 10-minute averages may be as aggressive as you can be.
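A toy sketch of the averaging problem (the numbers below are made up, not from the talk): a link that saturates in bursts can still look lightly loaded in a 10-minute average.

    # Toy illustration: long averaging windows hide short-term saturation.
    # Per-second utilization samples are synthetic.
    per_second = [1.0] * 60 + [0.05] * 540   # 1 min saturated, 9 min nearly idle

    ten_min_avg = sum(per_second) / len(per_second)
    print(f"10-minute average: {ten_min_avg:.0%}")      # ~14%
    print(f"peak 1-second:     {max(per_second):.0%}")  # 100%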
Is our goal efficiency of the network vs. speeding up the clients?
Don't get painted on the wrong side of this question; hard to get
your credibility back. Case study of Vegas.
Short flows have problems vs. they cause problems
How do you protect network from short flows?
Active Naming
what is the long-term benefit? Aesthetics?
General insight: what makes the Internet hard is that
you can't trust your intuition: intuition about small scale
doesn't give the right result for the Internet.
Look at correlations between DNS and web requests in web tracing study
Web cache design principle: good web cache would traverse a given
link at most once.
Principle: things that can be done at the end node, and for which
the client has incentive to do, should be done at the client
Ex: use SYN experience at client across connections
Want to use experience across clients too!
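A minimal sketch of the idea (the cache layout and defaults are my assumptions, in the spirit of TCP control-block sharing):

    # Sketch: cache per-destination TCP experience so new connections
    # start from measured values instead of defaults.
    from dataclasses import dataclass

    @dataclass
    class PathExperience:
        srtt_ms: float        # smoothed round-trip time
        cwnd_segments: int    # last known-good congestion window

    experience: dict[str, PathExperience] = {}

    def on_close(dst: str, srtt_ms: float, cwnd: int) -> None:
        """Record what a finished connection learned about the path."""
        experience[dst] = PathExperience(srtt_ms, cwnd)

    def initial_params(dst: str) -> PathExperience:
        """Seed a new connection from experience, else conservative defaults."""
        return experience.get(dst, PathExperience(srtt_ms=3000.0, cwnd_segments=1))

Sharing the table across clients would just move this dictionary into a shared proxy or LAN service.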
Nitin's talk: build client model first.
Detour
TCP: don't go there!
Routing: will have overlay networks, but won't get active routing
in interior switches
Peter Newman
Routers in hardware
Can't get into the data path now, and won't in the future;
entry/exit to the network is OK for transmog and routing.
Complexity of simulation scares him: simplify so you can understand it
(and so it runs fast!)
How complex will the stuff in switches be?
QoS at least
Hardware is not immutable => routers will have reprogrammable logic,
particularly at lower speeds
Mike Dahlin
Right type of project: ambitious
Interesting results and ideas for what to do
Lots of discussion: this is a good sign; want to stimulate arguments
Technical:
1. Are we designing tomorrow's network with today's workload?
Will connections be short?
How do we handle audio and video?
How will routing change: peering happens (or doesn't) for a reason
DSL/ADSL will happen at client end; big national backbones
If so, what will break in 3-5 years?
what will Internet look like in 5 years?
2. Experimental methodology
Simulation and deployment -- will need to focus on this
It will be hard, so pay attention to it.
Convincing people will be tricky
For simulation: realism vs. simplicity
Need to build it, understand it and explain it
So don't be too real -- will need to model at some level,
so explicitly pick which places to model
3. Is TCP the right thing to work on?
TCP tricks vs. what are you really trying to accomplish
Enunciate principles (e.g., aggregation, etc.)
4. Pay attention to higher level
Transmog vs. web caching vs. active caching? All at the same time?
Stay focused and stay broad!
David Wetherall
Lots of cool stuff; looking forward to coming back
Low level TCP
Two categories:
sharing info (good)
proxies to accommodate deployment (don't go there)
Deployment will happen
Great measurements, but is detour routing the right solution in
the long run? Focus on the long term.
Run it for real, but try to avoid OS hacking
What minimal OS support do we need for doing this?
Steve Corbato
How to do routing in Gigapop
Routing scale:
Campus: 1K routes
Campus-Gigapop: 1-50K routes
Gigapop: 50K routes
NSPs: 1K routes
Bulge at the Gigapop. Will be going to BGP-4/interior BGP.
A real problem: enforcing policy in the gigapop router.
Reality check
Internet has autonomous systems!
Everyone doing local optimization, for cost, performance, routing,
security
Are we gaming the gamers? 5000 operators gaming the system.
We use local preference and AS-path prepending to fake out router path selection
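A sketch of the two BGP tie-breaking steps these knobs manipulate (route fields simplified; the AS numbers are private-use placeholders):

    # Higher local-pref wins; among equals, the shorter AS path wins,
    # which is why prepending "fakes out" path selection.
    routes = [
        {"via": "provider_A", "local_pref": 100, "as_path": [65001, 65002]},
        {"via": "provider_B", "local_pref": 100,
         "as_path": [65003, 65003, 65003, 65002]},  # prepended twice
    ]

    best = max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))
    print(best["via"])  # provider_A: B pushed traffic away by prepending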
What's happening: RED deployment, MPLS, NSPs think incrementally
Assume asymmetry
Don't assume net/host mgt is done by the same person
non-congestion controlled apps
BW won't be precious in the future (but can routers keep up?)
Traffic doubles every 2 years
Will pick up after Abilene connection
Key driver will be video
Commodity Internet doubles every 6 months
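Compounding the two doubling times quoted above over five years shows how far apart they are:

    # Doubling every 2 years vs. doubling every 6 months, over 5 years.
    years = 5
    print(2 ** (years / 2.0))   # ~5.7x
    print(2 ** (years / 0.5))   # 1024x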
UUNet paper on using MPLS to optimize for managing hot links
In the limit: will need one operator per entity.
Detour: as a routing protocol vs. a great tool for the multihomed customer
How do you choose path?
Real time data collection
Only limitation: in a deployed network, you need route dampening
to avoid dynamic feedback and route flap.
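A toy sketch of that path choice (latencies and the switching threshold are made up):

    # Pick the direct path unless a one-hop detour is a clear win;
    # the margin is a crude stand-in for route dampening.
    latency_ms = {("src", "dst"): 180.0,   # direct path
                  ("src", "hop"): 40.0,
                  ("hop", "dst"): 60.0}

    direct = latency_ms[("src", "dst")]
    via_hop = latency_ms[("src", "hop")] + latency_ms[("hop", "dst")]
    path = ["src", "hop", "dst"] if via_hop < 0.8 * direct else ["src", "dst"]
    print(path)  # detour wins here: 100 ms vs. 180 ms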
Bill Bolosky
Great area; try to understand the Internet
But it's a hard problem; there's a reason why it isn't solved.
Recycle feedback from NOW project.
Research project:
unlikely to build the software that runs the world
accept it, it's ok.
But still, try off the wall stuff and if there's a benefit,
it will get deployed.
Example: you've shown routes are way worse than anyone expected.
But don't deploy and try to route the Internet.
What are you trying to accomplish?
Our solution isn't the final one, so we can't really use our alternative.
We should try to figure out the way it should be.
So could use Detour just to demonstrate measurements are right, and
so you can explore operational issues.
Advice is: use the real network.
TCP congestion is fascinating subject: increasingly important
Two trends:
1. Non-rate controlled traffic (nothing stopping it but thin endpoints)
2. Advent of DSL to home. Will cause it to fail. Do we have
a solution? and can we have it ready in time?
Our ideas for congestion control are hacks and not good ones,
e.g., lying at the server.
Aggregating info is good, but bite the bullet: assume servers
are modified so they don't use slow start
Do the arithmetic on DSL and video
There are apps that will use multigigabits
ex: Tiger video server
Designed for constant bit rate to home, selling to corporations
Market study: video on demand will go into the house if the bandwidth
is cheap; the service is price sensitive.
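One way to do that arithmetic (the bit rates below are my own ballpark assumptions, not figures from the talk):

    # How many video streams fit in an ADSL line?
    dsl_downstream_mbps = 1.5    # assumed typical ADSL downstream
    mpeg1_mbps = 1.5             # VHS-quality video
    mpeg2_mbps = 4.0             # broadcast-quality video

    print(dsl_downstream_mbps / mpeg1_mbps)  # 1.0: one stream, nothing left over
    print(dsl_downstream_mbps / mpeg2_mbps)  # 0.375: doesn't fit at all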
Simulation:
try to solve analytically;
get lots of memory, it's cheap
Detour simulator -- even harder
What about the 7 orders of magnitude (10M times more hosts/routers
than we can simulate on one machine)?
But: You aren't interested in the whole Internet, but in a
one page summary of your results for a paper.
Simulate one router and model the rest of the traffic in its
effect on that router (e.g., does a router in New Jersey
affect one at UW?)
Simulate part of the network, not the whole network
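A sketch of that style of simulation (all parameters are arbitrary illustrations): one queue, with everything upstream collapsed into a single background-traffic model.

    # Simulate one router; model the rest of the Internet as an
    # aggregate random arrival process.
    import random

    random.seed(1)
    capacity, buffer = 100, 500   # packets drained per tick; queue limit
    queue = drops = 0

    for tick in range(1000):
        foreground = 5                         # the traffic under study
        background = random.randint(70, 130)   # everyone else, as one model
        queue = max(0, queue + foreground + background - capacity)
        if queue > buffer:
            drops += queue - buffer
            queue = buffer

    print(f"drops over run: {drops}")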
Web cache: with 50% hit rates, it won't be useful.
Focus on why hit rates are so low: cold misses, uncacheable
accesses, dynamic content? Need to know the answer.
Making it go to 90% hit rates is interesting;
making it 60% hit rate is uninteresting.
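A sketch of how one might bucket the misses in a trace to answer that question (the trace format and predicates are assumptions):

    # Classify entries from a cache's miss log to see why hit rates are low.
    def classify_miss(req, seen_urls):
        if req["dynamic"]:                # e.g., query strings, cgi-bin
            return "uncacheable/dynamic"
        if req["url"] not in seen_urls:
            return "cold miss"
        return "capacity/consistency miss"

    miss_log = [{"url": "/a.html", "dynamic": False},
                {"url": "/cgi-bin/q?x=1", "dynamic": True},
                {"url": "/a.html", "dynamic": False}]

    seen = set()
    for req in miss_log:
        print(req["url"], "->", classify_miss(req, seen))
        seen.add(req["url"])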