RE: acks and pacing

Stefan Savage (savage@cs.washington.edu)
Thu, 24 Jun 1999 01:06:54 -0700

Correct; however, because TCP increments cwnd for each ACK received, an
ACK-every-n policy will make cwnd grow proportionately more slowly (you
don't want to do this over long paths).
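
To spell out why, here is a rough sketch with made-up names (not any
particular stack's code); the point is only that the increment happens
once per ACK received:

    /* Rough sketch, made-up names: the classic per-ACK cwnd update.
     * Because the increment is applied once per ACK *received*, a
     * receiver that ACKs only every n-th segment slows cwnd growth by
     * roughly a factor of n. */
    void tcp_cwnd_update_per_ack(unsigned int *cwnd,
                                 unsigned int ssthresh,
                                 unsigned int mss)
    {
        if (*cwnd < ssthresh)
            *cwnd += mss;                 /* slow start: +1 MSS per ACK   */
        else
            *cwnd += mss * mss / *cwnd;   /* cong. avoidance: ~+1 MSS/RTT */
    }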

As an alternative, a nice switch would be to increase cwnd according to
bytes acked (not ACKs received) and use pacing to smooth out the bytes
within that window. Then you just need to ensure that the ACK rate is
high enough to prevent the sender from going idle (e.g. at least one ACK
per window). You also presumably want some minimum threshold on the ACK
frequency so you have a guaranteed response time to network changes.
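
For concreteness, a sketch of what the byte-counting update might look
like (names and details are illustrative, not an existing
implementation):

    /* Sketch of the byte-counting alternative (hypothetical, simplified):
     * grow cwnd by the number of bytes newly acknowledged rather than by
     * a fixed amount per ACK, so one stretched ACK covering n segments
     * counts the same as n individual ACKs. */
    void tcp_cwnd_update_byte_counted(unsigned int *cwnd,
                                      unsigned int ssthresh,
                                      unsigned int mss,
                                      unsigned int bytes_acked)
    {
        if (*cwnd < ssthresh)
            *cwnd += bytes_acked;                 /* slow start           */
        else
            *cwnd += mss * bytes_acked / *cwnd;   /* congestion avoidance */
    }

The pacing half then spreads that cwnd's worth of data across the RTT
rather than emitting it in a burst each time a sparse ACK arrives.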

- Stefan

-----Original Message-----
From: Neal Cardwell [mailto:cardwell@cs.washington.edu]
Sent: Thursday, June 24, 1999 12:55 AM
To: syn@cs.washington.edu
Subject: acks and pacing

this is an interesting point related to ack pacing. at gigabit speeds,
just processing incoming acks can be a pain. you would like the
receiver to be able to respond with only a few acks per window (assuming
everything is arriving in order). a normal TCP sender would be very bursty
in this situation; a paced TCP would not be.
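
to make the burstiness point concrete, here's a simplified pacing
sketch (just the generic rate-pacing idea, not any particular
implementation):

    /* Sketch: a paced sender spreads cwnd worth of segments evenly over
     * an RTT instead of emitting a back-to-back burst each time a
     * (sparse) ACK opens the window.  Names are illustrative. */
    #include <stdint.h>

    /* microseconds to wait between consecutive segment transmissions */
    static uint64_t pacing_interval_us(uint64_t srtt_us,
                                       uint32_t cwnd_bytes,
                                       uint32_t mss)
    {
        uint32_t segs = cwnd_bytes / mss;
        if (segs == 0)
            segs = 1;
        return srtt_us / segs;
    }

e.g. with cwnd = 512 KB, mss = 1460 and a 10 ms rtt, that's roughly one
segment every 28 us instead of a ~350-segment burst each time a sparse
ack opens the window.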

neal

---------- Forwarded message ----------
Date: Wed, 23 Jun 1999 21:19:09 -0700 (PDT)
From: Joe Touch <touch@ISI.EDU>
To: sm@bossette.engr.sgi.com
Cc: minshall@siara.com, tcp-impl@grc.nasa.gov, touch@ISI.EDU
Subject: Re: Nagle -- again

> From: sm@bossette.engr.sgi.com (Sam Manthorpe)
> Subject: Re: Nagle -- again
> To: touch@ISI.EDU (Joe Touch)
> Date: Wed, 23 Jun 1999 20:57:58 -0700 (PDT)
> Cc: tcp-impl@grc.nasa.gov, minshall@siara.com
>
> Joe Touch wrote:
> >
> > One observation - the paper indicates that there _must_ be at least
> > one ACK every two packets. This is not strictly required, at least
> > last time I checked.
> >
>
> I've been meaning to bring that up for a while. The `ack every other
> segment' algorithm recommended in RFC-1122 is bad news for high bandwidth,
> low packet size media. In particular I'm thinking of gigabit Ethernet
> without jumbograms. When we get an ack for every 2 packets, i.e. one
> ack for every ~2920 bytes of application data, then if we want to
> obtain a TCP goodput of, say, 600 Mbits/sec on a single connection, our
> sending machine will be receiving approx. 26000 ACKs per second, which
> places a substantial load on the sending host's stack (interrupt
> processing, IP packet processing, TCP processing).

This is indeed a problem. However, it appears that a significant part
of the issue is the lack of a reasonably sized MTU, i.e., sticking
with non-jumbograms. Relying on ACK reduction alone won't fix that.
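
A quick back-of-the-envelope check of those numbers, assuming ~1460
bytes of payload per segment and ~8960 with jumbograms (the comparison
is only to show how much the MTU matters):

    /* Back-of-the-envelope check of the numbers above: 600 Mbit/s of
     * goodput, one ACK per two segments, standard vs. jumbo payloads. */
    #include <stdio.h>

    int main(void)
    {
        double goodput_Bps = 600e6 / 8;                   /* bytes/sec */
        double acks_std    = goodput_Bps / (2 * 1460.0);  /* ~25700    */
        double acks_jumbo  = goodput_Bps / (2 * 8960.0);  /* ~4200     */
        printf("standard frames: ~%.0f ACKs/s\n", acks_std);
        printf("jumbo frames:    ~%.0f ACKs/s\n", acks_jumbo);
        return 0;
    }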

The key issue is whether ACK clocking is important. It is implicit in
TCP's design, but it only works if the ACK frequency is high enough.
If ACKs are less frequent, the source becomes more bursty (it is
allowed to send K packets each time an ACK arrives).
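
Roughly speaking (a toy illustration with invented names, not real
stack code):

    /* Toy illustration, invented names: an ACK that acknowledges
     * bytes_acked worth of data lets the sender put roughly that much
     * (plus any cwnd growth) on the wire back to back, so the burst
     * size scales directly with the ACK spacing. */
    static unsigned int segments_released_by_ack(unsigned int bytes_acked,
                                                 unsigned int cwnd_growth,
                                                 unsigned int mss)
    {
        return (bytes_acked + cwnd_growth) / mss;
    }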

> (a) the SHOULD in RFC-1122 should remain a SHOULD and not become
> a MUST, IMO

It seems useful to require _either_ a minimum acceptable ACK frequency
or a separate clocking mechanism that spaces transmissions. This could
be something negotiated on a per-pair basis.
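
One way such a minimum might be expressed on the receiver side, purely
as a sketch (thresholds and names invented for illustration):

    /* Sketch of a receiver-side minimum-ACK-frequency rule: ACK when
     * either a byte threshold or a time threshold is crossed, whichever
     * comes first.  Thresholds and names are invented for illustration. */
    #include <stdbool.h>
    #include <stdint.h>

    struct ack_policy {
        uint32_t bytes_since_ack;   /* data received since the last ACK */
        uint64_t usec_since_ack;    /* time elapsed since the last ACK  */
        uint32_t byte_threshold;    /* e.g. some fraction of the window */
        uint64_t usec_threshold;    /* e.g. a delayed-ACK style timer   */
    };

    static bool should_send_ack(const struct ack_policy *p)
    {
        return p->bytes_since_ack >= p->byte_threshold ||
               p->usec_since_ack  >= p->usec_threshold;
    }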

> (b) I'm sure there is a clever algorithm to be found that decides
> when the optimal time to send an ACK is; anybody got any ideas?

Clever, _and_ runs as needed for a gigabit, if you please :-)

Joe