   workstations, then beyond sensitivity to interarrival times, the
   users will also be sensitive to end-to-end delay.  Consider the
   difference between conferencing over a satellite link and a
   terrestrial link.  Furthermore, for the data to be able to arrive in
   time, there must be sufficient bandwidth.  Bandwidth requirements are
   particularly important for video: HDTV, even after compression,
   currently requires bandwidth in excess of 100 Mbits/second.

   Because multimedia applications are sensitive to jitter, bandwidth
   and delay, it has been suggested that the networks that carry
   multimedia traffic must be able to allocate and control jitter,
   bandwidth and delay [1,2].

   This memo argues that a network which simply controls bandwidth and
   delay is sufficient to support networked multimedia applications.
   Jitter control is not required.

Isochrony without Jitter Control

   The key argument of this memo is that an isochronous service can be
   provided by simply bounding the maximum delay through the network.

   To prove this argument, consider the following scenario.

   The network is able to bound the maximum transit delay on a channel
   between sender and receiver and at least the receiver knows what the
   bound is.  (These assumptions come directly from our assertion that
   the network can bound delay).  The term "channel" is used to mean
   some amount of bandwidth delivered over some path between sender and
   receiver.
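   As an illustrative sketch (not part of this memo's text), the
   channel abstraction just described can be written down as a small C
   structure; the field names are assumptions chosen for illustration:

      /* The assumed channel abstraction: some amount of bandwidth
       * delivered over some path, with a known bound on the maximum
       * transit delay between sender and receiver. */
      typedef struct {
          double bandwidth;   /* channel bandwidth, bytes/second */
          double max_delay;   /* bound on transit delay, seconds */
      } channel;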

   Now imagine an operating system in which applications can be
   scheduled to be active at regular intervals. Further assume that the
   receiving application has buffer space equal to the channel bandwidth
   times the maximum interarrival variance.  (Observe that the maximum
   interarrival variance is always known - in the worst case, the
   receiver can assume the maximum variance equals the maximum delay).
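   To make the buffer-sizing rule concrete, here is a minimal C sketch
   (the bandwidth and delay figures are assumptions, not taken from
   this memo): the receiver's buffer is the channel bandwidth times
   the maximum interarrival variance, with the maximum delay standing
   in as the worst-case variance:

      #include <stdio.h>

      int main(void)
      {
          double bandwidth    = 1.0e6;      /* assumed: 1 Mbyte/sec */
          double max_delay    = 0.250;      /* assumed bound, sec   */
          double max_variance = max_delay;  /* worst case: variance
                                               equals maximum delay */

          /* Buffer space = bandwidth * max interarrival variance.  */
          double buffer_bytes = bandwidth * max_variance;
          printf("receiver buffer: %.0f bytes\n", buffer_bytes);
          return 0;
      }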

   Now consider a situation in which the sender of the isochronous data
   timestamps each piece of data when it is generated, using a universal
   time source, and then sends the data to the receiver.  The receiver
   reads a piece of data in as soon as it is received and places the
   timestamped data into its buffer space.  The receiver processes each
   piece of data only at the time equal to the data's timestamp plus the
   maximum transit delay.
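   The playout rule can be sketched in C as follows (an illustration
   under assumed names, not a specification from this memo): each
   arriving piece of data is buffered, and is processed only when the
   receiver's clock reaches the data's timestamp plus the maximum
   transit delay, so every piece is processed at the same fixed offset
   from its generation time:

      /* A piece of data stamped at generation time using a universal
       * time source shared by sender and receiver. */
      typedef struct {
          double timestamp;   /* generation time, seconds */
          /* ... payload ... */
      } datum;

      /* The absolute time at which the receiver processes a buffered
       * datum: its timestamp plus the known bound on transit delay.
       * Processing at this fixed offset makes the output stream
       * isochronous regardless of jitter inside the bound. */
      double playout_time(const datum *d, double max_transit_delay)
      {
          return d->timestamp + max_transit_delay;
      }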

   I argue that the receiver is processing data isochronously and thus
   we have shown that a network need not be isochronous to support


