Network Working Group                                           G. Hicks
Request for Comments: 392                                     B. Wessler
NIC: 11584                                                          Utah
                                                       20 September 1972


        Measurement of Host Costs for Transmitting Network Data

Background for the UTAH Timing Experiments

   Since October 1971 we, at the University of Utah, have had very large
   compute-bound jobs running daily.  These jobs would run for many CPU
   hours to achieve partial results and used resources that might be
   better obtained elsewhere.  We felt that since these processes were
   being treated as batch jobs, they should be run on a batch machine.

   To meet the needs of these "batch" users, in March of this year, we
   developed a program[1] to use the Remote Job Service System (RJS) at
   UCLA-CCN.  RJS at UCLA is run on an IBM 360/91.

   Some examples of these jobs were (and still are!):

      (a) Algebraic simplification (using LISP and REDUCE)

      (b) Applications of partial differential equation solving

      (c) Waveform processing (both audio and video)

   The characteristics of the jobs run on the 91 were small data decks
   being submitted to RJS and massive print files being retrieved, with
   one exception: the waveform processing group needed, from time to
   time, to store large data files at UCLA for later processing.  When
   this group did its processing, it retrieved very large punch files
   that were later displayed or listened to here.

   When the program became operational in late March -- and started
   being used as a matter of course -- users complained that the program
   page faulted frequently.  We restructured the program so that the
   parts that were often used did not cross page boundaries.

   The protocol with RJS at UCLA requires that all programs and data to
   be transmitted on the data connection be blocked[2].  This means that
   we simulate records and blocks with special headers.  This proved to
   be another problem because of the computation and core space
   involved.  The computation took an appreciable amount of time, and we
   found that, given our real core size, we were being charged an
   excessive amount due to page faulting.  The page faulting also
   reduced our real-time transmission rate to the extent that we
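   The record/block simulation described above can be illustrated with a
   minimal sketch.  The header layout here is hypothetical, chosen only
   to show the idea of framing records into blocks; the actual RJS
   blocking format is defined in [2] and differs in detail.

```python
import struct

def frame_records(records):
    """Wrap each record in a 2-byte big-endian length header and
    concatenate them into one block for the data connection.
    (Hypothetical layout, not the actual RJS format.)"""
    body = b"".join(struct.pack(">H", len(r)) + r for r in records)
    # Prefix the block with a count of the records it carries.
    return struct.pack(">H", len(records)) + body

def unframe_block(block):
    """Recover the original records from a framed block."""
    (count,) = struct.unpack_from(">H", block, 0)
    offset, records = 2, []
    for _ in range(count):
        (length,) = struct.unpack_from(">H", block, offset)
        offset += 2
        records.append(block[offset:offset + length])
        offset += length
    return records
```

   Per-record headers like these are what cost the sending host extra
   computation and core space: every record must be copied and prefixed
   before transmission, and parsed again on receipt.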