
Conference: noted::atm

Title: atm
Moderator: NPSS::WATERS
Created: Mon Oct 05 1992
Last Modified: Thu Jun 05 1997
Last Successful Update: Fri Jun 06 1997
Number of topics: 970
Total number of notes: 3630

817.0. "ATMworks 350 Questions" by NETRIX::"[email protected]" (Youda Kopel) Tue Feb 04 1997 21:57

Hi there guys,

Can someone please comment on this...

Current literature indicates that the ATM 350 adapter card is restricted to
one adapter per Digital Unix system.

o Is this still the case under Digital Unix V4.0?

o Is this restriction consistent under other environments, e.g. NT?

o Throughput is a function of the protocol stacks, I/O bus, etc.  What is the
maximum sustained throughput under Digital Unix, assuming we were looking at
a 4100-class system?  Documentation mentions bandwidth of the order of
130 Mbit/s.

o We have a GIS application which is looking to have 24 workstations served
by a single Unix host using a hierarchical storage approach.  On-line storage
will be of the order of 300 GB.  Under ATM, how is performance affected on
the host by having this quantity of clients switched?  Does the multiplexing
scheme mean that it is degraded proportionally to the number of clients?

Many thanks in advance,

Youda Kopel,
NPB Melb.

[Posted by WWW Notes gateway]
817.1. by SMURF::MENNER (it's just a box of Pax..) Wed Feb 05 1997 07:53
>>Current literature indicates that the ATM 350 adapter card is restricted to
>>one adapter per Digital Unix system.
>>
>>o Is this still the case under Digital Unix V4.0
    
    No, the number of adapters varies with the hardware platform; I
    believe four ATMworks 350s have been qualified on rawhides (AS4100).

>>o Throughput is a function of the protocol stacks, I/O bus etc. What is the
>>maximum sustained throughput under Digital Unix assuming we were looking at a
>>4100 class system; documentation mentions bandwidth of the order of 130Mbit/s
    
    Since the driver is still funneled, MP systems may have a harder time
    achieving OC-3 line speed (~134 Mb/s after discounting protocol/ATM
    cell overhead).  Also, it's unlikely that multiple cards will each
    achieve 134 Mb/s.
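
    For reference, here is roughly where that ~134 Mb/s figure comes from.
    The arithmetic below is my own back-of-the-envelope sketch (standard
    OC-3/ATM numbers, nothing product-specific):

    /* OC-3 payload arithmetic -- a rough sketch, not official figures. */
    #include <stdio.h>

    int main(void)
    {
        double line_rate = 155.52;                  /* OC-3 line rate, Mb/s      */
        double spe_rate  = 149.76;                  /* left after SONET overhead */
        double cell_rate = spe_rate * 48.0 / 53.0;  /* 48 of every 53 cell bytes */

        printf("ATM cell payload: %.1f Mb/s\n", cell_rate);  /* ~135.6 Mb/s */
        /* AAL5 trailer/padding plus LLC/SNAP, IP and TCP headers eat a
         * little more, which gets you to the ~134 Mb/s quoted above.    */
        return 0;
    }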

>>o We have a GIS application which is looking to have 24 workstations served
>>by a single Unix host using a hierarchical storage approach. On-line storage
>>will be of the order of 300 GB. Under ATM how is performance affected on
>>the host by having this quantity of clients switched; does the multiplexing
>>scheme mean that it is degraded proportionally to the number of clients.
    
    If you are asking whether each client will see maximum throughput, the
    answer is a qualified no.  The qualification is that ATM links are full
    duplex.  A rawhide should be able to support ~120 Mb/s in each direction
    as long as large MTUs (~9K) are used.  Also, Classical IP is slightly
    more efficient than LANE.  Note that this question is like asking
    whether 24 clients on an Ethernet can all transmit/receive data at
    10 Mb/s.  See the rough per-client numbers below.
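
    To put rough numbers on that: a back-of-the-envelope split of the host's
    bandwidth across clients (my own illustration; it assumes all clients are
    pulling data at the same instant, which is the worst case):

    /* Worst-case per-client share of ~120 Mb/s in one direction.
     * Illustration only; real traffic is bursty, so idle clients hand
     * their share back to the active ones.                            */
    #include <stdio.h>

    int main(void)
    {
        double host_rate = 120.0;               /* usable Mb/s per direction */
        int    active[]  = { 1, 4, 8, 16, 24 };
        int    i;

        for (i = 0; i < 5; i++)
            printf("%2d active clients -> ~%5.1f Mb/s each\n",
                   active[i], host_rate / active[i]);
        return 0;
    }
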
817.2. "Re: ATMworks 350 Questions" by QUABBI::"[email protected]" (Hal Murray) Thu Feb 06 1997 00:52
The limiting factor will probably be the CPU and memory system.
Don't worry about the IO bus unless you are really pushing the
system.

The ATMworks 350 is full duplex.  It's pretty easy to run TCP
or UDP test programs at >130 megabits one way.  That's the fiber
limit (after correcting for overhead) so you shouldn't expect to
go much faster.  Maybe 134 on a good day.  I've tried several times
but never managed to get that speed running in both directions at
the same time; I can only get to 250 or 270 megabits combined.

The ATMworks 350 can easily run full speed in both directions
if you are using big packets on hardware test programs.  I often
test multiple boards in a system.  The PCI bus usually saturates
after 2 or 3 boards.  (Each board puts 270 megabits of load on
the bus.)  We had 6 boards running at very close to full speed
during some early testing on a TURBOlaser.  (That took some work
and it was a very busy system.)

With smaller packets, the board runs out of microcode cycles.
You will probably run out of CPU cycles first.


I have a lot of graphs from running TCP and UDP tests.  For
a given hardware setup, the major parameters are the MTU and
the size of the block passed to send/recv.  Running >130 megabits
using 9K MTU and ~50K blocks will take somewhere between 20% and
100% of the CPU/memory.
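
In case it helps to see the shape of such a test, here is a stripped-down
bulk TCP sender of the sort described above.  It is only a sketch, not the
actual test program: the address, port and byte count are placeholders, and
error handling is minimal.  (The 9K MTU is an interface setting, not
something the program controls.)

  /* Minimal bulk TCP sender -- a sketch, not the real benchmark. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>

  #define BLOCK (48 * 1024)        /* ~50K block handed to send() each call */

  int main(void)
  {
      char               *buf = malloc(BLOCK);
      struct sockaddr_in  sin;
      int                 s, i;

      memset(&sin, 0, sizeof(sin));
      sin.sin_family      = AF_INET;
      sin.sin_port        = htons(5001);           /* placeholder port    */
      sin.sin_addr.s_addr = inet_addr("10.0.0.1"); /* placeholder address */

      s = socket(AF_INET, SOCK_STREAM, 0);
      if (s < 0 || connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
          perror("connect");
          return 1;
      }
      for (i = 0; i < 10000; i++)            /* push ~470 MB, then stop */
          if (send(s, buf, BLOCK, 0) < 0) {
              perror("send");
              break;
          }
      close(s);
      return 0;
  }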

The older/slower machines use more CPU at a given speed.

Dropping the MTU to 1.5K will more than double the CPU usage.
At that size, 3000-900s will only get ~100 megabits.

You also need the TCP window to be large enough.  The default
of 32K only runs at 100 megabits on a pair of 3000-900s.
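
If you need a bigger window than the 32K default, the usual way is to bump
the socket buffers on both ends before connecting.  A minimal sketch (the
256K figure is just an example; pick something at least as large as
bandwidth * round-trip time, and note the kernel may cap how big you can go):

  /* Enlarge the TCP window by growing the socket buffers -- a sketch. */
  #include <sys/types.h>
  #include <sys/socket.h>

  int set_big_buffers(int s)
  {
      int size = 256 * 1024;             /* example; >= bandwidth * RTT */

      if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, (char *)&size, sizeof(size)) < 0)
          return -1;
      if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&size, sizeof(size)) < 0)
          return -1;
      return 0;          /* call before connect()/listen() on both ends */
  }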

SMP systems will use more CPU at a given speed.  (The lock code
gets patched out of the kernel at startup if there is only 1 CPU.)
I don't have any clean SMP data so I can't give you any numbers.

Those are benchmark programs - the things that lie.  Real
programs will need some CPU for the application.  That will
probably impact cache usage so things may be worse than you
would predict if you just added up the % needed for TCP and 
the % needed by the application.


Just to make sure it doesn't fall through the cracks...

If your application is transferring small chunks of data, it will
be hard to get full link speed over ATM.  "Small" in this
context is (very roughly) anything under 2K.



My ATM web pages start at:
  http://src-www.pa.dec.com/~murray/an2-perf/perf.html

The TCP/UDP graphs for various machines start on:
  http://src-www.pa.dec.com/~murray/an2-perf/tcpudp-batch.html
Poke around starting from the first page to get a description of what these
graphs are testing.
[posted by Notes-News gateway]