
Conference 7.286::fddi

Title:FDDI - The Next Generation
Moderator:NETCAD::STEFANI
Created:Thu Apr 27 1989
Last Modified:Thu Jun 05 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2259
Total number of notes:8590

2018.0. "CPU sizing for the DEFPA" by SCASS1::DAVIES (Mark, NPB Sales, Dallas,TX) Fri Apr 19 1996 09:59

    My customer is looking for a rule of thumb to use when sizing his Alpha
    processors which are using FDDI via the DEFPA adapter.  He wants to get
    full performance from the adapter (90Mbps+), but also needs some CPU
    power for other tasks.
    
    He had asked if there was a number of "specint's" that could be
    applied, eg, it takes 85 SpecInt's to drive the adapter at 90Mbps.
    
    Any help is appreciated.
    
    Thanks,
    
    Mark
    
2018.1NETCAD::STEFANIFri Apr 19 1996 14:3916
    >>                     -< CPU sizing for the DEFPA >-
    
    Hmmm...well, we used the Bricks demo at Networks Expo two years ago to
    show two DEFPA's in Avanti's (or were they Mustangs?) doing ~90Mbps
    point to point on Digital UNIX.  Each system only had one processor.
    
    I don't know of any rule of thumb that maps a number of CPUs to X
    Mbits/sec of throughput.  Depending on the OS in question (UNIX,
    OpenVMS, or NT) you're likely to find someone with an opinion on
    what you should have.
    
    Remember, there are a LOT of factors in maximizing network throughput. 
    Simply throwing CPU's into a system does not guarantee that you'll see
    any better network throughput.
    
    /l
2018.2what's a MIP?BEET::EAGANAmong the fashion impairedTue Apr 23 1996 00:2614
    Mark (.0) was trying to help me out on this one.
    
    The customer will be using NT.  Specific platform isn't known at this
    point.  DEFPA-UA is the adapter in question.  We were hoping to
    find some benchmarks that show CPU utilization when moving
    data via CDDI.
    
    Yes, there are many variables (UDP vs. TCP, data rates, payload
    sizes, etc.).  The customer wants to know how much of his
    horsepower he'll be losing driving the card.
    
    His application involves moving images in and performing some
    manipulations that require a tremendous amount of SPECints.  We'd
    like to help him size.
2018.3hmmmmmmBEET::EAGANAmong the fashion impairedTue Apr 23 1996 00:296
    I was not being serious about "What's a MIP".  I know what it is.
    It just doesn't make sense here (does it?).
    
    Let's say we have a 250 MHz 2100 system.  That's 1,000 MIPs, right?
    So.... are you saying that to drive a DEFPA at 50 Mbps takes only
    5% of the CPU?  I don't think so!  See what I mean?
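[The naive "MIPS budget" arithmetic being questioned here can be written out.  Every figure below is an illustrative assumption, not a measurement; the point of .3 is precisely that this kind of linear estimate does not hold in practice.]

```python
# Naive "MIPS budget" estimate, as questioned in .3.
# All figures are illustrative assumptions, not measurements.
mips_total = 1000          # e.g. a 250 MHz AlphaServer 2100, very roughly
link_mbps = 50             # target FDDI throughput

# Suppose driving the adapter cost a flat 1 MIPS per Mbps (pure assumption):
mips_per_mbps = 1
cpu_fraction = (link_mbps * mips_per_mbps) / mips_total
print(f"{cpu_fraction:.0%}")   # -> 5%, which experience says is far too optimistic
```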
2018.4NETCAD::STEFANITue Apr 23 1996 11:0311
    There's no way that I can answer your questions.  Your best bet is to
    get a DEFPA into your customer's hands and have them judge for
    themselves.
    
    Understand that Microsoft admits to performance bottlenecks in NT. 
    That's something that they're trying to address over time.  Note, the
    same Alpha system with the same options (including DEFPA) can get close
    to FDDI line speed running Digital UNIX, but gets less running Windows
    NT.
    
    /l
2018.5one reference pointNETCAD::ROLKETune in, turn on, fail overTue Apr 23 1996 11:0944
There was a review in Network Computing, January 15, 1995, titled
"FDDI and PCI: DEC Makes a Perfect Match". The key extract goes like this:
If you need high-speed networking and low CPU overhead, Digital's DEFPA
FDDI card is for you.

They recorded the following server statistics for DEFPA:

	Workstations 	Network		Server CPU
	Attached	Utilization	Utilization
	------------	-----------	-----------
	1		70%		10%
	2		75%		10%

The server was a Dell P90 running NetWare 3.12, serving DOS workstations.
For comparison the article showed a SysKonnect EISA board running
one workstation at 50% network with 75% CPU.  DEFPA's advantage in
this chart is rather stunning.

I believe that you may infer similar network/CPU utilization ratios
on other systems when offered the same packet-size load¹.  This is
because the DEFPA has several architectural features that make it a
winner:

	o On-board CPU to handle Station Management functions
		The host CPU isn't bothered with all the SMT overhead.
	o On-board one megabyte packet memory buffer
		This smooths out huge bursts of network traffic.
	o Clever DMA to many, many queues sorted by hardware
		No host CPU time is spent determining packet destination.
	o Efficient queues
		Host driver and adapter communicate with producer and
		consumer indices and not with "ownership bits".
	o Etc.

There are lots of features which, when combined, give a lowly NetWare
server 75% network utilization on 10% of a P90 CPU.
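[The producer/consumer-index scheme mentioned above can be sketched roughly as follows.  This is a generic illustration of the technique, not the actual DEFPA ring layout: the host advances a producer index as it posts buffers, the adapter advances a consumer index as it drains them, and neither side scans per-descriptor ownership bits.]

```python
# Minimal descriptor-ring sketch using producer/consumer indices
# (generic illustration, not the real DEFPA data structures).

RING_SIZE = 8

class Ring:
    def __init__(self):
        self.slots = [None] * RING_SIZE
        self.producer = 0   # next slot the host will fill
        self.consumer = 0   # next slot the adapter will drain

    def free_slots(self):
        return RING_SIZE - (self.producer - self.consumer)

    def post(self, buf):
        # Host side: fill a slot, then publish it with one index update.
        assert self.free_slots() > 0, "ring full"
        self.slots[self.producer % RING_SIZE] = buf
        self.producer += 1

    def drain(self):
        # Adapter side: consume the oldest posted buffer.
        assert self.producer > self.consumer, "ring empty"
        buf = self.slots[self.consumer % RING_SIZE]
        self.consumer += 1
        return buf

ring = Ring()
for i in range(3):
    ring.post(f"packet-{i}")
print(ring.drain())   # -> packet-0
```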

I'm sorry I don't have similar numbers for an NT server.  I don't know
anyone who is able to answer .0 with hard numbers.

Regards,
Chuck

¹Would you like to buy the third harbor tunnel?  ;-) ;-)
2018.6CPU speed does matter if the transmit window is too small...STEVMS::PETTENGILLmulpFri Apr 26 1996 03:055
Few protocol implementations do much to tune the transmit window, so the
application needs to set it (if it can) when it needs more.  Windows TCP/IP
stacks generally default to values best suited to Ethernet, and with those
settings there is no way to get significantly better performance without
a change in the value of c.
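[The transmit-window point can be made concrete: the window must cover the bandwidth-delay product, or the sender idles waiting for ACKs.  A minimal sketch using the standard sockets API; the RTT figure is an assumed value for illustration, and actual stack defaults and limits vary.]

```python
import socket

# Bandwidth-delay product: the transmit window must hold at least this
# much unacknowledged data to keep the pipe full.
link_bits_per_sec = 100_000_000   # FDDI line rate
rtt_sec = 0.002                   # assumed round-trip time (illustrative)
window_bytes = int(link_bits_per_sec * rtt_sec / 8)
print(window_bytes)               # -> 25000; an Ethernet-era ~8 KB default falls short

# An application can ask for a larger send buffer where the stack permits:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, window_bytes)
s.close()
```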