T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---
2018.1 | | NETCAD::STEFANI | | Fri Apr 19 1996 14:39 | 16 |
| >> -< CPU sizing for the DEFPA >-
Hmmm...well, we used the Bricks demo at Networks Expo two years ago to
show two DEFPAs in Avantis (or were they Mustangs?) doing ~90 Mbps
point to point on Digital UNIX. Each system only had one processor.
I don't know of any rule of thumb that maps a number of CPUs to X
Mbits/sec of throughput. Depending on the OS in question (UNIX,
OpenVMS, or NT), you're likely to find someone with an opinion on
what you should have.
Remember, there are a LOT of factors in maximizing network throughput.
Simply throwing CPUs into a system does not guarantee that you'll see
any better network throughput.
/l
|
2018.2 | what's a MIP? | BEET::EAGAN | Among the fashion impaired | Tue Apr 23 1996 00:26 | 14 |
| Mark (.0) was trying to help me out on this one.
The customer will be using NT. Specific platform isn't known at this
point. DEFPA-UA is the adapter in question. We were hoping to
find some benchmarks that show CPU utilization when moving
data via CDDI.
Yes, there are many variables (UDP vs. TCP, etc.), data rates,
payload sizes. The customer wants to know how much of his
horsepower he'll be losing driving the card.
His application involves moving images in and performing some
manipulations that require a tremendous amount of SPECints. We'd
like to help him size.
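If it comes to that, we could generate our own numbers by blasting
data over a TCP socket and watching CPU utilization in NT's
Performance Monitor while it runs. A minimal sender sketch (BSD-style
sockets; the peer address, port, and sizes are placeholders, and on
NT you'd add the usual Winsock WSAStartup() boilerplate):

    /* blast.c -- crude TCP throughput sender, sketch only.          */
    /* BSD sockets assumed; on Windows NT add WSAStartup()/Winsock.  */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #define BUFSZ (64 * 1024)      /* 64 KB per write -- placeholder */
    #define NBUFS 1024             /* 64 MB total     -- placeholder */

    int main(void)
    {
        static char buf[BUFSZ];    /* payload contents don't matter  */
        struct sockaddr_in sin;
        int s, i;

        memset(&sin, 0, sizeof(sin));
        sin.sin_family      = AF_INET;
        sin.sin_port        = htons(9000);            /* placeholder */
        sin.sin_addr.s_addr = inet_addr("10.0.0.2");  /* placeholder */

        s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0 || connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
            perror("connect");
            return 1;
        }
        for (i = 0; i < NBUFS; i++)    /* time this loop externally  */
            if (write(s, buf, BUFSZ) != BUFSZ) {
                perror("write");
                break;
            }
        close(s);
        return 0;
    }

Time the write loop and divide bytes sent by seconds for Mbps;
Perfmon's % Processor Time over the same interval is the horsepower
cost we're after.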
|
2018.3 | hmmmmmm | BEET::EAGAN | Among the fashion impaired | Tue Apr 23 1996 00:29 | 6 |
| I was not being serious about "What's a MIP". I know what it is.
It just doesn't make sense here (does it?).
Let's say we have a 250 MHz 2100 system. That's 1,000 MIPs, right?
So.... are you saying that to drive a DEFPA at 50 Mbps takes only
5% of the CPU? I don't think so! See what I mean?
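Back of the envelope, the number that matters is cycles per byte
through the stack and driver, not MIPS. A sketch, with a made-up
cycles-per-byte figure just to show the spread:

    /* cpucost.c -- back-of-envelope CPU fraction to drive N Mbps.  */
    /* The cycles-per-byte figure is a made-up assumption.          */
    #include <stdio.h>

    int main(void)
    {
        double clock_hz        = 250e6;  /* 250 MHz AlphaServer 2100 */
        double rate_bps        = 50e6;   /* 50 Mbps on the wire      */
        double cycles_per_byte = 100.0;  /* hypothetical stack cost  */

        double bytes_per_sec = rate_bps / 8.0;
        double cycles_needed = bytes_per_sec * cycles_per_byte;

        printf("CPU fraction: %.0f%%\n", 100.0 * cycles_needed / clock_hz);
        return 0;
    }

At a hypothetical 100 cycles/byte this prints 250% -- more than the
whole CPU -- and at 10 cycles/byte it prints 25%. That spread is
exactly why a single MIPS rating can't answer the sizing question.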
|
2018.4 | | NETCAD::STEFANI | | Tue Apr 23 1996 11:03 | 11 |
| There's no way that I can answer your questions. Your best bet is to
get a DEFPA into your customer's hands and have them judge for
themselves.
Understand that Microsoft admits to performance bottlenecks in NT.
That's something that they're trying to address over time. Note, the
same Alpha system with the same options (including DEFPA) can get close
to FDDI line speed running Digital UNIX, but gets less running Windows
NT.
/l
|
2018.5 | one reference point | NETCAD::ROLKE | Tune in, turn on, fail over | Tue Apr 23 1996 11:09 | 44 |
| There was a review in Network Computing, January 15, 1995, titled
"FDDI and PCI: DEC Makes a Perfect Match". The key extract goes like this:
If you need high-speed networking and low CPU overhead, Digital's DEFPA
FDDI card is for you.
They recorded the following server statistics for DEFPA:
	Workstations    Network        Server CPU
	Attached        Utilization    Utilization
	------------    -----------    -----------
	     1              70%            10%
	     2              75%            10%
The server was a Dell P90 running NetWare 3.12 to DOS workstations.
For comparison, the article showed a SysKonnect EISA board running
one workstation at 50% network utilization with 75% CPU. DEFPA's
advantage in this chart is rather stunning.
I believe that you may infer similar network/CPU utilization ratios
on other systems when offered the same packet-size load*. This is
because the DEFPA has several architectural features that make it a
winner:
o On-board CPU to handle Station Management functions
  The host CPU isn't bothered with all the SMT overhead.
o On-board one-megabyte packet memory buffer
  This smooths out huge bursts of network traffic.
o Clever DMA to many, many queues sorted by hardware
  No host CPU time is spent determining packet destination.
o Efficient queues
  Host driver and adapter communicate with producer and
  consumer indices and not with "ownership bits" (see the
  sketch below).
o Etc.
There are lots of features which, when combined, give a lowly NetWare
server 75% network utilization on 10% of a P90 CPU.
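To illustrate the producer/consumer point, here's a minimal ring
sketch. The names and sizes are mine, not the DEFPA's actual
descriptor layout:

    /* ring.c -- producer/consumer indexing, illustrative only.     */
    /* Names and sizes are made up, not the DEFPA's real descriptor */
    /* layout; real hardware also needs memory barriers and a way   */
    /* to post the index to the adapter.                            */
    #define RING_SIZE 32u            /* power of two: cheap modulo  */

    struct ring {
        void        *slot[RING_SIZE]; /* packet buffer pointers     */
        unsigned int producer;        /* only the host writes this  */
        unsigned int consumer;        /* only the adapter writes it */
    };

    /* Host side: post a buffer if the ring isn't full.  Fullness is
     * computed from the two indices alone, so the host never has to
     * read an "ownership bit" back from each descriptor.           */
    int ring_post(struct ring *r, void *buf)
    {
        if (r->producer - r->consumer >= RING_SIZE)
            return -1;                             /* ring is full  */
        r->slot[r->producer % RING_SIZE] = buf;
        r->producer++;                             /* publish       */
        return 0;
    }

Each side only ever writes its own index, so finding out what the
other side has done is a comparison of two counters rather than a
walk over per-descriptor ownership bits.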
I'm sorry I don't have similar numbers for an NT server. I don't know
anyone who is able to answer .0 with hard numbers.
Regards,
Chuck
*Would you like to buy the third harbor tunnel? ;-) ;-)
|
2018.6 | CPU speed does matter if the transmit window is too small... | STEVMS::PETTENGILL | mulp | Fri Apr 26 1996 03:05 | 5 |
Few protocol implementations do much to tune the transmit window, so the
application needs to set it (if it can) when it needs more. Windows TCP/IP
stacks generally have values that are best suited to Ethernet, and there
is no way to get significantly better performance with those settings without
a change in the value of c: throughput can never exceed window size divided
by round-trip time, and the round trip is ultimately bounded by the speed
of light.
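Where the stack lets the application override the default, the usual
knob is SO_SNDBUF before connect(). A sketch, assuming BSD-style
sockets; whether a given Windows stack honors the value is another
matter:

    /* Enlarge the transmit window before connect(); sketch only.   */
    /* 128 KB is just an example -- size it to bandwidth * RTT.     */
    #include <sys/socket.h>

    int set_sndbuf(int s)
    {
        int bufsize = 128 * 1024;
        return setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                          (char *)&bufsize, sizeof(bufsize));
    }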
|