Title:                   FDDI - The Next Generation
Moderator:               NETCAD::STEFANI
Created:                 Thu Apr 27 1989
Last Modified:           Thu Jun 05 1997
Last Successful Update:  Fri Jun 06 1997
Number of topics:        2259
Total number of notes:   8590
1067.0. "My UDP performance - Is this credible?" by DPDMAI::BEATTIE (Panic=Going to PC's) Fri Aug 20 1993 14:22
Hi!
I'm trying to understand the performance I've observed in a custom
application, and the results of some capacity testing I've recently
been working on. Can you help?
I've written a server application which is designed to send high
quantities of data over an FDDI channel to multiple black-box clients.
UDP over IP is the protocol used for the data transfer, and the server has
been run on a 6000-610 (VMS 5.5-2) using both TCP/IP Services for OpenVMS
2.0d and TGV MultiNet with similar results: a little more than 30
clients on one FDDI adapter at 1.5 mbit/sec each (roughly 45 mbit/sec
aggregate) saturates the CPU.
In all cases, somewhere above about 27-28 clients, CPU consumption
seems to increase non-linearly, and SPM PC sampling points to the
driver for the TCP/IP product in kernel mode.
I wondered what would happen if I bypassed the TCP/IP package on the
FDDI, and built my own UDP packet at the application level. Using a
test program which repeatedly sends the same UDP packet in a
"while ( SYS$QIOW() & 1 );" loop, we consistently observed about
44 mbit/sec aggregate from the 6000-610, with only about 60% CPU
consumption.
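For concreteness, a loop of the shape described above might look roughly
like the following. This is only a minimal sketch, not the actual test
program: the frame size and the P1/P2 arguments the LAN driver expects
for IO$_WRITEVBLK are assumptions here, and the SYS$ASSIGN and
IO$_SETMODE port setup is omitted.

#include <starlet.h>
#include <iodef.h>
#include <ssdef.h>

#define FRAME_LEN 1472                    /* assumed datagram size            */

static unsigned short chan;               /* channel from SYS$ASSIGN (omitted) */
static unsigned short iosb[4];            /* I/O status block                  */
static char frame[FRAME_LEN];             /* pre-built IP/UDP packet           */

int main(void)
{
    /* SYS$ASSIGN to the FDDI port, and the IO$_SETMODE startup that
       lets the application hand the driver complete frames, are
       omitted here.                                                  */

    /* Send the same packet as fast as the driver will take it;
       stop on the first error status.                                */
    while (sys$qiow(0, chan, IO$_WRITEVBLK, iosb, 0, 0,
                    frame, FRAME_LEN,     /* P1/P2: buffer and length
                                             (exact layout is driver
                                             specific, an assumption)  */
                    0, 0, 0, 0) & 1)
        ;

    return SS$_NORMAL;
}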
Thinking then that the I/O post-processing latency might represent
additional available bandwidth, we came up with a variation of the above
that uses asynchronous I/O completion, watching process quota
consumption to try to maintain a small pending workload against the
port. In this case, we monitored I/Os per second both internally and
externally, and our counts seem to suggest we hit somewhere in the
neighborhood of 64 to 68 mbit/sec (saturating the CPU). We don't have
a network sniffer or real data consumers, so I'm unable to verify the
actual load asserted on the FDDI, and I suspect we may have overcounted
a bit.
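A minimal sketch of that asynchronous variant is below. Again, this is
illustrative only and not the real code: the slot depth, frame size, and
QIO parameters are assumptions, and channel/port setup is omitted as
before. The idea is simply to claim a free slot, post the transmit with
SYS$QIO and an AST, and let the AST free the slot and wake the main loop
so the pending workload stays topped up.

#include <starlet.h>
#include <iodef.h>
#include <ssdef.h>

#define DEPTH     8                       /* assumed pending-I/O depth  */
#define FRAME_LEN 1472                    /* assumed datagram size      */

static unsigned short chan;               /* channel from SYS$ASSIGN    */
static char frame[FRAME_LEN];             /* pre-built IP/UDP packet    */

static struct {
    unsigned short iosb[4];
    volatile int   busy;
} slot[DEPTH];

static volatile unsigned long completed;  /* transmits finished; sampled
                                             externally for the rate    */

/* AST routine: runs when a transmit completes; astprm is the slot index. */
static void xmit_ast(int i)
{
    completed++;
    slot[i].busy = 0;
    sys$wake(0, 0);                       /* let the main loop repost    */
}

/* Post one transmit into a free slot; return 0 if all slots are busy. */
static int post_one(void)
{
    int i, status;

    for (i = 0; i < DEPTH && slot[i].busy; i++)
        ;
    if (i == DEPTH)
        return 0;

    slot[i].busy = 1;                     /* claim before posting        */
    status = sys$qio(0, chan, IO$_WRITEVBLK, slot[i].iosb,
                     xmit_ast, i,
                     frame, FRAME_LEN, 0, 0, 0, 0);
    if (!(status & 1))
        slot[i].busy = 0;                 /* release on failure          */
    return status & 1;
}

int main(void)
{
    /* channel assignment and port setup omitted as before */
    for (;;) {
        while (post_one())                /* keep the transmit queue full */
            ;
        sys$hiber();                      /* an AST wakes us when a slot
                                             frees up                     */
    }
}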
To test that, I suggested that we build up a quota-saturating load
and then begin measuring successful QIOs (since each one we successfully
post consumes quota returned by an actual I/O completion). My
thought was that we would be able to see whether the FDDI device could
complete more than one packet each time the token arrived. Using
this strategy, CPU saturation happened at a much lower QIO rate,
leading me to believe that the work required to post each I/O at process
quota saturation levels was displacing the 6000-610's capacity to
actually complete the I/O.
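One crude cross-check that needs no smart consumer is to convert the
counted completion rate back into an approximate wire rate and see
whether it is even plausible for the ring. The numbers below are
assumptions for illustration, not measurements: 4352 bytes is just the
default IP MTU over FDDI and may not match what the test actually sends.

#include <stdio.h>

int main(void)
{
    double frames_per_sec  = 1900.0;   /* assumed: counted QIO completions/sec */
    double bytes_per_frame = 4352.0;   /* assumed: datagram size (FDDI IP MTU) */

    /* Approximate wire rate, ignoring MAC/LLC overhead (small for big frames). */
    double mbit = frames_per_sec * bytes_per_frame * 8.0 / 1.0e6;

    printf("approx. %.1f mbit/sec\n", mbit);  /* about 66 mbit/sec with these numbers */
    return 0;
}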
I have already reviewed the numerous performance notes here and didn't
really get a firm sense of whether my testing was valid or my results
were reasonable. The test software I'm using represents a possible
implementation approach for my full server, but before I take on that
work, I'd like to get a feel for the potential return on the effort
invested.
Can anyone tell me:
(1) Are the results I'm reporting here *sane*?
(2) Is there a *best* method for getting the biggest throughput out
of an FDDI channel *while the normal O/S is active*?
(3) Is there a credible way of measuring actual throughput
achieved *without a smart data consumer*?
-- Thanks a megabit!
Brian Beattie
T.R | Title | User | Personal Name | Date | Lines