
Conference 7.286::fddi

Title:FDDI - The Next Generation
Moderator:NETCAD::STEFANI
Created:Thu Apr 27 1989
Last Modified:Thu Jun 05 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2259
Total number of notes:8590

866.0. "FDDI killed my network performance!" by MSBCS::MORGENSTEIN () Thu Feb 18 1993 15:23

I'm in desperate need of FDDI help.  I upgraded my network from Ethernet to
FDDI and my test performance declined.  

I'm running some client-server TPC-B tests.  I have a VAX7640 as a back-end and
many 3100-40 client machines.  Each transaction generates 1 IO in and 1 IO out
on the network.  I'm trying to run at throughputs > 500 transactions/second
(tps).
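
As a back-of-envelope check (my own arithmetic, not measured data; the
per-message byte count is a guess), here is the load each segment should see
at the target rate:

    # Rough load estimate for the target workload.  Assumes one request and
    # one reply per transaction and an even split across the 3 segments;
    # 512 bytes/message is a deliberately high guess for TPC-style traffic.
    target_tps = 500          # transactions/second, whole system
    messages_per_txn = 2      # 1 IO in, 1 IO out
    segments = 3

    msgs_per_segment = target_tps * messages_per_txn / segments
    print(f"messages/sec per segment: {msgs_per_segment:.0f}")   # ~333

    bytes_per_msg = 512
    utilization = msgs_per_segment * bytes_per_msg * 8 / 10e6
    print(f"approx. Ethernet utilization: {utilization:.1%}")    # ~14%

Even with the pessimistic message size, each segment should be loafing along
at well under a fifth of its capacity.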

I had been running the 7640 with 3 DEMNAs, each daisy-chained to a separate
segment with its own clients.  I decided to try FDDI and DECbridges, and when I
did, the performance dropped.  The new setup has a DEMFA connected to a
concentrator, which in turn connects to a DECbridge 610.  My 3 segments are now
connected to the Ethernet ports on the DECbridge.

                               fiber     | thick |thin                    
	7640                                 
                   concentrator              DESTA__uvax__uvax__uvax...
        +---+                       +----+  /                          
        |   |       +-+             |    =_/                           
        |  DEMFA----| |-------------= 610=---DESTA__uvax__uvax__uvax...
        |   |       +-+             |    =_                            
        +---+                       +----+ \                           
                                            \DESTA__uvax__uvax__uvax...
                                                                          
                                                                          

Performance starts out okay, then withers away, finally settling at about 280
tps.  That's about the same throughput I would see running only 1 Ethernet
segment.

What have I done wrong?  I need this to work and would appreciate any ideas you
might have.  I am not a network person.  I have not changed any settings on my
bridges or concentrators (though I do have MSU running in case I need to).

Ruth Morgenstein
System Performance Analysis Group
T.R  Title  User  Personal Name  Date  Lines
866.1  STAR::PARRIS  "VMS is VMS is OpenVMS now"  Fri Feb 19 1993 11:00  13 lines
Your conversion to FDDI did introduce some additional latency (the bridge
receives a packet on one side, sticks it into a queue where it waits for any
packets ahead of it to be transmitted, then arbitrates for the other
interconnect and retransmits the packet).  Having a DEMNA directly on every
Ethernet minimized the packet latency. 
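
To put a very rough number on that extra hop (idealized wire times only; I
haven't measured the DECbridge's own forwarding delay, and the frame size is
just a plausible guess for this workload):

    # Store-and-forward cost of one small frame crossing the bridge.
    frame_bytes = 100                 # a small request/reply frame (guess)
    ethernet_bps = 10e6
    fddi_bps = 100e6

    t_eth = frame_bytes * 8 / ethernet_bps   # serialization time on Ethernet
    t_fddi = frame_bytes * 8 / fddi_bps      # serialization time on FDDI

    # Direct DEMNA path: one Ethernet transmission.
    # Bridged path: full frame on FDDI, then re-sent on Ethernet.
    print(f"direct Ethernet hop  : {t_eth*1e6:.0f} us")             # ~80 us
    print(f"bridged FDDI+Ethernet: {(t_fddi + t_eth)*1e6:.0f} us")  # ~88 us

Per frame the difference is small; it's sustained queueing at the
FDDI-to-Ethernet transition, not raw latency, that would really hurt.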

But if the performance starts out high and goes down, perhaps something else is
going on.  Have you checked the DECnet counters to see if any unusual counts
show up?  Perhaps the FDDI adapter could be sending out more data destined for
a single Ethernet segment than that segment can handle?  You could try lowering
the DECnet Executor Pipeline Quota and see if that helps (see SPEZKO::CLUSTER
note 3166 or see James Frazier's article in the Nov. '92 issue of VAXcluster
Systems Quorum Journal). 
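
The reasoning behind that suggestion, roughly: the pipeline quota is, as I
understand it, the number of bytes of unacknowledged data NSP will keep in
flight per logical link, so it bounds how big a burst one link can dump
through the bridge at once.  The values below are illustrations, not
recommendations.

    # How long a bridge would take to drain one link's worth of in-flight
    # data onto a 10 Mb/s Ethernet.  With many client links bursting at
    # once, the Ethernet-side queue grows accordingly.
    ethernet_bps = 10e6

    def drain_time_ms(outstanding_bytes):
        return outstanding_bytes * 8 / ethernet_bps * 1e3

    for quota in (4032, 10000, 30000):     # example quota values only
        print(f"quota {quota:>6} bytes -> ~{drain_time_ms(quota):.1f} ms to drain")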
866.2  "any other ideas?"  MSBCS::MORGENSTEIN  Fri Feb 19 1993 13:48  40 lines
I have lowered pipeline quota on the VAX7640 side to 4032.  That doesn't fix
the problem.

The tests always distribute the data evenly across the 3 Ethernets.  I always
run equal numbers of clients on each segment.
The TPC-A transaction is quite simple:

	Client:  "Teller <x> from branch <y> wants to update account <z>"
	Server:	 "Here's the new balance" 



>Your conversion to FDDI did introduce some additional latency (the bridge
>receives a packet on one side, sticks it into a queue where it waits for any
>packets ahead of it to be transmitted, then arbitrates for the other
>interconnect and retransmits the packet).  Having a DEMNA directly on every
>Ethernet minimized the packet latency. 

At 333 buffered IOs/sec on each segment of the bridge I can't be queueing too
much (I hope).  I have tried some Dtsend tests.  I get slightly more
bytes/second from the server to a client on the other side of the bridge than
to a client attached to an Ethernet on a DEMNA.

DECbridge experts: is there anything in the bridge that would cause an evenly
distributed flow of traffic back and forth through it to be forwarded in
bursts?  (At this low rate I would hope not.)

DEMFA experts: are there any driver or microcode updates that might address
this performance problem?  Any known problems with small packets?

Are there any buttons I might have forgotten to push or settings I forgot to
set?  This seems a lot like a physical problem, since nothing I'm doing comes
close to any limits I've read about.

I check my line counters all the time.  No errors.  No buffers unavailable.  No
excessive collision rates (usually 3-5% deferred + collisions).  I do see
retransmits (at steady state, blocks received should equal blocks transmitted --
when things fall apart, one or the other grows).
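
For anyone following along, this is the sort of check I do on those counters:
compare the deltas over an interval rather than the raw totals.  The numbers
below are made up purely to show the shape of the check, not real data.

    # Toy check for the steady-state symmetry I expect: blocks received on
    # the server should track blocks transmitted (one reply per request).
    samples = [
        # (seconds, blocks_received, blocks_transmitted) - illustrative only
        (0,       0,      0),
        (60,  19900,  20050),
        (120, 39100,  40900),   # gap widening -> retransmits climbing
    ]

    for (t0, r0, x0), (t1, r1, x1) in zip(samples, samples[1:]):
        dt = t1 - t0
        print(f"{t0:>3}-{t1:<3}s  rx {(r1-r0)/dt:6.0f}/s  tx {(x1-x0)/dt:6.0f}/s"
              f"  imbalance {((x1-x0)-(r1-r0))/dt:+.0f}/s")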

Ruth
866.3  "Some progress with changing LRPSIZE"  MSBCS::MORGENSTEIN  "Wah Hey"  Tue Feb 23 1993 16:09  12 lines
I have been watching note 854 with great interest and believe it has a lot to
do with my problem.  I changed LRPSIZE and my system got to about 365 TPS
before I managed to break something else.  (It used to get only 280 TPS.)  I
haven't had time since to try it again.
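
My (possibly wrong) mental model of why this matters, based on my reading of
note 854 rather than anything authoritative: receive buffers larger than
LRPSIZE miss the fast LRP lookaside list and fall through to a slower pool
allocation, and FDDI's large frame size puts every receive buffer over that
line.  As a toy model (the costs and sizes below are arbitrary, not how the
allocator is actually priced):

    # Toy model only - not a statement of how the VMS pool allocator works.
    FAST_COST = 1      # arbitrary units: allocation from the lookaside list
    SLOW_COST = 10     # arbitrary units: general nonpaged-pool allocation

    def alloc_cost(request_bytes, lrpsize):
        return FAST_COST if request_bytes <= lrpsize else SLOW_COST

    fddi_buffer = 4500              # roughly an FDDI max frame; exact value varies
    for lrpsize in (1504, 4544):    # illustrative values, not recommendations
        print(f"LRPSIZE {lrpsize}: relative cost per FDDI receive buffer = "
              f"{alloc_cost(fddi_buffer, lrpsize)}")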

We're in the process of bringing up Blade to see how it treats our setup.  I
probably won't get to test the network stuff today, but will post results as
soon as I get them.

Thanks to Bill Wade for figuring this out.

Ruth
866.4  "Blade FDDI looks better, but still not good enough"  MSBCS::MORGENSTEIN  "Wah Hey"  Wed Feb 24 1993 14:56  16 lines
I have data from Blade.  The FDDI still does not keep up with the Ethernets.
I'm up to 4 Ethernet segments now.  The DEMFA results show 8.5% less throughput
for 12.5% more interrupt-stack time on the primary.
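
To make the comparison concrete (just arithmetic on the two figures above,
reading both as relative changes; no new data):

    # Relative efficiency of the FDDI configuration vs. the 4-DEMNA baseline.
    throughput_ratio = 1 - 0.085    # FDDI delivers 8.5% less throughput
    cpu_ratio = 1 + 0.125           # for 12.5% more interrupt-stack time

    work_per_cpu = throughput_ratio / cpu_ratio
    print(f"transactions per unit of interrupt-stack time: {work_per_cpu:.2f}x"
          f" of the DEMNA setup (~{1 - work_per_cpu:.0%} worse)")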

The FDDI setup has 4 segments going to 2 DECbridge 610s and 1 DEMFA on the back
end.  The Ethernet setup has the 4 segments connected to 4 DEMNAs on the back
end.  

I'm using V6.0-5KE.  Does this have the "more efficient driver" alluded to in
note 854.8?  If not, how do I get it?

We're working towards an audited TPC-A test.  Will I really have to use the
many-DEMNA solution (inelegant, at best), or can someone please help me publish
numbers using a more modern fiber solution?

Ruth
866.5  STAR::GAGNE  "David Gagne - VMS/LAN Development"  Wed Feb 24 1993 17:01  8 lines
    BLADE functional code freeze was November 1991.
    
    The more efficient driver was completed only about two weeks ago, so it
    came in after the BLADE freeze.  In fact, it isn't available for OpenVMS
    VAX yet; it hasn't even been debugged there.
    
    Sorry.
866.6  KONING::KONING  "Paul Koning, A-13683"  Wed Feb 24 1993 17:51  3 lines
1991?  Is that a typo?

	paul
866.7  STAR::GAGNE  "David Gagne - VMS/LAN Development"  Thu Feb 25 1993 13:52  1 line
    1991 is correct.