
Conference netcad::hub_mgnt

Title:DEChub/HUBwatch/PROBEwatch CONFERENCE
Notice:Firmware -2, Doc -3, Power -4, HW kits -5, firm load -6&7
Moderator:NETCAD::COLELLADT
Created:Wed Nov 13 1991
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:4455
Total number of notes:16761

2064.0. "Unidirectional IP performance problem..." by KLUSTR::GARDNER (The secret word is Mudshark.) Wed Mar 01 1995 17:29

	I'm having a strange performance problem that I'm hoping someone
	here will recognize and/or be able to help me with...I hope this
	isn't something really stupid on my part.....

	I have a PC running DOS/Windows on Ethernet and an Alpha running
	Digital UNIX aka DEC OSF/1 on FDDI with a DECbridge 900MX in between
	them. (The topology is actually just slightly more complex than that
	but I believe these are the only essential facts. If it becomes
	necessary I can draw a picture of the real topology).

	Using ftp, performance Ethernet->FDDI is great. However,
	performance FDDI->Ethernet is abysmal; there is an almost
	two-order-of-magnitude difference. This is observed regardless
	of the amount of data being transferred, except for *very* small
	amounts (like a few hundred bytes).

	Pushing the same bits around over IPX vs IP between the same
	two systems gave me equally good performance in both directions.

	Now my first thought when seeing this was that it probably had
	something to do with IP fragmentation. With HUBwatch, I went
	to play with the DB900MX and was surprised by two things:
	- IP fragmentation was already "enabled" although I'm pretty
	  sure I did not turn it on. I thought (from reading other notes
	  in this conf.) that its default state was disabled.
	- more amazing to me, disabling it HAD NO EFFECT on this problem
	The only "reward" I got for this effort was the sinking feeling
	that either I fundamentally didn't understand what IP fragmentation
	was intended for or something really weird was happening...

	I also tried a variety of ftp/ftpd implementations on both sides
	to assure myself that this wasn't an application layer problem.
	It isn't...
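
	(Aside: to reproduce this with ftp taken out of the picture
	entirely, a minimal timed transfer over a raw TCP socket will do.
	The Python sketch below is one rough way; the port number and
	transfer size are arbitrary placeholders. Run it with no argument
	on one host to receive, then with the receiver's hostname on the
	other host to send, and swap roles to test the reverse direction.)

	    # Minimal TCP throughput probe, no ftp involved.
	    import socket, sys, time

	    PORT = 5001            # arbitrary unprivileged port
	    TOTAL = 8 * 2**20      # bytes per run (8 MB)

	    def receiver():
	        srv = socket.socket()
	        srv.bind(("", PORT))
	        srv.listen(1)
	        conn, _ = srv.accept()
	        while conn.recv(65536):     # drain until the sender closes
	            pass

	    def sender(host):
	        s = socket.socket()
	        s.connect((host, PORT))
	        chunk = b"x" * 65536
	        start = time.time()
	        sent = 0
	        while sent < TOTAL:
	            s.sendall(chunk)
	            sent += len(chunk)
	        s.close()
	        print("%.1f Kbyte/s" % (TOTAL / 1024.0 / (time.time() - start)))

	    if len(sys.argv) == 1:
	        receiver()
	    else:
	        sender(sys.argv[1])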

	I have looked at counters on both systems and used IRIS to look
	at things "in the middle" on the Ethernet side of the fence.
	All looks "normal" at least to my eyes. (Unfortunately, I don't
	have a PC FDDI adapter which I could use to either narrow down the
	problem or use IRIS on the FDDI side of the fence.)

	The DB900MX is running FW V1.4.0. (Aside: I recently upgraded
	all the various net bits around here with the FT of HUBloader.
	It was *way* too easy ;-)

	Sooooo, does anyone have some insight on this one? Part of me hopes
	this IS a really stupid problem (but the other part doesn't want
	to feel stupid...)

	Thanx in advance for any input

	_kelley

2064.1. "I'm not imagining this!" by KLUSTR::GARDNER (The secret word is Mudshark.) Thu Mar 02 1995 16:14

	fwiw I have now verified the results in .0 using a variety of PCs
	with a variety of NICs on the Ethernet side and a variety of Alphas
	running either DEC OSF/1 (aka Digital UNIX) or OpenVMS...the results
	are consistently a two-order-of-magnitude difference in
	Kbyte/second performance...

	I'm stumped...it's gotta be something simple, but what?? HELP!

	_kelley

2064.2. "Maybe IP MTU Discovery is happening?" by NETCAD::KRISHNA Thu Mar 02 1995 17:25

    On the DB900MX we implement the IP MTU discovery algorithm. One
    possibility is that when the FTP session is started, the OSF/1 station
    tries to use the max size packet (4500 bytes) but with the Don't
    Fragment bit set. Since we cannot pass the packet to Ethernet without
    fragmenting it, the DB900MX would send an ICMP message to the OSF/1
    telling it that it can't pass the packet. Now the OSF/1 has to retry
    with a smaller size packet. I am not sure about the following:
    
    1. What is the retry algorithm? In other words, does the OSF/1 drop
    the packet size to 1500 on receiving an ICMP msg, or does it go
    through a trial-and-error mechanism, each attempt resulting in an
    ICMP msg from the bridge, until it hits 1500-byte packets?
    
    2. Once we send the ICMP message back, at what priority is it
    serviced, and how quickly is the FTP application notified? Could
    there be a high latency there?
    
    The best way to debug this problem is to get an FDDI analyzer and
    monitor the exchanges between the bridge and the OSF/1 system,
    looking at the timestamps of the messages.
    
    For those who are interested, the IP MTU Discovery algorithm is
    specified in RFC 1191 and is implemented in the DB900MX, DS900EF and
    the PE900TX.
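
    To make the mechanism concrete, here is a rough sketch of the probing
    described above, in Python. This is an illustration only: it assumes a
    Linux host (the numeric socket option values below are Linux-specific
    and would differ on OSF/1), and a real prober would have to wait for
    the ICMP to arrive rather than trusting the first successful send.

        import socket

        # Linux-specific values, shown purely for illustration.
        IP_MTU_DISCOVER = 10   # per-socket PMTU discovery mode
        IP_PMTUDISC_DO  = 2    # always set the Don't Fragment bit
        IP_MTU          = 14   # read back the kernel's cached path MTU

        def probe_path_mtu(host, port=9, start=4500):
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
            s.connect((host, port))
            size = start
            while size > 68:                # minimum IPv4 MTU
                try:
                    s.send(b"x" * size)     # DF is set on this datagram
                    # no error: report the kernel's current estimate
                    return s.getsockopt(socket.IPPROTO_IP, IP_MTU)
                except OSError:
                    # EMSGSIZE: an ICMP "fragmentation needed" lowered
                    # the cached path MTU; retry at the new size
                    size = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
            return size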
    
    Hope this helps,
    Krishna

2064.3. "yeah, but..." by KLUSTR::GARDNER (The secret word is Mudshark.) Thu Mar 02 1995 17:51

	re: .2
	hmmm...thought of that, and some variation of that is what I would
	expect *IF* IP Fragmentation in the DB900MX was Disabled...thing
	is, I get the same result with IP Fragmentation Enabled, which I
	thought would avoid the packet-size renegotiation. From what
	I've read in this conf. I also thought the DB900MX could
	saturate an Ethernet segment in this mode (i.e. IP Frag Enabled
	shouldn't impose a performance penalty)...OR AM I MISSING
	SOMETHING???

	_kelley

	ps - I'll see what I can do about getting an analyzer on FDDI
	(maybe there's a software-based one I can use)....

2064.4. "never mind" by KLUSTR::GARDNER (The secret word is Mudshark.) Fri Mar 03 1995 17:13

	well it turns out this has something to do with the particular
	network software I'm using on the PCs here...in short my setup
	involves using Novell's ODI Netware client and Microsoft's
	TCP/IP-32a stack...it's some sort of tuning problem but, as is
	typical when products from Novell *and* Microsoft are involved,
	there is little hope of finding a quick answer...

	sorry for the noise ;-)

	_kelley

2064.5. "Raw 802.3 problem?" by PTOJJD::DANZAK (Pittsburgher) Sun Mar 05 1995 14:27

    Did you need to set the "enable raw 802.3" option on the
    DECswitches...?  If they are running RAW 802.3 that could cause
    issues...
    j
    
2064.6. "not the problem" by KLUSTR::GARDNER (The secret word is Mudshark.) Mon Mar 06 1995 09:46

	re: .5

	- my Netware world is set up to use (the now default for Novell)
	  802.2, not the bogus "raw 802.3"...
	- the problem I'm having is with IP, not IPX...indeed, IPX is
	  performing as expected.....

	(aside: our implementations of Netware for Unix on both OpenVMS
	 and Digital UNIX do not support "raw 802.3" mode)...
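
	fwiw, the two framings are easy to tell apart on the wire: in
	Ethernet II the two bytes after the MAC addresses are a type code
	>= 0x0600, while in 802.3 they are a length; Novell's "raw 802.3"
	then starts the IPX header directly, so the next two bytes are the
	IPX checksum 0xFFFF instead of an 802.2 LLC header. A rough Python
	sketch of that check (frame here is just the captured frame bytes):

	    def classify(frame):
	        tl = int.from_bytes(frame[12:14], "big")
	        if tl >= 0x0600:
	            return "Ethernet II (type 0x%04x)" % tl
	        # tl <= 1500: an 802.3 length; inspect the payload start
	        if frame[14:16] == b"\xff\xff":
	            return "Novell raw 802.3 (IPX checksum, no LLC)"
	        if frame[14:16] == b"\xaa\xaa":
	            return "802.2 SNAP"
	        return "802.2 LLC (DSAP=0x%02x SSAP=0x%02x)" % \
	               (frame[14], frame[15])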

	_kelley

2064.7. "window sizes?" by WOTVAX::BANKST (Network Mercenary) Mon Mar 06 1995 12:42

    Is there a mismatch of TCP window size, perhaps, with the OSF/1 system
    having a smaller max window by default than the PC?  So it would
    request a small window when you initiate from the OSF/1, but use a
    bigger window when accepting PC connects?

    Or am I crediting TCP/IP with DECnet/OSI-like functionality it doesn't
    have?? :-)
    
    Tim