
Conference wonder::turbolaser

Title:TurboLaser Notesfile - AlphaServer 8200 and 8400 systems
Notice:Welcome to WONDER::TURBOLASER, in its new home shortly
Moderator:LANDO::DROBNER
Created:Tue Dec 20 1994
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:1218
Total number of notes:4645

1091.0. "Config of PCI-box ?" by STKAI1::JKJELLBERG () Thu Feb 06 1997 13:13

    Hi.
    
    What's the best config of the following in the PCI boxes?
    I'm concerned about slot priority.
    The 8400 is running console FW 4.1-6 (Nov-96).
    
    In each PCI box: 2 x KZPSAA, 2 x DEFPA-AA, 2 x CCMAA MC adapters
    
    
    
    regards  
    Jonas
    
Reply  Title  User (Personal Name)  Date  Lines
1091.1  "Curious"  WONDER::WILLARD  Fri Feb 07 1997 14:36  (5 lines)
	Gee - this slot priority red herring keeps coming up.  What
	document did you see which makes you believe that PCI slots
	in a T/Laser have priority?

Cheers, Bob
1091.2  "stars article and inst guide"  STKAI1::JKJELLBERG  Mon Feb 10 1997 11:44  (10 lines)
    The Memory Channel installation guide states that adapters should be
    installed in high-priority slots.
    There is also an article in "stars hardware" that says the DEFPA in an
    8200/8400 must be installed in high-priority slots.
    
    So, I'm asking again: which modules should be installed closest to the
    power supply in the PCI box?
    
    Thanks in advance, Jonas
    
1091.3  "TL PCI DMA BW PDQ, OK?"  WONDER::WILLARD  Tue Feb 11 1997 16:00  (55 lines)
	The MC installation guide says that, with DWLPA, CCMAAs must only
	be placed in slots 0-7 (not 8-11); with DWLPB, CCMAAs may be
	placed in any slot.

	I don't remember what the DEFPA problem was, so I don't know
	whether or not the DWLPB cures it (but I'm inclined to suspect
	that it does).  Does the stars article specify whether the
	DEFPA placement restriction applies to DWLPBs, or just to DWLPAs?
	Does the stars article explain the underlying problem?  {And, he
	asked with tongue firmly in cheek, what is the URL for stars?}

	There is no relationship between slot number and DMA priority
	with the DWLPx.  There is no PCI-PCI bridge between one segment
	of PCI slots and other 4-slot segments.  There is no such thing as
	a high priority slot w.r.t. DMA, and any well-designed PCI widget
	should operate equally well in any PCI slot of a DWLPB.

	Device drivers (and other software) may, perhaps accidentally,
	favor some PCI slots over others.  {I can't think of a way to
	delicately ask driver writers if they have included such
	marketing features.}  If you wish to fine-tune your system
	to minimize any such effects, you will need to understand
	which SCSI bus (and which MC and which FDDI and which NI and ...)
	is more sensitive to driver latency than other kindred buses, and 
	place the more-sensitive on (probably) low-numbered slots of 
	low-numbered PCIs.  In general, I think it would be more rewarding 
	to try to balance the load across however many PCIs you have 
	and - even more importantly - to try to balance the memory-read 
	load across the trio of 4-slot PCI segments within a PCI; this 
	does require that you understand the traffic expected on each 
	PCI device - at least the bandwidth and read:write ratio.
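
	As a rough sketch of that balancing idea (Python, purely
	illustrative): the adapter list below is the one from .0, but the
	per-adapter memory-read bandwidth figures are made-up placeholders,
	not measured numbers.

	    # Greedy balancer: spread adapters across the three 4-slot
	    # segments (slots 0-3, 4-7, 8-11) of one DWLPB, placing the
	    # heaviest estimated readers into the least-loaded segment first.
	    SEGMENTS = (0, 1, 2)

	    def balance(adapters):
	        """adapters: list of (name, est_read_mb_per_s) tuples."""
	        load = {seg: 0 for seg in SEGMENTS}
	        placement = {seg: [] for seg in SEGMENTS}
	        for name, bw in sorted(adapters, key=lambda a: a[1], reverse=True):
	            seg = min(load, key=load.get)   # least-loaded segment so far
	            placement[seg].append(name)
	            load[seg] += bw
	        return placement

	    # Made-up read-bandwidth estimates (MB/s) for the .0 adapter set.
	    example = [("KZPSA-1", 15), ("KZPSA-2", 15),
	               ("DEFPA-1", 12), ("DEFPA-2", 12),
	               ("CCMAA-1", 30), ("CCMAA-2", 30)]
	    print(balance(example))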

	There is one PCI widget that can cause problems:  the KZPAA,
	which was intended for CDROM use - essentially off-line.  If you
	insist on using the KZPAA for on-line use (connected, say, to
	TZmumbles or RZmumbles), then performance of other PCI widgets
	on that DWLPx may be noticeably affected and performance of
	other PCI widgets on that PCI segment may be seriously affected.
	But, if you cared about performance more than cost, you wouldn't
	have picked the KZPAA.
	
	It is possible that some PCI widgets will have side effects that
	affect the performance of other PCI widgets on the same PCI or
	on the same PCI segment; the KZPAA is an example.  But, we do not
	generally analyze the performance of PCI widgets in sufficient
	detail to predict such side-effects; and, neither widget vendors
	nor widget-driver writers specify such details.  

	If this all sounds esoteric, you're right.  But, the good news
	is that most systems can be cobbled together any way that works, 
	with little impact on performance.  For the few that are IO-intensive, 
	simple load balancing - across PCIs and across PCI segments - is 
	usually the only concern.
	
Cheers, Bob
1091.4  NETCAD::STEFANI (FDDI Adapters R Us)  Wed Feb 12 1997 08:06  (29 lines)
>>	Device drivers (and other software) may, perhaps accidentally,
>>	favor some PCI slots over others.  {I can't think of a way to
>>	delicately ask driver writers if they have included such
>>	marketing features.}  If you wish to fine-tune your system

There's no reason to be delicate.  These are legitimate engineering questions.

Speaking as THIS device driver developer (author of DEFPA NDIS 3 miniport
driver), there are no PCI slot favorites that I've written into my code.

The only discussion I've seen around slot priorities is regarding PCI bus
arbitration and whether some slots are favored for PCI grants.  The DEFPA and
DEFEA are both PDQ-based adapters which are store-and-forward.  Both adapters
can handle long bus latencies on both receive and transmit because we never
send out anything unless we have the entire packet in packet memory, and we're
less likely to drop incoming frames because we have ~1MByte of on-board packet
memory.

Our cut-through, minimum FIFO adapters such as the DE450 and DE500 adapters are
more susceptible to performance problems and/or transmit underruns when kept
off the bus for a considerable amount of time.  On the other hand, they have
very good latency characteristics when given the bus.
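
To put rough numbers on that (purely illustrative Python): the ~1 MByte packet
memory figure is from above; the FIFO size used for the cut-through case is an
assumed value, not a DE450/DE500 spec, and both are evaluated at FDDI line rate
just for comparison.

    # Milliseconds of bus starvation each buffer can absorb at full line rate.
    FDDI_BYTES_PER_SEC = 100e6 / 8       # 100 Mb/s FDDI
    PDQ_PACKET_MEMORY  = 1 << 20         # ~1 MByte on-board (from the note)
    SMALL_FIFO         = 2 * 1024        # assumed few-KB cut-through FIFO

    def tolerable_stall_ms(buffer_bytes, rate=FDDI_BYTES_PER_SEC):
        return 1000.0 * buffer_bytes / rate

    print("PDQ (DEFPA/DEFEA): ~%.0f ms" % tolerable_stall_ms(PDQ_PACKET_MEMORY))
    print("small cut-through FIFO: ~%.2f ms" % tolerable_stall_ms(SMALL_FIFO))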

Perhaps there were some low level hardware reasons why the DEFPA had slot
recommendations given for this Alpha system, but I can't think of any right
now.

Regards,
   Larry
1091.5  "DWLPB"  STKAI1::JKJELLBERG  Wed Feb 12 1997 09:31  (10 lines)
    Hi and thanks for your answer.
    
    I should have done some more research.
    It's a DWLPB I've got, so the MC adapters can fit anywhere.
    The article was about the DWLPA, so I don't need to worry
    about the DEFPAs.
    
    Sorry for taking your time.
    
    Regards , Jonas 
1091.6  "No "high priority" slots"  SMURF::GREBUS (DTN 381-1426 - Digital Unix)  Wed Feb 12 1997 14:11  (29 lines)
    In the DWLPA, the bus arbiter took advantage of the fact that it is
    sometimes allowed to withdraw bus grant from a PCI device before the
    device starts using the bus.  This happens because the arbitration is
    overlapped with the previous bus transaction.
    
    Because some PCI devices (including the DEFPA and the PCI NVRAM) 
    didn't like this behavior, the upper 4 slots were modified to never 
    withdraw grant [long, complicated explanation about "why only 4 slots"
    omitted...]
    
    There was no difference in arbitration priority among the slots.
    
    The arbiter was redesigned for the DWLPB to not have the unfriendly
    behavior, so there shouldn't be any DEFPA slot restrictions on DWLPBs.
    I believe there was also a later pass of the DEFPA gate array that
    fixed its sensitivity to the DWLPA's behavior, so there may be some
    DEFPAs which will work in any slot of a DWLPA (although the console
    was enforcing the original config rule at one time).
    
    The only prioritization that I recall is that interrupts are
    prioritized within a 4-slot segment...priority decreases with
    increasing slot number, i.e. 0 > 1 > 2 > 3, 4 > 5 > 6 > 7, etc.,
    with equal priority between segments.
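    
    Restating that ordering mechanically (an illustrative Python fragment,
    not anything the console or XCON actually runs):

        # Segment and interrupt-priority rank for a DWLPx slot, per the rule
        # above: 4-slot segments, priority falls as the slot number rises
        # within a segment, and the segments themselves are equal.
        def irq_rank(slot):
            if not 0 <= slot <= 11:
                raise ValueError("DWLPx slots are numbered 0..11")
            return slot // 4, slot % 4    # (segment, rank); rank 0 = highest

        for s in range(12):
            seg, rank = irq_rank(s)
            print("slot %2d: segment %d, priority rank %d" % (s, seg, rank))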
    
    As Bob said, balance the load across the 3 segments and you should get
    the best available performance.
    
    	/gary
    
1091.7  "x-con"  USPS::FPRUSS (Frank Pruss, 202-232-7347)  Fri Feb 14 1997 13:56  (20 lines)
    I had seen this thread earlier and came back after talking to a
    specialist in MFG about X-CON.
    
    He says his understanding of X-CONS rules are as follows:
    
    Use PCI slots in this order:
    
    11, 7, 3, 10, 6, 2, 9, 5, 1, 8, 4, 0
    
    Configure KZPSA's first, DExxx next, other devices next.
    
    Any memory channel  adapters should start with 0 and proceed to 1
    
    He said there was a requirement or rule that dual MC's had to be on the
    same PCI segment and next to each other?
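    
    As a quick, purely illustrative check (Python, not taken from X-CON
    itself), that slot order simply rotates through the three 4-slot
    segments, working down from the highest-numbered slot in each:

        XCON_ORDER = [11, 7, 3, 10, 6, 2, 9, 5, 1, 8, 4, 0]
        for slot in XCON_ORDER:
            print("slot %2d -> segment %d" % (slot, slot // 4))
        # Segments come out 2, 1, 0, 2, 1, 0, ... so the ordering at least
        # spreads options evenly across the three segments.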
    
    
    
    
    
1091.8  NETCAD::STEFANI (FDDI Adapters R Us)  Fri Feb 14 1997 16:18  (19 lines)
re: .6
    
>>    Because some PCI devices (including the DEFPA and the PCI NVRAM) 
>>    didn't like this behavior, the upper 4 slots were modified to never 

That's very possible with regards to the DEFPA.
    
>>    The arbiter was redesigned for the DWLPB to not have the unfriendly
>>    behavior, so there shouldn't be any DEFPA slot restrictions on DWLPBs.
>>    I believe there was also a later pass of the DEFPA gate array that
>>    fixed its sensitivity to the DWLPA's behavior, so there may be some
>>    DEFPAs which will work in any slot of a DWLPA (although the console
>>    was enforcing the original config rule at one time).

As an FYI, the new pass of the DEFPA PCI bus chip ("PFI") will be found on all
DEFPA-*B options (DEFPA-AB, DEFPA-DB, ...) so you can distinguish them from 
the current DEFPA-*A options.

- Larry
1091.9  "Only same box"  AFW4::CLEMENCE  Fri Feb 14 1997 19:00  (15 lines)
RE: .7

>    He said there was a requirement or rule that dual MC's had to be on the
>    same PCI segment and next to each other?

	The official rule is for Unix TruCluster failover.  The two MC 
adapters must be in the same DWLPA/DWLPB box.  They do not have to be next to
each other in the box, only on the same hose port.


	It comes from the way the software switches the MC adapters around.
    
    

				Bill Clemence
1091.10  "What is behind X-CON's "logic"?"  USPS::FPRUSS (Frank Pruss, 202-232-7347)  Sat Feb 15 1997 07:13  (11 lines)
    Are there any practical implications of the ranking of interrupt
    priorities on a segment?
    
    Why would X-CON start with the slots that have the "lowest" interrupt
    priority?  Is this because they start with the KZPSA and assume this
    device can safely be given the lowest interrupt priority of all?
    
    Or are these rules primarily a simple way to "enforce" distributing 
    options across the three segments?
    
    FJP
1091.11  "XCON needs help"  WONDER::WILLARD  Sun Feb 16 1997 07:34  (23 lines)
re .7:

>    He says his understanding of X-CONS rules are as follows:
>    
>    Use PCI slots in this order:
>    
>    11, 7, 3, 10, 6, 2, 9, 5, 1, 8, 4, 0
>    
>    Configure KZPSA's first, DExxx next, other devices next.
>    
>    Any memory channel  adapters should start with 0 and proceed to 1
>    
>    He said there was a requirement or rule that dual MC's had to be on the
>    same PCI segment and next to each other?

	The rules above are clearly wrong: they ignore two-board options
	(the CIPCA), they enforce MC rules that are no longer needed, and
	they don't address multiple PCIs.

	I'll try to get this fixed, but my last effort to get SNAFUed XCON
	rules fixed in a product I did not own required over a year.

Cheers, Bob
1091.12  "watch those MC cards.."  WONDER::MUZZI  Mon Feb 17 1997 09:10  (12 lines)
    
    
    The unix MC rule of both in the same box...is just that...a unix rule.
    For VMS you can only have ONE MC in a DWLPA. You can have 2 MCs in a
    DWLPB, 1 in each of 2 DWLPAs, or 1 in a DWLPA and 1 in a DWLPB. Unlike
    unix, VMS brings both MCs online and needs all of the DWLPA's DMA maps,
    and then some, to bring them both online. With the DWLPB there are
    enough DMA maps to bring both MCs online. When unix moves to having
    both MCs online, it's going to have the same restriction as VMS does now.
    
    
    	-Mark-
1091.13  "MC clarification, please; what do you mean, X-CON?"  CAMPY::WILSON  Wed Mar 05 1997 16:07  (18 lines)
    .12 implies that you can't have a MC in two separate DWLPBs under VMS. 
    In other words, as long as we're in a DWLPB-only universe (no DWLPAs),
    it looks like the requirement of "both MCs in the same PCI box" applies 
    under both UNIX and VMS.  Is this correct?
    
    thanks,
    
    Rick Wilson
    
    P.S.  BTW, references to "X-CON" in earlier notes are odd indeed; XCON
    was retired in June, 1995.  There are some successor applications in
    use: the MEXtools in manufacturing; the Alpha Configuration Utility
    from Kylor Corp.; and the Electronic Solution Provider (ESP - cute,
    eh?) from Trilogy Corp. and Digital.  The last two are used by sales;
    only ESP will survive long-term.  Since I work on ESP (and used to work
    on XCON), I am curious about which tool's rules were being cited.  If
    in fact those outmoded constraints persist, especially in MEX or ESP,
    obviously they should be removed.
1091.14  "2 DWLPBs 1 MC in each..."  WONDER::MUZZI  Wed Mar 05 1997 17:10  (7 lines)
    
    
    I guess I missed the case of 2 DWLPBs. You can have an MC in each DWLPB.
    I guess the rules will change again when we go to more than 2 MCs per
    system. Stay tuned..
    
    
1091.15  "Not sure what it was..."  USPS::FPRUSS (Frank Pruss, 202-232-7347)  Wed Mar 05 1997 22:08  (6 lines)
    re: .13
    I'm travelling and don't have the copy in front of me.  The mail is
    PostScript and I don't have a viewer on my laptop.  It came from
    manufacturing, so I'd guess MEXtools, but it sure looked like XCON.
    
    fjp