T.R | Title | User | Personal Name | Date | Lines |
1091.1 | Curious | WONDER::WILLARD | | Fri Feb 07 1997 14:36 | 5 |
| Gee - this slot priority red herring keeps coming up. What
document did you see which makes you believe that PCI slots
in a T/Laser have priority?
Cheers, Bob
|
1091.2 | stars article and inst guide | STKAI1::JKJELLBERG | | Mon Feb 10 1997 11:44 | 10 |
| The Memory Channel installation guide states that adapters should be
installed in high priority slots.
There is also an article in "stars hardware" that says that the DEFPA
in an 8200/8400 must be installed in a high priority slot.
So, I'm asking again: what modules should be installed closest to the
power supply in the PCI box?
Thanks in advance, Jonas
|
1091.3 | TL PCI DMA BW PDQ, OK? | WONDER::WILLARD | | Tue Feb 11 1997 16:00 | 55 |
| The MC installation guide says that, with DWLPA, CCMAAs must only
be placed in slots 0-7 (not 8-11); with DWLPB, CCMAAs may be
placed in any slot.
I don't remember what the DEFPA problem was, so I don't know
whether or not the DWLPB cures it (but I'm inclined to suspect
that it does). Does the stars article specify whether the
DEFPA placement restriction applies to DWLPBs, or just to DWLPAs?
Does the stars article explain the underlying problem? {And, he
asked with tongue firmly in cheek, what is the URL for stars?}
There is no relationship between slot number and DMA priority
with the DWLPx. There is no PCI-PCI bridge between one segment
of PCI slots and other 4-slot segments. There is no such thing as
a high priority slot w.r.t. DMA, and any well-designed PCI widget
should operate equally well in any PCI slot of a DWLPB.
Device drivers (and other software) may, perhaps accidentally,
favor some PCI slots over others. {I can't think of a way to
delicately ask driver writers if they have included such
marketing features.} If you wish to fine-tune your system
to minimize any such effects, you will need to understand
which SCSI bus (and which MC and which FDDI and which NI and ...)
is more sensitive to driver latency than other kindred buses, and
place the more-sensitive ones in (probably) low-numbered slots of
low-numbered PCIs. In general, I think it would be more rewarding
to try to balance the load across however many PCIs you have
and - even more importantly - to try to balance the memory-read
load across the trio of 4-slot PCI segments within a PCI; this
does require that you understand the traffic expected on each
PCI device - at least the bandwidth and read:write ratio.
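{To make the balancing concrete, here is a minimal sketch in Python -
the bandwidth figures are invented for illustration; only the
three-segments-of-four-slots layout is taken from the DWLPB:}

    # Greedy placement of adapters across the three 4-slot PCI
    # segments of a DWLPB (slots 0-3, 4-7, 8-11), balancing the
    # estimated memory-read load.
    def balance_segments(adapters):
        """adapters: list of (name, est_read_MB_per_s) tuples."""
        segments = [{"slots": list(range(base, base + 4)), "load": 0.0}
                    for base in (0, 4, 8)]
        placement = {}
        # Heaviest readers first, each into the least-loaded segment
        # that still has a free slot.
        for name, bw in sorted(adapters, key=lambda a: -a[1]):
            seg = min((s for s in segments if s["slots"]),
                      key=lambda s: s["load"])
            placement[name] = seg["slots"].pop(0)
            seg["load"] += bw
        return placement

    # Hypothetical example loads (MB/s of DMA reads from memory):
    print(balance_segments([("CCMAA-0", 25.0), ("KZPSA-0", 10.0),
                            ("KZPSA-1", 10.0), ("DEFPA-0", 6.0)]))

{Heaviest-first greedy placement is crude, but it's about the level
of precision this problem deserves.}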
There is one PCI widget that can cause problems: the KZPAA,
which was intended for CDROM use - essentially off-line. If you
insist on using the KZPAA for on-line use (connected, say, to
TZmumbles or RZmumbles), then performance of other PCI widgets
on that DWLPx may be noticeably affected and performance of
other PCI widgets on that PCI segment may be seriously affected.
But, if you cared about performance more than cost, you wouldn't
have picked the KZPAA.
It is possible that some PCI widgets will have side effects that
affect the performance of other PCI widgets on the same PCI or
on the same PCI segment; the KZPAA is an example. But, we do not
generally analyze the performance of PCI widgets in sufficient
detail to predict such side-effects; and, neither widget vendors
nor widget-driver writers specify such details.
If this all sounds esoteric, you're right. But, the good news
is that most systems can be cobbled together any way that works,
with little impact on performance. For the few that are IO-intensive,
simple load balancing - across PCIs and across PCI segments - is
usually the only concern.
Cheers, Bob
|
1091.4 | | NETCAD::STEFANI | FDDI Adapters R Us | Wed Feb 12 1997 08:06 | 29 |
| >> Device drivers (and other software) may, perhaps accidentally,
>> favor some PCI slots over others. {I can't think of a way to
>> delicately ask driver writers if they have included such
>> marketing features.} If you wish to fine-tune your system
There's no reason to be delicate. These are legitimate engineering questions.
Speaking as THIS device driver developer (author of DEFPA NDIS 3 miniport
driver), there are no PCI slot favorites that I've written into my code.
The only discussion I've seen around slot priorities is regarding PCI bus
arbitration and whether some slots are favored for PCI grants. The DEFPA and
DEFEA are both PDQ-based adapters which are store-and-forward. Both adapters
can handle long bus latencies on both receive and transmit because we never
send out anything unless we have the entire packet in packet memory, and we're
less likely to drop incoming frames because we have ~1MByte of on-board packet
memory.
Our cut-through, minimum-FIFO adapters such as the DE450 and DE500 are
more susceptible to performance problems and/or transmit underruns when kept
off the bus for a considerable amount of time. On the other hand, they have
very good latency characteristics when given the bus.
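As a back-of-the-envelope illustration of the difference: roughly how
long can an adapter be kept off the bus before its on-board buffering
is exhausted?  The ~1 MByte packet memory figure is from above; the
small-FIFO depth below is an assumed value for illustration, not an
actual DE450/DE500 spec.

    # Rough bus-holdoff tolerance: buffer_bytes / line_rate.
    def max_holdoff_us(buffer_bytes, line_mbit_per_s):
        bytes_per_us = line_mbit_per_s / 8.0   # Mbit/s -> bytes/us
        return buffer_bytes / bytes_per_us

    # PDQ-based DEFPA: ~1 MByte packet memory, 100 Mb/s FDDI.
    print("PDQ: %.0f us" % max_holdoff_us(1 << 20, 100))
    # Hypothetical cut-through adapter, 256-byte FIFO, 100 Mb/s.
    print("small FIFO: %.1f us" % max_holdoff_us(256, 100))

That works out to roughly 84000 us of headroom for the PDQ versus
about 20 us for the small FIFO.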
Perhaps there were some low level hardware reasons why the DEFPA had slot
recommendations given for this Alpha system, but I can't think of any right
now.
Regards,
Larry
|
1091.5 | DWLPB | STKAI1::JKJELLBERG | | Wed Feb 12 1997 09:31 | 10 |
| Hi and thanks for your answer.
I should have done some more research.
It's a DWLPB I've got, so the MC adapters can fit anywhere.
The article was about the DWLPA, so I don't need to worry
about the DEFPAs.
Sorry for taking your time.
Regards, Jonas
|
1091.6 | No "high priority" slots | SMURF::GREBUS | DTN 381-1426 - Digital Unix | Wed Feb 12 1997 14:11 | 29 |
| In the DWLPA, the bus arbiter took advantage of the fact that it is
sometimes allowed to withdraw bus grant from a PCI device before the
device starts using the bus. This happens because the arbitration is
overlapped with the previous bus transaction.
Because some PCI devices (including the DEFPA and the PCI NVRAM)
didn't like this behavior, the upper 4 slots were modified to never
withdraw grant. [Long, complicated explanation about "why only 4
slots" omitted...]
There was no difference in arbitration priority among the slots.
The arbiter was redesigned for the DWLPB to not have the unfriendly
behavior, so there shouldn't be any DEFPA slot restrictions on DWLPBs.
I believe there was also a later pass of the DEFPA gate array that
fixed its sensitivity to the DWLPA's behavior, so there may be some
DEFPAs which will work in any slot of a DWLPA (although the console
was enforcing the original config rule at one time).
The only prioritization that I recall is that interrupts are
prioritized within a 4-slot segment: priority decreases with
increasing slot number, i.e. 0 > 1 > 2 > 3, 4 > 5 > 6 > 7, etc.,
with equal priority between segments.
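Expressed as a trivial sketch (just restating the rule above):

    # Within each 4-slot segment, lower slot number = higher
    # interrupt priority; the segments themselves are equal.
    def int_priority_rank(slot):
        """0 = highest rank within the slot's segment, 3 = lowest."""
        return slot % 4

    for slot in range(12):
        print("slot %2d: segment %d, rank %d"
              % (slot, slot // 4, int_priority_rank(slot)))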
As Bob said, balance the load across the 3 segments and you should get
the best available performance.
/gary
|
1091.7 | x-con | USPS::FPRUSS | Frank Pruss, 202-232-7347 | Fri Feb 14 1997 13:56 | 20 |
| I had seen this thread earlier and came back after talking to a
specialist in MFG about X-CON.
He says his understanding of X-CON's rules is as follows:
Use PCI slots in this order:
11, 7, 3, 10, 6, 2, 9, 5, 1, 8, 4, 0
Configure KZPSAs first, DExxx next, other devices next.
Any Memory Channel adapters should start with slot 0 and proceed to
slot 1.
He said there was a requirement or rule that dual MCs had to be on the
same PCI segment and next to each other?
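For what it's worth, that slot order is exactly what you get by
round-robining across the three 4-slot segments, taking the top slot
of each segment first - a minimal sketch to show the pattern:

    # Segments are slots 0-3, 4-7, 8-11; walk offsets 3..0, and
    # within each offset take segments 2, 1, 0.
    order = [seg * 4 + offset
             for offset in (3, 2, 1, 0)
             for seg in (2, 1, 0)]
    print(order)   # [11, 7, 3, 10, 6, 2, 9, 5, 1, 8, 4, 0]

So the ordering may simply be a way of spreading options across the
three segments.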
|
1091.8 | | NETCAD::STEFANI | FDDI Adapters R Us | Fri Feb 14 1997 16:18 | 19 |
| re: .6
>> Because some PCI devices (including the DEFPA and the PCI NVRAM)
>> didn't like this behavior, the upper 4 slots were modified to never
That's very possible with regards to the DEFPA.
>> The arbiter was redesigned for the DWLPB to not have the unfriendly
>> behavior, so there shouldn't be any DEFPA slot restrictions on DWLPBs.
>> I believe there was also a later pass of the DEFPA gate array that
>> fixed its sensitivity to the DWLPA's behavior, so there may be some
>> DEFPAs which will work in any slot of a DWLPA (although the console
>> was enforcing the original config rule at one time).
As an FYI, the new pass of the DEFPA PCI bus chip ("PFI") will be found on all
DEFPA-*B options (DEFPA-AB, DEFPA-DB, ...) so you can distinguish them from
the current DEFPA-*A options.
- Larry
|
1091.9 | Only same box | AFW4::CLEMENCE | | Fri Feb 14 1997 19:00 | 15 |
| RE: .7
> He said there was a requirement or rule that dual MC's had to be on the
> same PCI segment and next to each other?
The official rule is for UNIX TruCluster failover. The two MC
adapters must be in the same DWLPA/DWLPB box. They do not have to be
next to each other in the box, only on the same hose port.
It comes from the way the software switches the MC adapters around.
Bill Clemence
|
1091.10 | What is behind X-CON's "logic"? | USPS::FPRUSS | Frank Pruss, 202-232-7347 | Sat Feb 15 1997 07:13 | 11 |
| Are there any practical implications of the ranking of interrupt
priorities on a segment?
Why would X-CON start with the slots that have the "lowest" interrupt
priority? Is this because they start with the KZPSA and assume this
device can safely be given the lowest interrupt priority of all?
Or are these rules primarily a simple way to "enforce" distributing
options across the three segments?
FJP
|
1091.11 | XCON needs help | WONDER::WILLARD | | Sun Feb 16 1997 07:34 | 23 |
| re .7:
> He says his understanding of X-CON's rules is as follows:
>
> Use PCI slots in this order:
>
> 11, 7, 3, 10, 6, 2, 9, 5, 1, 8, 4, 0
>
> Configure KZPSAs first, DExxx next, other devices next.
>
> Any Memory Channel adapters should start with slot 0 and proceed to
> slot 1.
>
> He said there was a requirement or rule that dual MCs had to be on the
> same PCI segment and next to each other?
The rules above are clearly wrong: they ignore two-board options
(the CIPCA), they enforce MC rules that are no longer needed, and
they don't address multiple PCIs.
I'll try to get this fixed, but my last effort to get SNAFUed XCON
rules fixed in a product I did not own required over a year.
Cheers, Bob
|
1091.12 | watch those MCs cards.. | WONDER::MUZZI | | Mon Feb 17 1997 09:10 | 12 |
|
The unix MC rule of both in the same box...is just that...a unix rule.
For VMS you can only have ONE MC in a DWLPA. You can have 2 MCs in a
DWLPB, 1 in each of 2 DWLPAs, or 1 in a DWLPA and 1 in a DWLPB. Unlike
unix, VMS brings both MCs online and needs all of the DWLPA's DMA
maps, and then some, to bring them both online. With the DWLPB there
are enough DMA maps to bring both MCs online. When unix moves to
having both MCs online, they're going to have the same restriction as
VMS does now.
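A quick sketch of the VMS placement rule as stated above (just the
per-box limits, nothing more):

    # VMS rule: at most 1 MC adapter per DWLPA, up to 2 per DWLPB.
    MC_LIMIT = {"DWLPA": 1, "DWLPB": 2}

    def vms_mc_config_ok(boxes):
        """boxes: list of (box_type, mc_count) tuples."""
        return all(n <= MC_LIMIT[box] for box, n in boxes)

    print(vms_mc_config_ok([("DWLPB", 2)]))                # True
    print(vms_mc_config_ok([("DWLPA", 1), ("DWLPB", 1)]))  # True
    print(vms_mc_config_ok([("DWLPA", 2)]))                # False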
-Mark-
|
1091.13 | MC clarification, please; what do you mean, X-CON? | CAMPY::WILSON | | Wed Mar 05 1997 16:07 | 18 |
| .12 implies that you can't have MCs in two separate DWLPBs under VMS.
In other words, as long as we're in a DWLPB-only universe (no DWLPAs),
it looks like the requirement of "both MCs in the same PCI box" applies
under both UNIX and VMS. Is this correct?
thanks,
Rick Wilson
P.S. BTW, references to "X-CON" in earlier notes are odd indeed; XCON
was retired in June, 1995. There are some successor applications in
use: the MEXtools in manufacturing; the Alpha Configuration Utility
from Kylor Corp.; and the Electronic Solution Provider (ESP - cute,
eh?) from Trilogy Corp. and Digital. The last two are used by sales;
only ESP will survive long-term. Since I work on ESP (and used to work
on XCON), I am curious about which tool's rules were being cited. If
in fact those outmoded constraints persist, especially in MEX or ESP,
obviously they should be removed.
|
1091.14 | 2 DWLPBs 1 MC in each... | WONDER::MUZZI | | Wed Mar 05 1997 17:10 | 7 |
|
I guess I missed the case of 2 DWLPBs. You can have an MC in each DWLPB.
I guess the rules will change again when we go to more than 2 MCs per
system. Stay tuned...
|
1091.15 | Not sure what it was... | USPS::FPRUSS | Frank Pruss, 202-232-7347 | Wed Mar 05 1997 22:08 | 6 |
| re: .13
I'm travelling and don't have the copy in front of me. The mail is
PostScript and I don't have a viewer on my laptop. It came from
manufacturing, so I'd guess MEXtools, but it sure looked like XCON.
fjp
|