T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
2151.1 | I've got the same issue. | NQOS01::nqsrv216.nqo.dec.com::Hobson | Rich Hobson 847-475-8960 | Fri May 17 1996 13:16 | 12 |
2151.2 | | MSBCS::NEE | | Thu May 23 1996 11:24 | 1 |
2151.3 | OK, so if it's supported... | NQOS01::nqsrv229.nqo.dec.com::Hobson | Rich Hobson 847-475-8960 | Thu May 30 1996 00:06 | 12 |
2151.4 | What is the answer? | NQOS01::16.29.16.107::Pellerin | | Thu Feb 20 1997 18:54 | 14 |
| I have a customer that wants to use 4 DE500s in each 2100A in a 2-node
TruCluster ASE for an NFS service cluster.
.0 and its replies did not yield an answer to these questions:
1. Can I use more than 2 (in my case 4) DE500s?
2. If not, why not? What is the reason for the limitation?
My O/S will be Digital UNIX 4.0B.
Thanks,
-BAP
|
2151.5 | Only 3 PCI slots are available! | FOUNDR::FOX | David B. Fox -- DTN 285-2091 | Fri Feb 21 1997 12:19 | 6 |
| Sables only have 3 PCI slots. So unless there is an operating system
restriction, AND you don't have any other PCI options, I'd say you
could have 3. There are lots of ISA and EISA slots available, but I
don't think we have a Fast Ethernet solution for those buses.
David
|
2151.6 | | AFW3::RENDA | Mike Renda dtn: 223-4663 | Fri Feb 21 1997 12:50 | 11 |
|
Although you have eight PCI slots in an AlphaServer 2100A, the following
recommendation applies to the DE500-AA, based on test results:
UNIX and Windows NT test results recommend a maximum of 2 DE500-AAs for
performance and a maximum of 4 for connectivity on the 2100A system.
Support for 3 or 4 DE500-AAs on the AS2100A is only for customers who want
the additional connectivity or failover and are willing to accept
significantly reduced throughput rates to get it, or for customers who do
not typically run heavy network loads.
|
2151.7 | OK but confused... | NQOS01::16.29.16.109::Pellerin | | Sun Feb 23 1997 09:56 | 27 |
| re .-1
Thanks for the reply.
If I understand your note correctly, Digital internal tests show that beyond 2
DE500s, the performance on a 2100A suffers. Am I interpreting this
correctly?
I am a bit confused as to why, since the I/O bandwidth is rated at 667 MB/s
(megabytes per second) and a single DE500 only imposes a maximum of 100 Mb/s
(megabits per second), or about 12 MB/s. I don't understand how 2 DE500s
(a total of around 24 MB/s) can overpower the 2100A backplane and I/O
capability.
There is obviously something I am overlooking or do not understand about the
way a network interface card affects a system as opposed to, say, a 20 MB/s
KZPSA. There seems to be no "guideline" for KZPSAs (except for the fact that
they have to go in slots 4-8, I believe). So 4 KZPSAs will impose a
theoretical maximum of 80 MB/s, right? What is different?
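To spell out my arithmetic (a back-of-the-envelope sketch in Python; the
667 MB/s figure is the quoted aggregate I/O bandwidth, and the per-card
numbers are theoretical spec maxima, not measurements):

    # Rough aggregate-load check against the quoted AS2100A I/O bandwidth.
    # All figures are theoretical spec-sheet maxima, not measured numbers.
    SYSTEM_IO_MBPS = 667.0          # quoted AS2100A I/O bandwidth, MB/s
    DE500_MBPS = 100.0 / 8.0        # 100 Mb/s Fast Ethernet ~= 12.5 MB/s
    KZPSA_MBPS = 20.0               # fast-wide SCSI, MB/s

    for n in (2, 4):
        total = n * DE500_MBPS
        print("%d x DE500 = %5.1f MB/s (%4.1f%% of quoted I/O bandwidth)"
              % (n, total, 100.0 * total / SYSTEM_IO_MBPS))
    print("4 x KZPSA = %5.1f MB/s (%4.1f%%)"
          % (4 * KZPSA_MBPS, 100.0 * 4 * KZPSA_MBPS / SYSTEM_IO_MBPS))

Either way the aggregate is a small fraction of the quoted number, which is
exactly why the 2-card guideline puzzles me.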
Can someone please explain why testing shows that more than 2 DE500s inhibit
performance? Is this "guideline" true for the 4000/4100 or 8xxx servers?
Thanks,
-BAP
|
2151.8 | | TARKIN::LIN | Bill Lin | Sun Feb 23 1997 11:29 | 27 |
| re: .7 by NQOS01::16.29.16.109::Pellerin
Hi BAP,
I'm not all that surprised by Mike's assertion in .6, and I can come up
with some thoughts as to why it might be so.
If I may speculate...
One part of the I/O throughput equation that I think you may be
forgetting is latency, and the limits on the amount of
prefetching and posted writing that one can do. Packet sizes also come
into play. Apparently, with only two DE500s the system hardly ever
runs out of data to send or of write buffers to put data in. Beyond that
number, again apparently, one starts to run out of one of these
critical resources for continuous data flow.
For efficient single-stream data flow, packet sizes need to be big,
i.e. with low command overhead relative to data content. However, this
works against you by increasing latency for the other devices. It's a
balancing act.
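A crude way to put numbers on that (purely hypothetical values, chosen only
to show the shape of the problem, not anything measured on a 2100A):

    # Little's-law style sketch: a latency-bound DMA stream is capped at
    #   throughput = outstanding_buffers * bytes_per_buffer / round_trip_latency
    # no matter what the bus itself is rated at. Values are invented.
    def dma_ceiling_mbps(buffers, buffer_bytes, latency_us):
        """Best-case MB/s for one latency-bound DMA stream."""
        return buffers * buffer_bytes / latency_us   # bytes/us == MB/s

    # e.g. four outstanding 64-byte prefetch buffers, 2 us per round trip:
    print(dma_ceiling_mbps(4, 64, 2.0))   # -> 128.0 MB/s, and that ceiling
                                          #    is shared by every card
                                          #    behind the bridge

Two cards can share a ceiling like that comfortably; a third and fourth start
starving the others of buffers.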
Hope this helps and I hope I'm not too far off the mark. ;-)
Cheers,
/Bill
|
2151.9 | ok - but what's different? | NQOS01::nqsrv202.nqo.dec.com::Pellerin | | Mon Feb 24 1997 12:51 | 24 |
| re: .-1
Thanks for the analysis. I guess I'm after the reason why a 2100A won't
operate as efficiently with more than 2 DE500s vs. a 4000- or 8000-class
machine. Since it's not backplane-speed related, why is a 4100 (in
documents that I have seen, I think...) allowed to have up to 4 DE500s while a
2100A is recommended to have only 2?
Bottom line - what is the big difference in the hardware or firmware (since
Digital UNIX is Digital UNIX) that makes the 2100A less able to
handle more than 2 DE500s?
Have we (Digital) actually tested more than 2 DE500s in a 4000-class server?
(I know that's a question for the rawhide notesfile...) If so, and if the
4000-class servers can function OK with 4 DE500s, what is the significant
difference between the 2100A and the 4000-class? (I know the backplane speed
is different, but I believe we have sufficient backplane speed in the 2100A,
so...)
The discussion rambles on...
Regards,
-BAP
|
2151.10 | Bridges, etc. | WONDER::WILLARD | | Wed Feb 26 1997 09:39 | 37 |
| The AS4xxx has a rather low-latency path between the PCI(s)
and host memory, and 64-bit (264 MMB/S) PCIs. The AS2xxx has
a relatively high-latency 32-bit (132 MMB/S) PCI, and also
has a very limited number of buffers (for DMA prefetch).
The AS8xxx has very long read latency, but to compensate for
that, it has up to 12 32-bit PCIs and up to 36 PCI segments, allowing
for lots of concurrency.
Like all vendors, we quote PCI performance in MMB/S, which means
Marketing MegaBytes/Second. If you think the MMB/S spec lets
you predict performance, you have allowed yourself to be conned
by our HypeWare. As the multi-DE500 case shows, there is far
more to PCI performance than MMB/S.
The short answer to your question about the reason for the
PCI performance difference between platforms is: bridge design.
The PCI-host bridges are totally different for these three
platforms, in ways that strongly affect DMA (and other)
performance. And, to complicate matters, some PCI widgets are
far more sensitive to these differences than others in several
ways. As it happens, the KZPSA is pretty good (insensitive),
due to its FIFO+buffers design; the DE500 is not good, due to
being optimized for cheap little PCs; the KZPAA is absolutely
terrible.
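To put toy numbers on the widget-sensitivity point (invented parameters,
chosen only to show the shape of the effect; they are not the real bridge
or card figures):

    # Why a deep-FIFO widget (KZPSA-like) shrugs off bridge read latency
    # while a shallowly buffered one (DE500-like) does not. Parameters
    # are invented for illustration; they are not measured values.
    PCI_RATED_MBPS = 132.0   # 32-bit/33 MHz PCI marketing peak

    def effective_mbps(outstanding_bytes, latency_us):
        # Latency-bound ceiling, capped by the rated bus speed.
        return min(PCI_RATED_MBPS, outstanding_bytes / latency_us)

    for latency in (0.5, 1.0, 2.0):              # bridge round trip, us
        deep = effective_mbps(512.0, latency)    # deep FIFO + buffers
        shallow = effective_mbps(64.0, latency)  # shallow buffering
        print("latency %.1f us: deep %6.1f MB/s, shallow %5.1f MB/s"
              % (latency, deep, shallow))

The deep-FIFO card stays pinned at the bus rate across that whole latency
range; the shallow one loses half its throughput every time the latency
doubles. That, in miniature, is the KZPSA versus the DE500 behind a slow
bridge.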
Now, I'm sure that one of you noters will react to the above with a
demand that we explain how our bridges and widgets work and
interact; we've heard this all before. We do not have a process
in place to capture enough detail about the workings of bridges
and widgets and drivers (D'U, O'V, WNT) to be able to predict
performance of multi-widget configurations, and IMHO we will never
be able to justify the expenditures necessary to get one. So,
don't demand what you won't fund.
Sorry for sounding harsh and pessimistic, but this is deja vu all over again.
Cheers, Bob
|
2151.11 | Thanks | NQOS01::nqsrv422.nqo.dec.com::Pellerin | | Thu Feb 27 1997 08:46 | 10 |
| Thanks. That was the info I was looking for. I'll encourage my customer to
try 2 DE500s and, if need be, experiment with adding DE435s (or running
additional DE500s in 10 Mb/s mode) to gain incremental performance - and watch
closely.
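For reference: if memory serves, forcing a DE500 down to 10 Mb/s is done from
the SRM console through the per-adapter mode variable. The variable name
follows the device name, so check SHOW DEVICE first; ewa0 below is just an
assumption:

    >>> set ewa0_mode Twisted-Pair

and to return it to autonegotiation:

    >>> set ewa0_mode Auto-Negotiate

I'd verify the exact variable name and value strings on the console itself
before depending on them.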
I'll post any revelations.
Thanks to all for responding.
Regards,
-BAP
|