T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
522.1 | | KONING::KONING | Paul Koning, NI1D | Mon Mar 30 1992 16:25 | 7 |
| Sounds reasonable. Watch out for the bandwidth of the 6xx bridges; it isn't
quite the full 30 Mb/s you might hope for.
Latency shouldn't be an issue; another ms or two isn't going to break the
cluster.
paul
|
522.2 | DECNIS for back-to-back DECbridge | BONNET::LISSDANIELS | | Mon Mar 30 1992 16:54 | 8 |
| Shortly you should be looking at the DECNIS 600 with FDDI interfaces
to replace the back-to-back DECbridges for FDDI-FDDI connectivity.
It will route TCP/IP, DECnet, and OSI at 50,000 packets/sec and bridge
all other traffic. FCS is Sept/Oct, but test units might be available
as early as July...
Torbjorn
|
522.3 | There is no denying that Latency affects cluster performance... | STAR::SALKEWICZ | It missed... therefore, I am | Tue Mar 31 1992 12:35 | 29 |
| Latency should not be treated so lightly. Latency has become one
of the two criteria specified in the cluster's SPD (throughput
being the other) for determining supportable LAVC (LAN) cluster
configurations. Statements like "another ms or two of latency
will not break the cluster" are probably true; however, the SPD
is also worded to guarantee certain performance levels. The two
(latency and performance) are directly related, so if you vary
one, you vary the other. If you set up configurations with high
latency, perhaps the cluster will not "break", but it is very easy
to get into a situation where the performance does not live up
to the level expected or specified in the SPD, which to a paying
customer is just as good as broken. And one unhappy customer is
just as bad for DEC as another.
There is also the distinct possibility that another ms or two
of latency will indeed break the cluster. This would have to be
a configuration that is really pushing all the other limits, but
it is possible.
It seems in general, Paul, that the cluster protocol does not adhere
to your understanding of how protocols should be implemented. I do
not claim to be an expert on the LAVC protocol, but your replies
based on a "general understanding of the way things ought to be" are
more often than not wrong. Admittedly, this is because of a poorly
designed protocol, but the level of misinformation is potentially
dangerously high, and I ask that you avoid extrapolating in this area.
/Bill
|
522.4 | The road to minimal network latencies | ORACLE::WATERS | I need an egg-laying woolmilkpig. | Tue Mar 31 1992 13:20 | 23 |
| VAXcluster performance isn't just a question of protocol design.
If Paul were to say "an extra 10 ms of average latency doesn't make
much difference", we'd club him regardless of whether the comment was
applied to VAXclusters. Even for file-level servers, an extra 10 ms
of latency will greatly impact the speed of, say, a string search
through thousands of files.
There are but two speed constants in a system that reasonably serve
to hide other delays: disk seek time and network packet duration.
For Ethernet, a useful ~1000-byte message lasts ~1 ms. So even 1 ms
of latency can slow down applications significantly, if the CPUs were
fast enough to exchange Ethernet packets at line rates.
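As a quick back-of-the-envelope check of that figure, here is a minimal
sketch in Python (assuming 10 Mb/s Ethernet and a ~1000-byte frame; the
numbers, not the language, are the point):

    # Time for one frame to pass a point on the wire: bits / line rate.
    def serialization_ms(frame_bytes, line_rate_mbps):
        return frame_bytes * 8 / (line_rate_mbps * 1e6) * 1e3

    print(serialization_ms(1000, 10))    # 10 Mb/s Ethernet: 0.8 ms, i.e. ~1 ms
    print(serialization_ms(1000, 100))   # 100 Mb/s FDDI:    0.08 ms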
All of the newly purchased computers in HLO can (and should) be connected
directly to FDDI segments: VAXstation 4000/60, VAX 6000-500, VAX 6000-600,
Laser, DECstation 5000/xx, etc. All of the equipment that operates at
the same line speed (100 Mb/s for FDDI) enjoys an extra latency benefit:
no store-and-forward delay in the communications equipment. When
talking from one FDDI host to another, the delay barely increases as
you go through extra multi-ported FDDI bridges (if all FDDI rings in
the path are very lightly loaded). Of course, you can't benefit from
this principle until multi-ported FDDI bridges with "cut-through" hit
the market. No "Digital has it now", but you can plan to do it some day.
|
522.5 | | KONING::KONING | Paul Koning, NI1D | Tue Mar 31 1992 16:30 | 7 |
| It's certainly true that a ms of latency has an impact on your
performance. How much? Don't know. There are plenty of other
significant delays in the system, from protocol processing overhead to
disk access times. Also keep in mind that the 1 ms for a bridge is under
pretty high load; it will often do the job significantly faster.
paul
|
522.6 | Please clarify the delay of bridges | ORACLE::WATERS | I need an egg-laying woolmilkpig. | Tue Mar 31 1992 17:27 | 12 |
| >disk access times. Also keep in mind that the 1 ms for a bridge is under
>pretty high load; it will often do the job significantly faster.
Is that true going from FDDI to Ethernet through a 10-100 bridge?
(It certainly can't be true going from Ethernet to FDDI.)
I assumed that the DECbridge 6xx never forwards a packet before fully
receiving it. Receiving ~1000 bytes over Ethernet takes ~1 ms.
The application in .0 is concerned about the latency going from
Ethernet, to FDDI, and back to another Ethernet. No matter how fast
the bridge is, this will take at least 1 ms longer than going straight
from an Ethernet host to an Ethernet workstation. That's not a big
deal, but 10 ms would be a disaster.
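The arithmetic behind that floor, as a sketch (store-and-forward at both
bridges assumed, per the above):

    # Ethernet -> FDDI -> Ethernet through two store-and-forward bridges:
    # the frame is fully received three times along the path.
    def ser_ms(nbytes, mbps):
        return nbytes * 8 / (mbps * 1e6) * 1e3

    direct   = ser_ms(1000, 10)        # one Ethernet reception: 0.8 ms
    via_fddi = ser_ms(1000, 10) + ser_ms(1000, 100) + ser_ms(1000, 10)
    print(via_fddi - direct)           # ~0.9 ms extra, before any bridge
                                       # processing, hence "at least 1 ms"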
|
522.7 | what latency means to me | QUIVER::HARVELL | | Wed Apr 01 1992 10:17 | 19 |
| re .6
| Because the latency of bridges and routers is usually measured from the
last bit in to the first bit out of a packet, the reception of the frame
is usually not included in the quoted latency values. So you are correct
in saying that there is an ~1 ms cost (for an ~1000-byte packet) just from
going through any store-and-forward device; bridge latency is on top of
that delay.
If you take two stations that are on the same LAN and then separate
them by a bridge, a router, or just more cable (the cable's added
propagation delay is real, though you have to go to extremes to measure
it), you will see a drop in performance in any request/response
protocol between those two stations. What an FDDI backbone buys you is not
increased performance between any two stations but the ability to have many
more pairs of stations communicating at the same time.
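To make the request/response point concrete, a sketch (the 2 ms base
round trip is an assumption for illustration, not a measurement):

    # A synchronous request/response protocol completes one exchange per
    # round trip, so its transaction rate is the reciprocal of the RTT.
    def transactions_per_sec(base_rtt_ms, per_hop_ms, hops):
        # Each store-and-forward hop is crossed twice per exchange.
        return 1000.0 / (base_rtt_ms + 2 * hops * per_hop_ms)

    BASE_RTT_MS = 2.0   # assumed host processing + wire time on one LAN
    for hops in (0, 1, 2):
        print(hops, transactions_per_sec(BASE_RTT_MS, 1.0, hops))

Each ~1 ms store-and-forward hop in the path cuts the exchange rate
noticeably: 500/s with no hops, 250/s with one, ~167/s with two, under
these assumptions.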
If you really need high performance between two stations, they should be
located on the same LAN.
|
522.8 | Yes and no and maybe! | STAR::SALKEWICZ | It missed... therefore, I am | Wed Apr 01 1992 16:05 | 17 |
|
Well, having direct FDDI connections between systems *will certainly*
improve the performance of request/response protocols compared to
their performance over Ethernet. This assumes equivalent levels
(kbytes/sec, not "utilization") of background traffic in both
cases, and that some reasonable number of large datagrams are being
exchanged. (Direct Ethernet latency is potentially much smaller than
FDDI under load, and if the packets are small, latency is a heavier
factor than throughput or bandwidth.)
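A sketch of that crossover (the media-access figures below are
assumptions chosen to illustrate the point, not measurements):

    # Per-packet delivery time = media-access wait + serialization.
    def packet_ms(nbytes, mbps, access_ms):
        return access_ms + nbytes * 8 / (mbps * 1e6) * 1e3

    for nbytes in (64, 1000):
        enet = packet_ms(nbytes, 10, 0.05)   # lightly loaded Ethernet: short deferral
        fddi = packet_ms(nbytes, 100, 0.5)   # loaded FDDI: wait for the token
        print(nbytes, enet, fddi)
    # 64-byte packets:   Ethernet wins; the access wait dominates.
    # 1000-byte packets: FDDI wins; serialization dominates.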
I agree that this is not the typical motivation for migrating to FDDI.
The ability to allow more nodes to use the LAN at once is more
important to most customers than getting any two machines to really
scream.
/Bill
|
522.9 | more questions | CADSYS::LEMONS | And we thank you for your support. | Wed Oct 28 1992 11:59 | 18 |
| I'm bbbbbbbaaaaaaaccccccccckkkkkk, with a few more questions on this topic:
1) how can I tell what % of the SCA traffic contains 'data' as opposed to
'overhead'?
The scheme described in .0 is predicated on the assumption that most of the SCA
traffic is carrying data, and that a small percentage of SCA traffic is the
overhead of VAXcluster communication. Does that sound like a valid assumption?
How can I measure this, short of breaking apart the packets?
2) In a VAX 6000 system running VMS V5.5+, can both an Ethernet and FDDI
interface device be active at the same time? If so, what are the restrictions
(if any)?
Thanks!
tl
[cross-posted in the CLUSTER and FDDI conferences]
|