T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---
561.1 | | QUIVER::SLAWRENCE | | Mon Dec 13 1993 12:39 | 16 |
| The 'someone in Digital' was correct. The DECbridge90 backplane port
should never be configured such that you can get to another bridge port
from it. It has a 200-address limit on that port (which is part of why
we were able to build it more cheaply); if that limit is exceeded, it
goes into Flood mode and no longer does all of what you want it to do.
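A minimal sketch of that failure mode, in Python (names are
illustrative; this assumes a simple bounded learning table, not the
actual DECbridge90 firmware):

    ADDRESS_LIMIT = 200

    class WorkgroupPort:
        def __init__(self):
            self.addresses = set()  # source addresses learned on this port
            self.flooding = False   # set once the limit is exceeded

        def learn(self, src):
            """Learn a source address; trip Flood mode past the limit."""
            if self.flooding:
                return
            self.addresses.add(src)
            if len(self.addresses) > ADDRESS_LIMIT:
                # Too many stations behind this port: the bridge can
                # no longer filter reliably, so it floods everything.
                self.flooding = True

        def should_forward(self, dst):
            """Forward to the other port? In Flood mode, always."""
            return self.flooding or dst not in self.addresses

    port = WorkgroupPort()
    for n in range(250):                 # 250 stations > 200-address limit
        port.learn("08-00-2B-00-00-%02X" % n)
    print("flooding:", port.flooding)    # -> flooding: True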
The problem with a configuration such as the one you describe is that
the spanning tree might reconfigure such that the thinwire between hubs
becomes the backbone of the network, putting all the addresses on the
net onto the workgroup port of each bridge. The bridges would then go
into Flood mode and the spanning tree would reconfigure again... you
get the picture.
On the other point: the DECbridge90 can forward packets at full wire
speed. If there is a drop in speed it's coming from some other
problem.
|
561.2 | transit delay | WARNUT::2H0533::Tim_Banks | Stealth Marketing :-) | Tue Dec 14 1993 06:57 | 13 |
| Flooding as you described wasn't something I had considered, and I can see
why this config could potentially cause problems.
Thanks for that input.
On the bridge performance, what is the transit delay when crossing a bridge?
I knew the bridge could forward/filter at the full Ethernet frame rate,
but is there any significant delay when a packet transits a bridge?
Thanks
Tim
|
561.3 | Dual DECbridge 90 OK. | CGOS01::DMARLOWE | dsk dsk dsk (tsk tsk tsk) | Tue Dec 14 1993 13:33 | 21 |
| re .0

Dual bridges in a single hub do work, and there is enough power
for both plus a full complement of other modules. Daisy-chained
hubs, each with a DB90, also work just fine. The rules are:

1. Keep the workgroup under 200 addresses.
2. Each bridge must connect to the same backbone in such a way
   that a break in the backbone cannot cause backbone traffic
   from both sides to flow through the dual bridges.

For more details see reply 34.1 here and 703.x in the Ethernet_V2
notes file.
With LTM I have seen a file transfer between systems spike to 70%
utilization, usually averaging around 65%. With a DB90 between
the systems the spikes were eliminated and traffic ran at a very
consistent 62-65%, probably due to the store-and-forward nature
of the bridge. That's lower than 17%.
dave
|
561.4 | | QUIVER::SLAWRENCE | | Tue Dec 14 1993 14:26 | 12 |
| I passed on the performance question to Jeff Lomicka, who did more work
on it than anyone else; his response:
> Where there is no outbound congestion, the design of the bridge is to
> begin the process of retransmitting a packet a couple of clock ticks
> after the last byte of the FCS is received. Thus, there is a full
> packet-length latency. I don't know how you would express that as a
> percent. This is substantially less than software-based bridges that
> have to perform hashes and lookups on received packets after they
> arrive (DB90 makes the decision before the full packet is in), but
> substantially more than the "cut-through" bridges offered by, I think,
> Kalpana.
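To put rough numbers on that (my own arithmetic, assuming standard
10 Mb/s Ethernet framing; the cut-through figure is just the time to
receive the preamble and destination address, not a measured value):

    WIRE_BPS = 10_000_000     # 10 Mb/s Ethernet

    def frame_time_us(nbytes):
        """Time for nbytes to pass on the wire, in microseconds."""
        return nbytes * 8.0 / WIRE_BPS * 1e6

    for size in (64, 1518):   # minimum and maximum frame sizes
        print("%4d-byte frame: store-and-forward ~%.1f us"
              % (size, frame_time_us(size)))

    # Cut-through needs only preamble (8 bytes) + destination (6 bytes):
    print("cut-through: ~%.1f us regardless of size" % frame_time_us(8 + 6))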
|
561.5 | | LEMAN::CHEVAUX | Patrick Chevaux @GEO, DTN 821-4150 | Tue Jan 04 1994 08:56 | 13 |
| .4> > Thus, there is a full
.4> > packet-length latency.
In other words: 51.2us for a minimum-sized frame,
up to
1.2ms for a maximum-sized frame.

I assume that this is just transmission delay (latency) and that
throughput is not affected, i.e. if I had a station sending frames
back-to-back (with a 9.6us interframe gap) and no other traffic on the
LANs, the bridge would not drop any frames.

Am I correct?
|
561.6 | test it | QUIVER::SLAWRENCE | | Tue Jan 04 1994 13:59 | 3 |
| Assuming your math works out (I didn't check) that is correct.
If it really matters, do a test; we'd love to hear the results.
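For the record, a quick check of the arithmetic and the back-to-back
case (a sketch assuming a store-and-forward pipeline with at least one
frame of buffering per direction and equal-speed ports; not a
DECbridge90 measurement):

    FRAME_US = 64 * 8 / 10.0      # 51.2 us: minimum frame at 10 Mb/s
    MAX_US   = 1518 * 8 / 10.0    # 1214.4 us: maximum frame (~1.2 ms)
    GAP_US   = 9.6                # interframe gap

    # Back-to-back minimum frames arrive every FRAME_US + GAP_US = 60.8 us.
    # The bridge can transmit frame N while frame N+1 is still arriving,
    # so the output port is busy FRAME_US out of every 60.8 us:
    utilization = FRAME_US / (FRAME_US + GAP_US)
    print("output port utilization: %.0f%%" % (utilization * 100))  # ~84%

    # Each transmitted frame plus its own interframe gap takes exactly
    # 60.8 us, matching the arrival rate, so the bridge keeps up and no
    # frames are dropped; store-and-forward adds latency only.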
|