| On point 3, in general the answer is no. As you suggest in point 4, the
filtering rate is the rate at which packets can be removed from the ring, and
usually there will be a number of buffers on the ring. What will determine
the lost packet rate is simply the average packet rate. If the average packet
rate is 45,000 packets per second, then the lost packet rate will be 5000
per second. It is possible to come up with scenarios where the average is only
20,000 but all the traffic is concentrated in brief bursts, but that isn't realistic.
But, see below.
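To put numbers on that, here is a rough Python sketch using the figures from this
discussion; the burst duty cycle in the second case is made up purely for illustration.

    # Back-of-envelope: steady overload vs. rated capacity.
    capacity_pps = 40_000     # packets/s the bridge can remove from the ring
    offered_pps  = 45_000     # average arrival rate

    lost_pps = max(0, offered_pps - capacity_pps)
    print("lost packets/s:", lost_pps)            # 5000

    # Bursty case: a lower 20,000 pps average, concentrated into part of each second.
    average_pps = 20_000
    duty_cycle  = 0.25        # illustrative: all traffic arrives in 25% of the time
    burst_pps   = average_pps / duty_cycle        # 80,000 pps during the burst
    print("excess during burst:", max(0, burst_pps - capacity_pps), "pps")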
Another factor that could come into play is memory access time. If the memory
bandwidth isn't high enough to give each process deterministic access to it,
then not only the packet rate but also the data rate will determine
the performance. For example, if the memory bandwidth is only 3.3 mega longwords
per second (300ns), bursts of traffic on both FDDI and NI will lock out the
process making the forwarding decisions and doing translations.
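A back-of-envelope check of that 300 ns example (assuming 100 Mb/s FDDI, 10 Mb/s
Ethernet on the NI, 32-bit longwords, and counting only the writes of received
data into memory):

    mem_lw_per_s  = 1 / 300e-9      # ~3.33 million longword accesses per second
    fddi_lw_per_s = 100e6 / 32      # ~3.13 million lw/s just to store FDDI receive data
    ni_lw_per_s   = 10e6 / 32       # ~0.31 million lw/s for Ethernet receive data

    left_for_cpu = mem_lw_per_s - (fddi_lw_per_s + ni_lw_per_s)
    print("left for forwarding decisions: %.0f lw/s" % left_for_cpu)
    # comes out slightly negative: simultaneous bursts on both lines consume the
    # whole memory bandwidth and lock out the forwarding/translation process.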
A conservative design makes almost everything deterministic. The time to
look up an address has an absolute upper bound which is less than needed to
forward at the specified rate. The memory bandwidth is high enough to give
every process accessing memory the full bandwidth needed. And so on. This
is the design philosophy used, although there are probably some compromises.
The key is that when there is a compromise, exceeding a threshold doesn't
cause incorrect behavior, like discarding every packet or forwarding every
packet or failing to recognize changes in topology. These issues can't be
verified or validated with any certainty by testing. Testing will prove that
something doesn't meet spec, but won't prove that it does. The question is,
does the design ensure that the performance requirement will be met? While
I don't know the specific design, knowing some of the people involved, I'm
confident that the design is correct.
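To illustrate what a bounded lookup can look like (this is only a sketch of the
general idea, not the actual design): a binary search over a sorted table of
learned addresses makes on the order of log2(N) memory references in the worst
case, so the lookup time can be bounded and compared against the per-packet budget.

    import bisect
    from math import ceil, log2

    table = sorted([0x08002B000001, 0x08002B0000A7, 0x08002B001234])  # learned addresses

    def lookup(addr):
        # At most about ceil(log2(N)) probes, so the worst case is known in advance.
        i = bisect.bisect_left(table, addr)
        return i if i < len(table) and table[i] == addr else None

    print(lookup(0x08002B0000A7), "worst case ~", ceil(log2(len(table))), "probes")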
Now, the question is, what happens when a design rated at 40,000 packets per
second is given 45,000. The obvious question is, `does it stop forwarding
altogether?' This needs to be in the product spec! Does it continue to
handle topology changes when overloaded? If it can't handle topology changes
then it must stop forwarding, but that means that your LAN was partially
shutdown. If it continues to forward without being able to process topology
changes, then recovery will require powering off a bridge manually, because a very
likely cause of the overload is a bridge coming on line and forming a loop that
circulates packets at the maximum rate possible. Does its packet-dropping decision
result in more packets being retransmitted, which just maintains the overload?
I'd say that a 10/100 bridge that can only handle 40,000 packets per second
is a time bomb.
|
| Mike brings up the key point... if you have a bridge that can't handle
full-speed traffic, losing some of the data is actually the least of your
problems. The bigger problem is that you will almost certainly lose some
of your bridge control messages. If you lose more than a couple in a row,
various bridges will conclude that the topology has changed and will start
to adjust things, exactly as if a link had shut down. This is NOT a
theoretical problem: various large extended LANs at Digital (and presumably
elsewhere) have completely fallen apart on occasion due to the presence of
a misdesigned bridge somewhere in the extended LAN.
A bridge, in order to be usable, MUST be able to receive and act on all
the bridge control messages. A good way to do that is for it to receive and
keep up with worst case traffic. Failing that, it must at least sort the
control messages from the data messages at maximum rate; once it does that
it can get away with discarding some of the data messages.
The usual short-cut design does neither; it merely sorts and filters at
some lower rate. So long as you don't go over that rate, you're led to
believe that you have a good network. Cross the limit and all hell breaks
loose.
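A minimal sketch of the receive-path split argued for above: classify at line
rate, give control frames a queue that is always serviced, and let only the data
queue overflow. The bridge-group multicast address shown is the IEEE one; an
actual product may key on other addresses as well.

    from collections import deque

    BRIDGE_GROUP = bytes.fromhex("0180C2000000")  # IEEE bridge-group (BPDU) destination
    control_q = deque()                           # always serviced, never dropped
    data_q    = deque(maxlen=256)                 # bounded; overflow drops oldest data

    def receive(frame: bytes):
        # This classification must keep up with worst-case line rate.
        if frame[:6] == BRIDGE_GROUP:
            control_q.append(frame)
        else:
            data_q.append(frame)                  # data may be discarded under overload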
As for the data, while I say that bridges can get away with discarding
data messages, this isn't exactly what you want to happen! The whole point
of high-speed filtering is to make sure that the bridges have enough
throughput to be able to hang on to ALL the messages that need to be forwarded.
If they aren't designed to filter at max rate, then in traffic bursts you
will lose some data that should have been forwarded. This will cause a
drastic performance hit in higher layer protocols.
Incidentally, bursty traffic is the rule, not the exception. Especially with
client-server setups you can get a lot of that. Also keep in mind that
FDDI, because it uses a token, tends to increase the burstiness (traffic
backs up at the sender waiting for the token, then goes out "all at once").
paul
|
| While I agree with .1 and .2, be aware that it *is* possible to design
systems which cannot forward at the maximum datalink rate but which behave
correctly (and even pleasantly). It is difficult, it requires careful
design and it is *not cheap*. But it is possible.
For example, the product I am working on is designed to receive and filter
at the maximum data rate of the line (and select out and pass on control
traffic) even if it may not be able to forward at the maximum rate. To do
this requires (amongst other features) dedicated processors per line and so
it is fairly expensive.
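Very roughly, the structure described might be sketched like this (names and queue
sizes are mine, not the product's): a dedicated receiver per line filters and
classifies at line rate, control always gets through, and only the bounded
forwarding queue can drop.

    import queue, threading

    control_q = queue.Queue()              # control traffic is never dropped
    forward_q = queue.Queue(maxsize=1024)  # bounded: data may be dropped under overload

    def line_receiver(line_name, frames):
        for frame in frames:               # runs at the line's full rate
            if frame.get("is_control"):
                control_q.put(frame)
            elif frame.get("forward"):     # per-line filtering decision
                try:
                    forward_q.put_nowait(frame)
                except queue.Full:
                    pass                   # drop data, never control

    threading.Thread(target=line_receiver, args=("FDDI-0", []), daemon=True).start()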
I would be rather surprised if the equipment mentioned in .0 is designed
that way!
Graham
|