
Conference 7.286::fddi

Title:FDDI - The Next Generation
Moderator:NETCAD::STEFANI
Created:Thu Apr 27 1989
Last Modified:Thu Jun 05 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2259
Total number of notes:8590

130.0. "DECBridge 500 Performance - a winner ?" by SAC::PRUDEN_G () Mon Sep 03 1990 06:35

As the number of FDDI products on the market grows, product differentiation is
becoming more crucial. One area of vital concern is bridges. Looking at the
information we have been able to get about our competitors, their bridge
forwarding rates seem to be well below the DECbridge 500's 460,000 pps. A lot
of firms are quoting a filtering rate of 40,000 pps. It seems to me that this
is a similar situation to the LAN Bridge 100, which was faster than any other
bridge for years.

The questions I have are therefore around the performance of the DECbridge 500
and the impact of lower performance:

1) Have we tested the DECbridge 500 at full FDDI speed, i.e. 460,000 pps?

2) Have these figures been independently verified?

3) If a bridge only filters at 40,000 pps, does this mean that it may miss
   packets if, say, 20 come along back-to-back?

4) What determines the filtering rate, e.g. the speed at which buffers can be
   taken off and put back onto the LANCE buffer ring?

Any help may produce added sales,

	Thanks
							Gary
 
130.1. "Bridges and routers can be probabilistic only within narrow limits" by CVG::PETTENGILL (mulp) Mon Sep 03 1990 22:03
On point 3, in general the answer is no.  As you suggest in point 4, the
filtering rate is the rate at which packets can be removed from the ring, and
usually there will be a number of buffers on the ring.  What will determine
the lost packet rate is simply the average packet rate.  If the average packet
rate is 45,000 packets per second, then the lost packet rate will be 5,000
per second.  It is possible to come up with scenarios where the average is
20,000 but all the traffic arrives in sub-second bursts, but that isn't
realistic.  But, see below.
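The arithmetic in that paragraph can be sketched directly; the 40,000 and
45,000 figures are the hypothetical ones used in this note, not measurements:

```python
# Hypothetical case from the note: a bridge rated to filter at 40,000 pps
# is offered a sustained 45,000 pps.  Sustained loss is simply the excess.
FILTER_RATE_PPS = 40_000   # bridge's rated filtering capacity
OFFERED_PPS = 45_000       # sustained average packet rate on the ring

lost_pps = max(0, OFFERED_PPS - FILTER_RATE_PPS)
loss_ratio = lost_pps / OFFERED_PPS

print(f"lost packets/s: {lost_pps}")       # 5000
print(f"loss ratio: {loss_ratio:.1%}")     # 11.1%
```

Note this holds only for the sustained average; short bursts above the rated
rate are absorbed by the receive buffers on the ring.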

Another factor that could come into play is memory access time.  If the memory
bandwidth isn't high enough to give each process deterministic access to it,
then not only the packet rate but also the data rate will determine
the performance.  For example, if the memory bandwidth is only 3.3 mega
longwords per second (300 ns), bursts of traffic on both FDDI and NI will lock
out the process making the forwarding decisions and doing translations.
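As a rough check of that example (all figures assumed for illustration only):
a single 100 Mb/s FDDI receive stream by itself consumes nearly all of a
3.3 mega-longword/s memory, before the NI side, address lookups, or the
transmit path are counted at all:

```python
# Back-of-the-envelope check of the 300 ns memory example in this note.
CYCLE_NS = 300
LONGWORD_BITS = 32
mem_lw_per_s = 1e9 / CYCLE_NS            # ~3.33 million longword accesses/s

fddi_lw_per_s = 100e6 / LONGWORD_BITS    # FDDI receive DMA alone: 3.125 M/s
ni_lw_per_s = 10e6 / LONGWORD_BITS       # Ethernet (NI): 0.3125 M/s

# Raw receive DMA only -- no copies, no lookups, no transmit side.
demand = fddi_lw_per_s + ni_lw_per_s
print(f"memory supply: {mem_lw_per_s/1e6:.2f} M longwords/s")
print(f"DMA demand:    {demand/1e6:.2f} M longwords/s")
```

With both lines bursting, DMA demand already exceeds the memory's supply,
which is exactly the lock-out condition the note describes.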

A conservative design makes almost everything deterministic.  The time to
look up an address has an absolute upper bound which is less than needed to
forward at the specified rate.  The memory bandwidth is high enough to give
every process accessing memory the full bandwidth needed.  And so on.  This
is the design philosophy used, although there are probably some compromises.
The key is that when there is a compromise, exceeding a threshold doesn't
cause incorrect behavior, like discarding every packet or forwarding every
packet or failing to recognize changes in topology.  These issues can't be
verified or validated with any certainty by testing.  Testing will prove that
something doesn't meet spec, but won't prove that it does.  The question is,
does the design ensure that the performance requirement will be met?  While
I don't know the specific design, knowing some of the people involved, I'm
confident that the design is correct.

Now, the question is, what happens when a design rated at 40,000 packets per
second is given 45,000?  The obvious question is, `does it stop forwarding
altogether?'  This needs to be in the product spec!  Does it continue to
handle topology changes when overloaded?  If it can't handle topology changes
then it must stop forwarding, but that means that your LAN is partially
shut down.  If it continues to forward without being able to process topology
changes, then it will require manually powering off the bridge, because a very
likely cause is a bridge coming on line and forming a loop which circulates
packets at the maximum rate possible.  Does its packet-dropping decision
result in more packets being retransmitted, which just maintains the overload?

I'd say that a 10/100 bridge that can only handle 40,000 packets per second
is a time bomb.
130.2. by KONING::KONING (NI1D @FN42eq) Tue Sep 04 1990 11:00
Mike brings up the key point... if you have a bridge that can't handle
full-speed traffic, losing some of the data is actually the least of your
problems.  The bigger problem is that you will almost certainly lose some
of your bridge control messages.  If you lose more than a couple in a row,
various bridges will conclude that the topology has changed and will start
to adjust things, exactly as if a link had shut down.  This is NOT a
theoretical problem: various large extended LANs at Digital (and presumably
elsewhere) have completely fallen apart on occasion due to the presence of
a misdesigned bridge somewhere in the extended LAN.

A bridge, in order to be usable, MUST be able to receive and act on all
the bridge control messages.  A good way to do that is for it to receive and
keep up with worst-case traffic.  Failing that, it must at least sort the
control messages from the data messages at maximum rate; once it does that
it can get away with discarding some of the data messages.
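That sort-at-line-rate rule can be sketched as a toy two-queue classifier.
This is not any actual bridge's design; the destination address shown is the
standard 802.1D bridge group address used for spanning-tree control frames:

```python
from collections import deque

# Toy sketch of the principle in this note: classify at line rate, never
# drop control frames, shed data frames when the forwarding path is full.
CONTROL_DEST = "01-80-C2-00-00-00"   # 802.1D bridge group address

class BridgePort:
    def __init__(self, data_capacity):
        self.control_q = deque()             # control path: never shed
        self.data_q = deque()
        self.data_capacity = data_capacity   # frames the slow path can hold
        self.dropped = 0

    def receive(self, frame):
        # This classification step is what must keep up with worst-case
        # line rate, even when forwarding cannot.
        if frame["dest"] == CONTROL_DEST:
            self.control_q.append(frame)     # spanning-tree BPDUs etc.
        elif len(self.data_q) < self.data_capacity:
            self.data_q.append(frame)
        else:
            self.dropped += 1                # shed data, never control

port = BridgePort(data_capacity=2)
for dest in [CONTROL_DEST, "AA", "BB", "CC", CONTROL_DEST]:
    port.receive({"dest": dest})
print(len(port.control_q), len(port.data_q), port.dropped)  # 2 2 1
```

Even when a data frame is dropped, both control frames are kept, so the
spanning tree keeps converging under overload.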

The usual short-cut design does neither; it merely sorts and filters at
some lower rate.  So long as you don't go over that rate, you're led to
believe that you have a good network.  Cross the limit and all hell breaks
loose.

As for the data, while I say that bridges can get away with discarding
data messages, this isn't exactly what you want to happen!  The whole point
of high-speed filtering is to make sure that the bridges have enough
throughput to be able to hang on to ALL the messages that need to be forwarded.
If they aren't designed to filter at max rate, then in traffic bursts you
will lose some data that should have been forwarded.  This will cause a
drastic performance hit in higher layer protocols.

Incidentally, bursty traffic is the rule, not the exception.  Especially with
client-server setups you can get a lot of that.  Also keep in mind that
FDDI, because it uses a token, tends to increase the burstiness (traffic
backs up at the sender waiting for the token, then goes out "all at once").
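That back-up-then-burst effect is easy to put numbers on.  A rough sketch
with assumed figures (a station offering 10 Mb/s on average, seeing the token
every 8 ms; both numbers are illustrative, not from any measurement):

```python
# Why a token ring bursts: traffic queues at the sender between token
# arrivals, then drains at full line rate once the token is captured.
LINE_RATE = 100e6          # FDDI line rate, bits/s
OFFERED = 10e6             # station's average offered load, bits/s (assumed)
TOKEN_GAP_S = 8e-3         # assumed time between token arrivals

backlog_bits = OFFERED * TOKEN_GAP_S          # bits queued per rotation
burst_duration = backlog_bits / LINE_RATE     # time to drain at line rate

print(f"queued per rotation: {backlog_bits/1e3:.0f} kbit")
print(f"sent back-to-back in {burst_duration*1e3:.1f} ms at full line rate")
```

So a smooth 10 Mb/s average still arrives at the bridge as 100 Mb/s bursts,
which is exactly the worst case a filter rated below line speed cannot absorb.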

	paul
130.3. by MARVIN::COBB (Graham R. Cobb (Wide Area Comms.), REO2-G/H9, 830-3917) Thu Sep 06 1990 15:13
While I agree with .1 and .2, be aware that it *is* possible to design
systems which cannot forward at the maximum datalink rate but which behave
correctly (and even pleasantly).  It is difficult, it requires careful
design and it is *not cheap*.  But it is possible.

For example, the product I am working on is designed to receive and filter
at the maximum data rate of the line (and select out and pass on control
traffic) even if it may not be able to forward at the maximum rate.  To do
this requires (amongst other features) dedicated processors per line and so
it is fairly expensive.

I would be rather surprised if the equipment mentioned in .0 is designed
that way!

Graham