
Conference 7.286::fddi

Title:FDDI - The Next Generation
Moderator:NETCAD::STEFANI
Created:Thu Apr 27 1989
Last Modified:Thu Jun 05 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:2259
Total number of notes:8590

232.0. "The CERN multivendor FDDI pilot project" by BERN01::DEY (Walter Dey, EIS, Berne Switzerland) Fri Apr 05 1991 11:21

I am posting here a report from CERN CN/CS/CH-FDDI Status, January 21, 1991,
with the title
 
	The CERN multivendor FDDI pilot project

	from J. Joosten

This is posted without the author's permission. If you would like a hardcopy
(with the figures), please let me know.

Regards

	Walter.

	 
1.	INTRODUCTION

CERN, the European Laboratory for Particle Physics, has its seat in 
Geneva, Switzerland. The site extends over the Franco-Swiss border. 
CERN studies the basic subnuclear particles and forces of matter. It 
operates a complex of accelerators: a 28 GeV proton synchrotron (PS) and 
a 450 GeV super proton synchrotron (SPS), which can also be operated as 
a proton/antiproton collider at up to 900 GeV. The biggest accelerator 
in the world, the 2x51 GeV electron/positron collider (LEP), was 
officially inaugurated in November 1989 and has already produced 
important physics results. LEP will subsequently be upgraded to 
2x100 GeV.
        
The research programme at CERN involves the use of large particle 
detectors and extensive data processing facilities. CERN is funded by 
fourteen European countries and has a staff of about 3300. In addition, 
the laboratory is used for research projects by about 5000 scientists 
from universities and laboratories all over the world, mainly from the 
member states.
        
2.	 NOTES ON APPENDED SLIDES

Before talking about our FDDI project, it is useful to understand 
something about the CERN site and the general purpose network 
environment, which is based on Ethernets and MAC-level bridges. 
Although many routers and gateways to other networks are connected to 
it, they are not mentioned in this context.
         
Slide 1 is an overview of the entire site, including the LEP machine, 
which has a diameter of 8.5 km. The machine has four interaction 
regions (numbered PA2, 4, 6 and 8), where large experiments are taking 
place. A fibre infrastructure connects all the relevant areas together.
         
The "Meyrin site" looks insignificant on this picture, but here most of 
the offices and laboratories are located (including the computer cen-
tre). From the computer centre fibre optic trunks are fanning out to 
the different locations. These trunks were initially used for Ethernet 
and TDM (G703), but are now also used for FDDI. They consist of 50 
micron and mono-mode fibres and typical distances here are 1 km.
         
Slide 2 presents a simplified picture of our network before the 
introduction of FDDI: all the Ethernet segments are interconnected via 
MAC-level bridges to an Ethernet backbone (HRS) in the computer centre.
          
It also shows some topological information, required to understand how 
FDDI will be used at CERN. The numbers indicate distances in 
kilometres. All distances of less than 2 km can easily be dealt with by 
FDDI. For the longer distances (i.e. the LEP experimental areas) 
solutions have to be found.
         
Slide 3 gives the evolution of traffic and of the number of hosts seen 
on the Ethernet backbone on a mid-week afternoon. Before the 
introduction in April of FDDI as a (partial) Ethernet backbone, peaks 
of more than 60% load were common. Although the growth of traffic was 
fairly erratic, the trend is clear. The number of hosts shows, on 
average, a more or less linear increase.
         
In slide 4 protocol usage is given for our backbone (HRS) and a typical 
office segment (5131). It shows that CERN is a multi-protocol site: in 
fact we find about 30 protocol types on our network.
         
Another indication of the complexity of the site is presented in slide 
5, showing some 35 manufacturer codes for about 2150 active systems 
(seen by the bridges in the 15 minutes before the snapshot).
         
The problem we had to solve was how to introduce an FDDI ring as a 
replacement for the Ethernet backbone, keeping multi-protocol 
transparency, while allowing traffic between hosts directly connected 
to FDDI (using TCP/IP according to RFC 1188) and connectivity between 
Ethernet based hosts and hosts on FDDI (see slide 6). The solution is 
seen in so-called translating MAC-level bridges between Ethernet and 
FDDI. The use of encapsulating bridges would not allow multivendor 
connectivity between hosts on FDDI and Ethernet.
         
The possibility of communicating transparently between an Ethernet 
based host and a host on FDDI posed an interesting problem with the 
representation of addresses on either network; see slide 7 for a 
detailed description of the little endian and big endian address 
representations. Slide 8 shows how an ARP request from an FDDI host and 
an ARP reply from an Ethernet host would look on the two networks, 
connected with a MAC-level bridge. It also shows (cartouche) what would 
happen if the addresses in the data field were not in canonical format.
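
To make the bit-order issue concrete, here is a minimal sketch (the 
example address is illustrative): Ethernet transmits each byte least 
significant bit first (the canonical form), FDDI transmits it most 
significant bit first. A translating bridge bit-reverses the addresses 
in the MAC header, but it cannot touch addresses carried inside the 
data field (as in ARP), which is why RFC 1188 requires those to stay 
in canonical form.

def reverse_bits(byte):
    # Reverse the bit order of one byte: 0xAA (10101010) -> 0x55 (01010101).
    result = 0
    for i in range(8):
        result = (result << 1) | ((byte >> i) & 1)
    return result

def canonical_to_msb_first(mac):
    # How a 48-bit address looks when each byte is bit-reversed, i.e. the
    # difference between its Ethernet (canonical) and FDDI wire forms.
    return bytes(reverse_bits(b) for b in mac)

# Hypothetical address with the DECnet prefix AA-00-04-00 (canonical form):
print(canonical_to_msb_first(bytes.fromhex("aa0004001234")).hex("-"))
# -> 55-00-20-00-48-2c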
         
The translation process in a bridge also caused concern, but this has 
been solved as explained in slide 9. It was agreed either not to use 
OUI=0 on Ethernet, or to accept translation when going back from FDDI 
to Ethernet. However, it was found out later that AppleTalk phase 2 did 
not adhere to this convention and therefore did not work over 
translating bridges.
         
The way this problem has been solved is shown in slide 10, and a sketch 
of the resulting rule follows below. Every bridge will have a list of 
Type fields for which a packet with a SNAP header and OUI=0 does not 
get translated to Ethernet 2 format when passing from FDDI to Ethernet. 
In order to ensure the continued functioning of AppleTalk phase 1, its 
packets are passed over FDDI with a special SNAP encapsulation.
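
A sketch of that rule in Python, under assumptions: the exception list 
shown here contains the AppleTalk phase 2 (0x809B) and AARP (0x80F3) 
Type values as the obvious candidates, but the real list is a per-bridge 
configuration item.

SNAP_OUI_ZERO = bytes(3)     # OUI 00-00-00 marks "this was an Ethernet 2 frame"

# Type fields that must NOT be translated back to Ethernet 2 format.
NO_TRANSLATE_TYPES = {0x809B, 0x80F3}

def outgoing_format(snap_oui, ether_type):
    # Decide the Ethernet frame format for an FDDI SNAP frame crossing the bridge.
    if snap_oui == SNAP_OUI_ZERO and ether_type not in NO_TRANSLATE_TYPES:
        return "Ethernet 2"              # translate back to the original format
    return "802.3 with 802.2/SNAP"       # leave the SNAP encapsulation intact

print(outgoing_format(SNAP_OUI_ZERO, 0x0800))   # IP          -> Ethernet 2
print(outgoing_format(SNAP_OUI_ZERO, 0x809B))   # AppleTalk 2 -> stays in SNAP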
         
Another problem, encountered later, related to packets whose declared 
length in the data field did not match the actual length of the packet. 
This is for instance possible with IEEE 802.3 packets, and slide 11 
shows the different actions a bridge could take. The same problem can 
arise with IP packets on FDDI, where somewhere in the data there is a 
field giving the "Total Length" of the datagram. This field is used by 
a bridge when an FDDI IP packet has to be fragmented to fit into two or 
more Ethernet frames. The best approach would be that, in both cases, 
the length value in the datagram is taken as the reference and the 
actual packet shortened to fit it. If the actual packet is too short 
for this, it should be dropped.
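
A minimal sketch of that policy (purely illustrative, not any 
particular bridge's implementation):

def normalise_length(payload, declared_length):
    # Take the length declared inside the packet as the reference:
    # trim a packet that is too long, drop one that is too short.
    if len(payload) < declared_length:
        return None                       # too short: drop the packet
    return payload[:declared_length]      # shorten to the declared length

# An IP datagram whose Total Length says 40 bytes but which arrives padded
# to 46 bytes is trimmed back to 40 before any fragmentation takes place.
assert normalise_length(bytes(46), 40) == bytes(40)
assert normalise_length(bytes(20), 40) is None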
         
In order to introduce multivendor products at CERN, a reference ring 
has been set up, based on the AMD FASTcard interfaced to a PC-AT. This 
interoperability laboratory should of course follow the latest 
definition of the standard. The Pdemo package running on these systems 
proved to be very useful. However, Ping and Telnet are unfortunately 
not yet available.
         
First, two Apollo 10000s were connected and coexistence with the AMD 
FASTcards was verified. TCP/IP was tested between the two Apollos. 
Next, two IBM connections were introduced and TCP/IP between IBM and 
Apollo proved to be working straight away. Furthermore, a SUN has been 
successfully connected. All these systems were running SMT 5.1.
         
Apart from this, the Ethernet backbone has been partially replaced by 
another ring with concentrators and translating bridges from DEC. A 
powerful management system forms part of it. This ring has been 
operational since April and will initially function as a pure backbone 
for the segments on the Meyrin site, which are within the right 
distances for FDDI. The ring has a total length of 11 km and is 
entirely based on 50 micron fibres. The idea was that when both rings 
gave full satisfaction, they would be merged (see slide 12). In 
practice we found that a gradual move from the test ring to the 
production ring was more realistic. Currently only one IBM connection 
has been moved to the production ring, which runs a mixture of SMT 5.1 
and 6.2.
         
The replacement of an Ethernet segment by an FDDI ring as a backbone 
had a side effect on our scheme for quenching broadcast storms. We made 
a device that recognised a storm and started to collide on the source 
address of an offending system. One such device on the Ethernet 
backbone is sufficient, if one accepts that the bridge and the segment 
with the source of the storm get congested. On FDDI this cannot be 
done, hence the necessity to protect every FDDI bridge with a quencher 
(slide 13). The bridge manufacturers will have to do something about 
that. Of course, a broadcast storm originating on FDDI will still be a 
big problem for everybody. This is one of the reasons why we think that 
every host on FDDI should be connected via a remotely manageable 
concentrator capable of eliminating a connected host. It would be even 
better if a concentrator could detect such a problem in real time and 
take action itself.
         
More recently, two DEC 5000 workstations have been successfully 
introduced into the test ring. Slide 14 represents the latest 
configuration of this ring, also showing the analysis tools we were 
using. From components readily available on the market and some 
homemade hardware we were able to make a line level monitor with 
trigger capabilities, see slide 15. Furthermore, a feature of a bridge 
allowed us to forward local FDDI packets onto an Ethernet with a 
classic Ethernet monitor. Both systems proved to be invaluable in 
tracing problems and understanding the results of throughput 
measurements, protocol behaviour, etc.
         
The certification of the fibres gave some concern, since 5 dB was 
deducted from the 11 dB power budget, leaving a relatively small 
margin. However, recent tests revealed that by connecting 50 micron 
fibres directly to the optical transmitter and receiver, we were able 
to span 5 km and still have a comfortable margin. This can be explained 
by the superior bandwidth of 50 micron fibre (1-1.5 GHz) and the fact 
that a LED outputs a lot of power in the low order modes. The 5 dB 
safety margin can be considered too conservative in this particular 
case.
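
A back-of-the-envelope sketch of this reasoning; only the 11 dB budget 
and the 5 dB deduction come from the text, while the attenuation and 
connector figures are assumed typical values, so the numbers are purely 
illustrative.

BUDGET_DB       = 11.0   # FDDI PMD power budget (from the text)
PENALTY_50UM_DB = 5.0    # deduction applied for 50 micron fibre (from the text)
ATTEN_DB_PER_KM = 1.0    # assumed multimode attenuation at 1300 nm
CONNECTOR_DB    = 1.0    # assumed total connector loss

def margin_db(distance_km, penalty_db):
    # Optical margin left after fibre attenuation, connectors and the penalty.
    return BUDGET_DB - penalty_db - ATTEN_DB_PER_KM * distance_km - CONNECTOR_DB

print(margin_db(5.0, PENALTY_50UM_DB))   # 0.0 dB: looks too tight on paper
print(margin_db(5.0, 1.0))               # 4.0 dB: comfortable if the real
                                         # penalty is much smaller than 5 dB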
         
Slide 16 shows a real time map of the network, as generated 
automatically by our management system. The thickness of a line is a 
relative indication of the traffic crossing the bridges. The map also 
indicates that certain devices are down. A small window also shows 
traffic figures for the bridge connecting the FDDI backbone to the old 
Ethernet backbone (HRS). Note also an FDDI connection bypassing a 
Vitalink over a distance of 5 km (N866).
         
The LEP pits are at distances precluding a "standard" FDDI connection. 
Although equipment will be available to span long distances over 
mono-mode fibres, and 50 micron fibres seem to be feasible up to 7 km, 
there are reasons why we do not want to go in that direction: slide 17 
shows that we would already span 60 km just to connect the 5 "hot 
spots" at CERN together: the computer centre and the four LEP pits. We 
consider making such large rings bad engineering practice. It is, 
furthermore, not our policy to use the long distance fibres for special 
applications: in principle, TDM should be used.
         
In slide 18 some potentially different approaches are shown: the idea 
would be to keep small rings, locally interconnected with MAC-level 
bridges, and to bring the remote requirements to them via split bridges 
(or brouters) and TDM. No such products exist yet, but work is already 
being done in some places to provide solutions in this spirit.
         
Another point should not be overlooked. Affordable FDDI interfaces and 
concentrators are not sufficient; one also needs a cable plant. The 
effort to make FDDI work over shielded twisted pair has already 
resulted in cheap equipment that will profit from already installed 
cable plants. However, there is no twisted pair standard yet. This, and 
also the choice between 50 micron and 62.5 micron fibre with the 
possibility of "low cost" fibre-optic components, risks causing 
confusion and incompatibility. Furthermore, if a cable plant has to be 
installed, one should think about future higher speed networks, which 
basically means choosing fibre.

232.1. "Broadcast Meltdown Scenario" by BERN01::DEY (Walter Dey, EIS, Berne Switzerland) Sat Apr 27 1991 05:36
I got a revised report from Joop Joosten, dated April 16, 1991.

New items are:

Managing an FDDI Broadcast Storm or SNMP Scenario for Disaster?
________________________________________________________________


Assume the following "perfect" network:

* Only (SNMP) managed concentrators on the dual ring

* All hosts are slaves (SAS) on concentrator master ports

* The network management station (NMS) is broadcast storm-proof

* The NMS knows every host and its concentrator port

	SNMP USES IP => CONC. HAS TO PROCESS ARPs
		     => CONC. HAS TO PROCESS BROADCASTS

The following scenario then unfolds:

* A host starts a broadcast storm (say 5000 packets/sec)

* The NMS finds out who started the broadcast storm

* The NMS sends a command to disable the concentrator port of that host

* The command is not executed, for two possible reasons:

	1) The concentrator CPU spends all its time in the driver
	   handling packets: no cycles left to execute the command

	2) The concentrator CPU ran out of buffers: command lost


W E	L O S T 	C O N T R O L 	O V E R 	T H E 	N E T W O R K
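
A toy model of this failure mode: the 5000 packets/sec figure comes 
from the scenario above, while the concentrator CPU and buffer 
capacities are invented for illustration.

STORM_PPS  = 5000   # broadcast packets/sec from the offending host (from the scenario)
CPU_PPS    = 4000   # packets/sec the concentrator CPU can service (assumed)
RX_BUFFERS = 64     # receive buffers in the concentrator (assumed)

def disable_command_executed(storm_pps, cpu_pps, rx_buffers):
    # The SNMP "disable port" command is just one more inbound packet
    # queued behind the storm traffic.
    backlog = storm_pps - cpu_pps
    if backlog >= rx_buffers:
        return False   # reason 2: buffers exhausted, the command packet is dropped
    if cpu_pps <= storm_pps:
        return False   # reason 1: all CPU cycles go to the driver, the agent never runs
    return True

print(disable_command_executed(STORM_PPS, CPU_PPS, RX_BUFFERS))   # -> False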