
Conference netcad::hub_mgnt

Title:DEChub/HUBwatch/PROBEwatch CONFERENCE
Notice:Firmware -2, Doc -3, Power -4, HW kits -5, firm load -6&7
Moderator:NETCAD::COLELLADT
Created:Wed Nov 13 1991
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:4455
Total number of notes:16761

34.0. "HUB management competition" by EMDS::SEAVER (LENAC Net Mgnt Mktg 223-4573) Mon Mar 02 1992 14:56

14 Feb 92-- Competition.tmp

SELLING DEChub 90 management AGAINST THE COMPETITION

SUMMARY:  This document explains how to sell DEChub 90 management against the
 competition.  The first page outlines the major selling strategies and ways to
 handle competitive issues.  The second page works through an example with
 Cabletron, while the third page provides backup data for all major vendors.
 Please let me know if you find any errors or omissions in this document, and
 also what street prices you are seeing, since I intend to update it as more
 information becomes available.

	Good selling!
	Bill Seaver
	EMDS::SEAVER

STRATEGIES FOR SELLING against the competition (Updated versions of this can
	be found on ENGINE::$1$DUA4:[smart_hub90.management] )

BACKGROUND: SNMP graphical management of a hub is usually a check-off item. 
 Many, if not all, customers seem to know little more about management than 
 that they want it, because it seems easier than a command line interface.  

PRICE: The DEChub 90 has the lowest price per port for management of any major 
 vendor, even when the bridge is included in the cost.  (David, HP, and AT&T
 are supposed to be lower; I have not calculated their prices yet.)  See the
 attached example or the backup data on the next page for details.  Note: this
 is based on a price of $3000 apiece for the DECagent 90 and the HUBwatch
 software, which is higher than the price we are proposing to PAC.

MORE RELIABLE DESIGN:  All vendors (except BICC) have a single point of failure
 (usually the retiming/management module) which, when it fails, will take down
 the whole hub so that no port can talk to any other port on the hub.  DEC's
 repeaters are full repeaters, not concentrators; so even if the bridge
 fails, the ports on the hub can still talk to each other.  If the DECagent 90
 fails, it has no effect on network operations.  If your customer is really
 concerned, a second bridge can be mounted in the hub, leaving only the cable
 coming to the bridges as a single point of failure.  (Both bridges MUST come
 off of the same cable with nothing in between the connections).

I DON'T NEED/WANT TO PAY FOR THE BRIDGE:  
-IT'S FREE! The bridge is essentially free since it is included in the price
 per port (see above), and ours is the lowest price per port.
-IT INCREASES NETWORK RELIABILITY by isolating the hub from the network.  This
 type of conservative design keeps backbone problems, such as collisions and bad
 packets, on the backbone and hub problems in the hub.  It also decreases
 traffic by isolating traffic between hub ports from the network and prevents
 the hub from being overloaded by traffic on the backbone.
 

I WANT THE AGENT TO MANAGE THE REPEATERS.  If the agent is to manage the
 repeaters, it must sit in the hub, taking up a slot, and even then it will
 only be able to manage repeaters in that hub.  Although we are thinking
 about offering this functionality in the future, the savings will be $1200 (at
 the most) and the network will be less reliable (see above).

I CAN GET CHEAPER STAND ALONE REPEATERS.  Yes, you can; but I thought we were
 talking about hubs.  The cheaper repeaters are stand-alone units that can
 never be added to a hub and thus cannot reap the structured wiring,
 management, and other benefits of a hub.

YOUR MANAGEMENT DOES NOT OFFER THE FEATURES OF .....
-BAIT AND SWITCH: Are we talking the same type of system?  A workstation system
 is always going to be better than a PC based system.  Our PC based system,
 HUBwatch for Windows (TM) is exceptionally easy to use if you are familiar with
 Windows.  If you want a more powerful (and, as a downside, more complex)
 system, consider MSU or DECmcc.
-GRAPHING PORT TRAFFIC:  DEC feels that ports tend to fail quickly, so
 looking at a graph (even assuming you picked the right one) would not be very
 helpful.  Instead, we tell you why the port failed (see the repeater detail
 screen; color slides on ENGINE::$1$DUA4:[SMART_HUB90.MANAGEMENT]), so you can
 correct the problem.  This is part of our making management simpler to use.
-ARTIFICIAL INTELLIGENCE:  What are we really getting here?  I find this a
 fancy term for getting suggestions from a help screen.  We have help screens
 if your customer wants to use them.  Also, we try to design the network so it
 takes care of itself and is easy to use by having auto-partitioning repeaters,
 bridges to isolate hubs, etc.  When we find a real use for artificial
 intelligence, we will implement it; for we have over ten years of experience
 using AI.

********* This page is DIGITAL CONFIDENTIAL ***********************************

Do not show this page in toto to your customer.  Please use it as background
information for selling situations.


EXAMPLE- how to use the back up data

This page explains how to use the points covered in the previous page and the
data on the following page to deal with a competitive situation with Cabletron.

In this case I am comparing the Cabletron MMAC8 to a double (daisy-chained)
DEChub.  The price per 10baseT port of management, with all available slots
filled, is $62 for Cabletron and $55 for Digital.  For Cabletron, the cost of
management is an IRM-2 module plus PC-based software.  For Digital, it is a
bridge plus 1/4 of a DECagent 90 (one agent can handle at least 4 double hubs)
plus the PC-based software.  I used the PC-based software because it is the
lowest-cost option and thus the obvious choice when the customer is interested
in low price.  Note that the comparison actually flatters Cabletron: its
per-port figure is spread over more ports (168 vs. 120), so for the same
number of ports Cabletron would be even more expensive relative to Digital.

The single point of failure in the Cabletron hub is the IRM module.  Without it
the 10baseT ports cannot even talk to each other.  A second IRM module could
not be added for redundancy, because it does retiming for the 10baseT ports and
only one module per hub can do this.  With the DEChub 90, the single point of
 failure is the DECbridge 90.  Even when it fails, since the DECrepeaters are
 full repeaters, ports in the hub can still talk to each other.  If the customer
really wants a reliable system, they can put two DECbridges in a hub.  Then the
only single point of failure will be the wire connecting the bridge to the
backbone.

DATA (extracted from following page)

			CABLETRON	DIGITAL	 NOTES	

Product			MMAC 8		DEChub 	 DEC= 2 hubs daisy-chained

Max Ports/ lg hub	168		120

Management module(s)	IRM-2		DENMA    one DENMA can handle 4 hubs
					DEWGB

Cost of mgnt modules	5500		3600   	(2850+3000/4)

Cost of mgnt software	4995		3000   	Only look at PC if looking for 
						 lowest cost

Total mgnt cost		10495		6600

Cost/port of mgnt	62		55

Single point of failure= IRM
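
As a cross-check of the figures above, here is a minimal sketch in Python (my
own construction, not a Digital pricing tool; the list prices are the ones
quoted in this note):

    # Re-derive the cost-per-port numbers from the table above.
    def cost_per_port(module_cost, software_cost, max_ports):
        total = module_cost + software_cost
        return total, total / max_ports

    # Cabletron MMAC8: IRM-2 module + PC software, 168 ports maximum.
    ctron_total, ctron_pp = cost_per_port(5500, 4995, 168)        # 10495

    # Digital double DEChub: DECbridge 90 (DEWGB @2850) + 1/4 of a
    # DECagent 90 (one $3000 agent handles 4 double hubs) + PC software,
    # 120 ports maximum.
    dec_total, dec_pp = cost_per_port(2850 + 3000 / 4, 3000, 120)  # 6600

    print(round(ctron_pp), round(dec_pp))   # 62 55

The same function reproduces the "/port at max" row in the vendor table on
the next page.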

Spectrum is Cabletron's high-end management system.  Spectrum features are
often used to sell Cabletron, but few customers buy the system at $50K-150K.
In fact, few Cabletron people know the cost of the system. 

EASIER TO MOVE AROUND IN: Cabletron's PC-based management system (and many
others) is based on HP's OpenView.  That system requires the user to select
from another menu to see a detailed view of the hub, instead of just
double-clicking on the icon as with HUBwatch.  Double-clicking is also what
one would expect to do if one were used to MS-Windows.

********* This page is DIGITAL CONFIDENTIAL ***********************************

Do not show this page in toto to your customer.  Please use it as background
information for selling situations.


GRAPHICAL, SNMP MANAGEMENT of Hubs (also called intelligent concentrators)

Manufacturer	BICC (6)  Cabletron  Chipcom  Digital   SynOptics  Unger'n-Bass
		ISOview   	     ONline   DEChub 90 LattisNet  Access/One

GENERAL DATA ****************************************************************
Small Hub	EtherCom4  MMAC3     5006C    DEHUB     3030-01    ASE 3000
Slots *		4	   3	     4	      7		3	   4
Price		1600	   750	     2295     890	1395	   2095

Large Hub	EtherCom10 MMAC8     5017C    2 DEHUB	3000-01	   ASE 7000
Slots *		10	   7	     15	      15	11	   10
Price		2390	   850	     4950     1780	2750	   2495

Concent'r **	1201-6	   TPMIN-24  5108M    DETMR	3308	   ASM310
Ports 		12 #	   24 (8)    8	      8 #	12	   12
Price    	1800	   3795	     1600     1545	1495	   2495

Ports/ lg hub   120	   168	     120      120	132	   120

MANAGEMENT DATA *************************************************************
M'gnt Module	1203	   IRM-2 @   5100M @* DENMA (3) 3313-02A @ ASM-7000 @
Price		2000	   5500	     2950     3600 E	5895 (9)   3350 (2)

PC Software	5500 (5)   4995	     7000 (7) 3000 E	5500	   7500 (1)
Wk Station Sw	N/A	   6495 (10) N/A  (4) 3000 E	6995	   N/A

TOTAL COST ******************************************************************
PC 		7500	   10495     9950     6600 (3)	11395	   10900
/port at max	63	   62	     83	      55 E (3)  86	   91
Work Station 	N/A	   12K       N/A (4)  13.6K(3)  12890	   N/A
/port at max	N/A	   71 	     N/A (4)  113E (3)  98	   N/A

---------------- Notes -----------------------------------------------------
* Slots that are free to install 10baseT concentrators (net of m'gnt cards)
** 10baseT module
# Can run as stand alone repeater (no single point of failure)
@ This module is a single point of failure: when it fails, all concentrator
   ports are left unable to pass any traffic
@* Single point of failure is called the controller module

(1) OS2 based.  Price is for fewer than 250 ports; higher prices for more
     ports.  Feb 92: released a hub view (which I could not see); before
     that, the display stopped at the icon level.
(2) Non-buffered version.  Buffered= $4550.  Can add ASM500 @1395 for
     redundant backbone access.
(3) Need one for every 4 double hubs (16 slots).  Also need bridge (DEWGB)
     @2850.  E= Management prices are estimated high (3000) since we have not 
     yet been to PAC.  Work station cost includes 7K for MSU.
(4) Sun Net Mgr based management coming @3950 +3K for SUN Net mgr.
(5) OS2 based.  Costs $5000 for the kernel + $500 for the repeater module.
     The CMIP version has a better graphical interface.  Note: OS2 is
     multi-tasking.
(6) Prices are before 3Com purchase of BICC
(7) $6500 for license for up to 12 hubs & 500 for firmware distn kit
(8) Also available in 12 port model (TPMIN-22 @2275)    
(9) Advanced management- basic cannot show picture of the cards.
(10) SUN Net Mgr version 3495+3K for SUN Net mgr.  Spectrum costs 50K-150K.


********* This page is DIGITAL CONFIDENTIAL ***********************************

Do not show this page in toto to your customer.  Please use it as background
information for selling situations.
    
34.1. "More on dual bridges" by EMDS::SEAVER (LENAC Net Mgnt Mktg 223-4573) Thu Mar 05 1992 08:24

               <<< MOUSE::USER$:[NOTES$LIBRARY]ETHERNET.NOTE;1 >>>
                                -< Ethernet V2 >-
================================================================================
Note 703.3                   questions for smarthub                       3 of 9
PRNSYS::LOMICKAJ "Jeffrey A. Lomicka"               162 lines   8-APR-1991 10:59
                        -< More answers than you need >-
--------------------------------------------------------------------------------
Hi, I'm one of the engineers on the DECbridge 90.  I will try to answer
your questions.

>    1.  What is the reason for the maximum of 200 nodes in the workgroup LAN
>    ? Is it the memory constraint ?  What happens if there are more
>    than 200 workstations (lower performance ?) ?  Do we need to count
>    the DECserver 90L, the DECrepeater 90C, 90T and even the DECbridge
>    90 itself as an entry as well ? 

The 200 node limit is a memory constraint.  The value was chosen as a
"reasonable" cost/feature trade-off.  The DECbridge 90 uses a 256-entry
content addressable memory to implement the address table.  We use 56 of
these to implement the protocol filters, our own station address, the
addresses reserved by Bridge Architecture, and other internal stuff, and
leave 200 for the work group.

If there are more than 200, the "work group size exceeded" lamp lights.
The work group size exceeded counter is incremented when a message arrives
from an address that cannot be learned due to the 200 node limit.
The 201st is not entered into the table.  This way, it is the NEW station
that suffers, not any of the existing stations.

What happens is that messages from the backbone that are destined for the
new station(s) will not be received.  Performance of existing stations is
not affected.

To get the 200 count, you count every station address.  The bridge itself
does NOT count.  The DECserver 90L counts as 1.  The DECrepeater units do
not count, nor would DEMPR's or DELNI's or similar things.  Bridges in the
work group (officially not allowed, but keep on reading) would count as
2, because they have two addresses.  Everything else counts as 1, but you
may want to be a little bit careful with counting LAVC workstations, as
they have one address when they boot, and another after DECnet is started.
If you boot an entire work group of 150 workstations at once, you may have
a transient condition (lasting the "age time", 900 seconds by default)
where your work group size is exceeded (to 300) due to this use of two
addresses.  If you only boot 50 at a time, there would be no problem.
(Rumor has it that this DECnet behavior goes away with Phase V.)

Note, by the way, there is NO LIMIT on the backbone side.  You could have
100,000 or more stations out there, and the DECbridge will still be happy.
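
(A toy sketch of the learning behavior just described, in Python; my own
illustration of the mechanism, not the DECbridge 90 firmware:

    class WorkGroupTable:
        # 256-entry CAM minus ~56 reserved/internal entries leaves 200.
        LIMIT = 200

        def __init__(self):
            self.table = set()
            self.size_exceeded = 0   # "work group size exceeded" counter

        def learn(self, addr):
            if addr in self.table:
                return True
            if len(self.table) >= self.LIMIT:
                self.size_exceeded += 1   # the NEW station suffers
                return False              # 201st address is not entered
            self.table.add(addr)
            return True

    # The LAVC boot transient: 150 workstations, each seen under a boot
    # address and then a DECnet address until the 900-second age time
    # expires -- 300 addresses against the 200-entry limit.
    wg = WorkGroupTable()
    for n in range(150):
        wg.learn(f"boot-{n}")
        wg.learn(f"decnet-{n}")
    print(wg.size_exceeded)   # 100 addresses could not be learned

Booting 50 at a time keeps the transient population under 200, so nothing
is refused.)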

>    2. What is the reason that a normal bridge (eg LB200) cannot be
>    placed between two DECbridge 90 ?  What is the meaning of 'end-node
>    bridge' ?  Does that imply the DECbridge 90 is not spanning tree 
>    compliant, or that only a subset of spanning tree has been implemented in 
>    it ?  Does it support spanning tree, 802.1d or both ?  If only
>    802.1d, does it imply we cannot use it in a spanning tree network
>    (eg. network with LB100) ?

You have to be careful how you quantify "between".

By "end node bridge" we mean "no bridges connected to the work group
side".  You can connect anything you want to the backbone side.  The
reason for this is the 200 node limit (I'll get into this).
The spanning tree in the DECbridge 90 is a full Epoch 2 algorithm.  (Both
DNA and 802.1d.) You can use it with both LANbridge 100 and with IEEE
bridges, no problem.

The preferred configuration is:


(backbone A, over 200)	+---------+  (backbone B, over 200)
        --------+-------+  LB200  +-----+-----------+--------
                |       +---------+     |           |
            +---+---+               +---+---+   +---+---+
            | '90 A |               | '90 B |   | '90 C |
            +---+---+               +---+---+   +---+---+
                |(Work group A)         |	    | (Work group C)
        --------+------         --------+---	----+------------
				(work group B)	       

The problem is that you cannot set up a condition where there is ANY
POSSIBILITY that you may route traffic between two backbones through the
work group.  Don't be tempted to redundantly feed one work group from two
backbones like this:

Really bad thing to do:

(backbone A, over 200)	+---------+  (backbone B, over 200)
	--------+-------+  LB200  +-----+--------
		|	+---------+	|
	    +---+---+		    +---+---+
	    | '90 A |		    | '90 B |
	    +---+---+		    +---+---+
		|(Work group, under 200)|
	--------+-----------------------+---------

Initially, the spanning tree will select one of '90A or '90B to go into
backup mode, and it will work fine.  Then one day somebody trips over the
power cord to the LB200, and suddenly you find both DECbridge 90 units are
forwarding, and both have over 200 nodes in their work group!

Party line:
	You may not connect any bridges to the work group.  Redundant
	bridging is not supported.  This prevents anyone from configuring
	the above scenario, accidentally or otherwise.

Real truth:
	A:  If there are fewer than 200 nodes TOTAL in the ENTIRE extended
	LAN, you can do whatever you want with DECbridge 90 units, just as
	if they were full bridges.

	B:  You can put all the bridges you want in the work group, so
	long as the total count of stations (counting bridges as two) does
	not exceed 200 nodes, and none of these bridges attach to anything
	outside the work group.  The work group MUST be a CLOSED AREA, for
	which the only way in is via one DECbridge 90.

	C:  There is one "almost safe" redundant bridge situation.
	However, the only thing this redundant configuration protects you
	against is the loss of power to, or failure of, the WGB itself.  It
	REQUIRES that both bridges are fed from taps on the SAME backbone
	cable.

	(backbone A, over 200)
        --------+-----------+--------   Both bridges attached to the SAME
                |           |		CABLE!!!  No equipment of ANY KIND
            +---+---+   +---+---+	between them.  No repeaters, no
            | '90 A |   | '90 B |	bridges, nothing.  SAME CABLE.
            +---+---+   +---+---+
                |	    |
        --------+-----------+--------
	(work group A, under 200)

	This is based on the idea that there is no possible failure mode
	where both bridges would be forwarding at the same time.

Remember, although these configurations will work, there is danger that
if they are later re-configured so that a path exists out of the work
group other than one DECbridge 90, a network failure that reconfigures the
spanning tree through the work group will blow away the 200 node limit,
and the network will become unusable.  You are best advised to play by the
rules, using DECbridge 90 only when no bridges are required in the work
group, and using the LANbridge 150 or 200 for all other cases.

>    3. How can we set up a redundant path for the workgroup LAN ?  For
>    the normal Lan bridge family, we can do so by simply adding an
>    additional bridge in parallel with the primary bridge.  How can
>    we do that for DECbridge 90 ?  

See "C" above.  Note that you MUST allow the backbone wire to be a single
failure point.  You cannot do FULL REDUNDANT networking with '90s.  You
can, in an unsupported way, protect against independent power failure of
the '90's or of the '90 itself.

>    4. If there is no DECbridge 90 installed in the DEChub, can we connect
>    a normal LAN bridge (say, LB150) to one of the segments of the
>    DECrepeater 90C ? 

Yes.  You can do that.

>    Simply speaking, can we say DECrepeater
>    90C = DEMPR and DECrepeater 90T = DETPR ?  If so, does that mean
>    it is time to say goodbye to DEMPR and DETPR ?

Well, the DEMPR and DETPR have an AUI port; the 90T and 90C do not.  The
DEMPR has 8 outgoing ports, the 90C has 6.  The older repeaters still have
application.  There is no good way to connect a 90C or 90T to a thickwire
without introducing another repeater or a bridge with an AUI port.  (You
can't do it with, for example, a DESTA.  It's the wrong end of the AUI
cable.)  I don't think it's time to retire the DEMPR or DETPR for that
reason.

 Keyword              Note
>DECBRIDGE90          703.0, 710.0, 733.0, 744.3, 771.0, 775.1, 786.0, 795.0,
                      798.0, 803.0, 812.0, 819.0
 End of requested listing
    
34.2. "Problems with Cabletron throughput" by EMDS::SEAVER (LENAC Net Mgnt Mktg 223-4573) Thu May 28 1992 13:23

This document can be found in the public directory
EMDS::$1$DUA4:[SMART_HUB90.MANAGEMENT]CABLETRON.TXT

Subj:	Cabletron Hub Timing Problems

This can be used for competitive selling against Cabletron.  Our design of
repeaters and management does not have this problem.

DESCRIPTION OF PROBLEM: 

Poor network performance under moderate load,  possible lost packets.

PROBABLE CAUSE:

It appears that the individual repeater modules must contend for access to the
IRM module, which provides the actual retiming/signal-amplification function
for the MMAC hub.  To accommodate traffic jams, the IRM module provides
buffers, which slow performance.  However, since the buffers are of limited
size, even at moderate Ethernet loads the buffer space may be exceeded, which
means packets are dropped and must be retransmitted, further degrading
performance.  The loss of packets is identified by the error message "no
resource available".

Field information indicates that the MMAC-3 has seen excess collisions at 3-4%
sustained load and the MMAC-8 is overwhelmed beyond 10-20% sustained load. 
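
To see why a small shared buffer gives up at moderate average load, here is a
toy queue simulation (entirely my own construction; the service rate and
buffer size are arbitrary stand-ins, not measured Cabletron parameters):

    import random

    def drop_rate(load, service=0.25, buffer_slots=3, ticks=200_000, seed=42):
        # One slow shared server (the retiming stage) with a tiny buffer,
        # offered Bernoulli packet arrivals at the given load.
        rng = random.Random(seed)
        queued = arrived = dropped = 0
        for _ in range(ticks):
            if rng.random() < load:
                arrived += 1
                if queued < buffer_slots:
                    queued += 1
                else:
                    dropped += 1          # "no resource available"
            if queued and rng.random() < service:
                queued -= 1
        return dropped / arrived

    for load in (0.05, 0.10, 0.20):
        print(f"{load:.0%} offered load -> {drop_rate(load):.1%} dropped")

Even with these made-up numbers, the drop rate climbs from well under 1% to
double digits as sustained load moves through the 5-20% range, which matches
the shape of the field reports above.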

DEChub 90 SOLUTION:

At one account where Cabletron repeaters were replaced with DE*MR repeaters,
system performance improved so significantly that users called the system
manager to ask what had been done to the network.



DETAILS FROM THE FIELD.  

                  I N T E R O F F I C E   M E M O R A N D U M

                                        Date:     05-May-1992 07:58am CDT
                                        From:     TONY REMBOWSKI
                                                  REMBOWSKI.TONY
                                        Dept:     SOFTWARE SERVICES
                                        Tel No:   713-953-3754

Subject: Cabletron vs DECrepeater 90's at ***

                            LAVC Transition History
    
    	The following information is based on Tony Rembowski's involvement 
    at *** concerning Local Area VAX Cluster (LAVC) transitions.
    
    PROBLEM
    
    	The indications were: the LAVC running slow, unexplained losses and 
    regains of LAVC nodes, and unpredictable LAVC transitions.
    Timing issues?
    
    
    BASE LINE CONFIGURATION
    
    	The VAX cluster resides on the FDDI ring and has an IEEE 
    802.3/Ethernet connection also.  *** is utilizing a 10Base2 (Thinnet) 
    over twisted pair IEEE 802.3/Ethernet implementation.  The following is 
    a basic configuration for discussion purposes; it does not represent 
    the total configuration:
    
    VAXcluster-->FDDI Concentrator-->FDDI 620 Bridge-->ThinWire-->Cabletron 
    MMAC3-->Pair Tamer-->Twisted Pair Wire-->Pair Tamer-->Workstation
    
    It should be noted an IEEE 802.3/Ethernet segment is attached to the 
    cluster and provides another path for LAVC traffic.
    
    
    RESOLUTION TEAM MEMBERS
    
    	    	Digital Members:  Czarena Siebert, VAX Cluster Engineering, 
    			  Local Digital Services,  FDDI Engineering, Tony 
    		 	  Rembowski
    
    ACTION LOG
    
    Week of January 20
    
    	o  Cabletron MMAC3 identified as network traffic bottleneck.  ***
           upgraded the IRM module to an IRM2 for its performance improvements.
           Performance improved and IRM2 module provided more operating 
           information.  IRM2 logs No Resource Available, i.e., lack of 
           buffer space.  MMAC3 still appears to be a bottleneck.
    
    	o  Digital loans a DECrepeater 90C to *** for evaluation;  
           performance improvements are noticed immediately. 
    
    	o  Digital recommends *** evaluate the Network Professor System. 
           During the evaluation period excessive collision conditions 
           were noted.  Evaluation period: January 24th - February 12th.
    
    	o  FDDI and BI adapters repositioned on the XMI in the 6440's to 
           determine if performance would improve; no significant change 
           was noted.
    
    
    Week of January 27
    
    	o  Phone support provided by Technically Elite Concepts January 29.
    
    	o  Begin acquisition process of DEChub 90's and DECrepeater 90C's 
           to replace Cabletron MMAC3's.
    
    	o  Digital provides quotation for onsite Network Consulting 
           Services.
    
    	o  Network performance suspect; LAVC losses and regains can be tied 
           to network errors (the IRM2 and the DECrepeater both reduced 
           the occurrences).
    
    
    	Six Cabletron MMAC3's were totally replaced with DEChub 90's and 
    DECrepeater 90C's, 130 connections.  To verify that the MMAC3 is a 
    bottleneck, a single VAXstation 3100 was attached to one; the user noticed 
    slower response immediately.  Slower response is described as slower 
    screen updates, file gets from the server taking longer, applications 
    starting slower, and windows opening slower.
    
    	There are 10 segments in this facility.  The six LAVC legs 
    experience about 20% sustained traffic with peaks of 60%; the remaining 
    segments sustain about 6% utilization with peaks of 30%.  While 
    investigating performance issues I found a note in the Ethernet notes 
    conference that stated the MMAC3's have performance problems with 
    sustained traffic of 3-4%.  Cabletron technical support stated the 
    MMAC8's may experience similar conditions with sustained traffic of 
    10-20%, i.e., reconfiguration may be required.
    
    	The team does not have hard numbers to prove that Cabletron's MMAC3 
    and MMAC8 concentrators add significant delay under heavy load 
    conditions (if 6% to 20% even counts as heavy load).  But the user 
    community's phone calls after installation of the DEChub and 90C's 
    ("What did you do to my workstation? It's running faster!") left the 
    team with the impression that the Cabletron concentrators do add 
    significant delay.  Another indicator: the losses and regains of LAVC 
    nodes have virtually stopped.