| > 1.) Should I use 2 Learning and 2 Spanning Tree Domains, both
> comprising the same group of physical ports ?
They aren't called spanning tree domains anymore, they're called
"logical bridge domains". Not knowing very much about the application,
it looks like a logical choice to me.
When you say "the same group of physical ports", I assume you mean
the same group of systems with different physical ports.
> 2.) With this configuration, will it never be possible to
> bridge between these domains (as desired), provided that the
> config is o.k. and no bridging is enabled on the DECnet
> router ?
Never is a long time. The current firmware does not allow any
communication between logical bridge domains. There are no plans to
change this, but it could probably be done if there was a business
justification.
> 3.) What is the preferred configuration tool ? I'm having
> trouble compiling MIBs with ManageWORKS (on Windows NT),
> and a POLYCENTER NetView station cannot be moved very well.
> I have heard rumours that clearVISN 6.0 supports multiple
> spanning trees on the GIGAswitch. Is this true ?
Most customers using the more complex features of the
GIGAswitch/FDDI system use a combination of MultiChassis Manager and
their favorite SNMP manager.
You will generally find more information about MultiChassis Manager
in the KALI::HUB_MGNT notes conference.
The version I am using (6.0a) looks like it supports logical bridge
domains.
MDL
|
| Thanks for your reply!
>> 1.) Should I use 2 Learning and 2 Spanning Tree Domains, both
>> comprising the same group of physical ports ?
> They aren't called spanning tree domains anymore, they're called
> "logical bridge domains". Not knowing very much about the application,
> it looks like a logical choice to me.
> When you say "the same group of physical ports", I assume you mean
> the same group of systems with different physical ports.
I mean the GIGAswitch ports. E.g. Logical Bridge Domain 1 and Learning
Domain 1 should consist of ports 2.1 - 5.4, and Logical Bridge Domain 2
and Learning Domain 2 of ports 8.1 - 13.4.
All VMS cluster nodes will be SAS-connected with FDDI-0 to one group
and with FDDI-1 to the other group. Failover is via SCA failover in
VMS (which works very well), but we also have the option of dual
homing with DAS stations in the future (if the customer is willing to
pay the extra money).
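To illustrate the intent (the domain names and slot/port numbering below are assumptions taken from the layout described above, not actual GIGAswitch configuration syntax), here is a small sketch of the two disjoint port groups and a forwarding check that never crosses domains:

```python
# Hypothetical model of the port grouping described above: two logical
# bridge domains on disjoint GIGAswitch port groups ("slot.port" names).

def ports(first_slot, last_slot, ports_per_slot=4):
    """Enumerate 'slot.port' names for a range of slots."""
    return {f"{s}.{p}" for s in range(first_slot, last_slot + 1)
            for p in range(1, ports_per_slot + 1)}

DOMAIN_1 = ports(2, 5)     # ports 2.1 - 5.4
DOMAIN_2 = ports(8, 13)    # ports 8.1 - 13.4

# The groups are disjoint: no port belongs to both domains.
assert DOMAIN_1.isdisjoint(DOMAIN_2)

def may_forward(in_port, out_port):
    """Bridging is only possible within one logical bridge domain."""
    return ({in_port, out_port} <= DOMAIN_1) or \
           ({in_port, out_port} <= DOMAIN_2)

print(may_forward("2.1", "5.4"))   # True: same domain
print(may_forward("2.1", "8.1"))   # False: would cross domains
```

The point of the sketch is simply that with disjoint port groups there is no port pair spanning both domains, so (as the firmware behaviour described above guarantees) no frame can ever be bridged between them.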
>> 2.) With this configuration, will it never be possible to
>> bridge between these domains (as desired), provided that the
>> config is o.k. and no bridging is enabled on the DECnet
>> router ?
> Never is a long time. The current firmware does not allow any
> communication between logical bridge domains. There are no plans to
> change this, but it could probably be done if there was a business
> justification.
Probably a misunderstanding. That's exactly what we want: never,
never, ever, ever to bridge between these groups!
Thanks,
Ralph
|