
Conference azur::mcc

Title:DECmcc user notes file. Does not replace IPMT.
Notice:Use IPMT for problems. Newsletter location in note 6187
Moderator:TAEC::BEROUD
Created:Mon Aug 21 1989
Last Modified:Wed Jun 04 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:6497
Total number of notes:27359

5309.0. "Sizing of DECmcc?" by WARABI::CHIUANDREW () Wed Jul 07 1993 03:38

    Could someone provide me with sizing guidelines for the following case:
    
    Customer would like to use DECmcc to monitor their network. There will
    be two ACTIVE mcc viewers, each of them managing its 'own' domain
    (because they belong to different functional departments). In each
    domain, there will be 100 nodes (IV, IP, terminal servers, LANbridges)
    and each node will have 2-3 alarm rules/notifications. Recording will
    be used for keeping statistics data on IV/IP nodes (about 8-10 batch
    jobs of HISTORIANs will be running). Each MCC viewer will have 4-5
    windows open to perform day-to-day network management operations.
    
    
    Given the above scenario, will a VAXstation 4000-60 with 48MB and two
    hard disks (426MB and 1.2GB) give good performance (e.g. 2-3 second
    window pop-up)?
    
    
    PS: They will run VMS5.5 with DECmcc BMS V1.3, TSAM V1.03 and ELM
    AM/FM.
    
    Thanks in advance for help!
    Andrew Chiu - NIS Sydney
5309.1. "see MCC-TOOLS, note 21" by MCDOUG::doug (Dead or Canadian?) Wed Jul 07 1993 09:39 (9 lines)
At first glance, 48 Mb sounds like it might be a little on the skimpy
side...

See note 21.* in NOTED::MCC-TOOLS for a system planner tool.

Then you can plug in your own numbers and draw your own conclusions.

/doug

5309.2. "4000/60 48mb is too small" by TOOK::R_SPENCE (Nets don't fail me now...) Wed Jul 07 1993 11:20 (22 lines)
    As you have specified it, the 4000/60 is going to be much too small.
    
    First, memory is way too little. 48mb is the minimum permitted by the
    SPD, and you are doing lots more than the minimum.
    
    Second, I would recommend that you craft the domains and alarm rules
    so that you can have some wildcard rules if possible. What I read was
    that you wanted to have 400-600 rules running. I would work to get
    the number down below 100.
    
    Third, 8-10 batch jobs seems excessive. Each will have an instance of
    DECmcc running (more memory - 16mb per instance isn't unreasonable).
    Use domains and the fact that an entity can be in more than one domain
    to help you here. For each of the users, create a
    "historical-recording" domain (will need unique names of course) and
    start ONE background batch job for each. Then, include in these domains
    all the entities that you need to do historical recording on.
    
    Given the load you plan I would provide lots of memory and if possible,
    I would use a 4000/90 instead.
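    For back-of-the-envelope purposes, the batch-job part of this advice
    works out as follows (a rough sketch only, using the 16mb-per-instance
    figure above; the function name is purely illustrative):

```python
# Rough memory estimate for background HISTORIAN batch jobs.
# Each batch job runs its own DECmcc instance; ~16 MB per instance
# is the working figure quoted in this reply.

def batch_job_memory_mb(n_jobs, mb_per_instance=16):
    """Total memory consumed by n_jobs concurrent DECmcc batch instances."""
    return n_jobs * mb_per_instance

# 8-10 jobs as planned in the base note, vs. one consolidated
# "historical-recording" batch job per department:
print(batch_job_memory_mb(10))  # 160 MB -- far beyond the 48 MB fitted
print(batch_job_memory_mb(2))   # 32 MB -- one job per user/domain
```

    Consolidating the recording into one domain (and one job) per user is
    what brings the instance count, and hence the memory bill, down.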
    
    s/rob
5309.3. "more info please" by TRKWSH::COMFORT (Here beside the rising tide) Thu Jul 08 1993 16:48 (17 lines)
    
    re .-1
    
    In what ways will using wildcard alarms (I presume polling-type rules)
    reduce processing overhead?  In the case of DECnet Phase IV, wouldn't
    the number of NML connections be the same?
    
    We have demonstrated that a 4060 with 400 alarm rules, displaying 3-4
    iconic maps, using A1mail and Message Router in 104 megs of memory is
    an unusable system.  After reducing the number of rules to 300, the
    system becomes borderline usable.  The customer is looking at 4090's
    as a solution.
    
    Additional input is greatly appreciated.
    
    Dave
    
5309.4. "One thread per rule" by TOOK::R_SPENCE (Nets don't fail me now...) Thu Jul 08 1993 17:41 (13 lines)
    Each "rule" requires its own thread to execute within. So, one
    wildcard rule for 100 entities is one thread, with the memory
    requirements of one thread. 100 rules is 100 threads...
    
    I would certainly agree with you that the VS4000/60 is not sufficient
    for the described load.
    
    Did you do any performance measurements on the system? What did you
    run out of? Memory, CPU, I/O? Depending on the rate at which the rules
    execute you could have run out of CPU, but I would expect memory to
    have been close to used up as well.
    
    s/rob
5309.5. by TOOK::MCPHERSON (Dead or Canadian?) Thu Jul 08 1993 18:10 (23 lines)
If you don't use wildcarded alarm rules, each rule creates its own set of
threads and resources.  If you can use a wildcarded alarm rule, the Alarms FM
*serializes* the requests, so only ONE set of threads has to be created.

If you *don't* use wildcarded alarm rules wherever you can, then you're burning
up resources.  Figure *nominally* 64kbytes per alarm rule for CMA thread
overhead:

	64k x 300 alarm rules comes to about 19 Mbytes of memory.

Absolute *best* case would be if you could replace 100% of the 300 alarms with
a single wildcarded rule.  You'd essentially buy back 19 Mbytes of system
memory.
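The arithmetic above can be sketched like so (using the nominal 64 KB
per-rule figure quoted here; the function name is illustrative only):

```python
# Nominal CMA thread/stack overhead for alarm rules: each individual
# rule carries ~64 KB, while a wildcarded rule is serialized by the
# Alarms FM so one set of threads covers all its targets.

def rule_overhead_mb(n_rules, kb_per_rule=64):
    """Approximate thread overhead in MB for n individual alarm rules."""
    return n_rules * kb_per_rule / 1024.0

individual = rule_overhead_mb(300)  # 18.75 MB -- "about 19 Mbytes"
wildcard   = rule_overhead_mb(1)    # ~0.06 MB for one wildcarded rule
print(individual - wildcard)        # memory bought back in the best case
```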

Of course, if the alarm FM has to call other FMs/AMs to do the work, these MMs
in turn create threads and consume additional resources.

With the little information that I have to work on, I *personally* would expect
that you'd get *memory* bound before CPU bound, if you were able to use alarm
rule wildcarding.

/doug
5309.6. "Stack space for the AM does not count (really)" by TOOK::LYONS (Ahh, but fortunately, I have the key to escape reality.) Thu Jul 08 1993 20:17 (18 lines)
    re: .-1
    
    Doug, remember that the memory used for stack and CMA thread management
    in the AM's and FM's during rule evaluation is allocated and released
    on the fly, so only those rules that were currently evaluating would
    allocate FM/AM stack space.  It is still true that EACH rule will use
    the 64KB of memory, so your numbers on 19Meg of memory are right on.
    
    Using a thread to store the context of a rule is very expensive, but
    that design was done back when threads were much, much cheaper.  Moving
    to CMA cost MCC quite a bit in terms of performance.  Wildcard rules
    will save much (but not all) of that memory.  The reason it won't save
    all of it is that the alarms package will still need to keep some state
    information about each entity it is looking at, but that memory amount
    is small compared to the 64K per stack/context.
    
    drl
    
5309.7. "I stand corrected." by TOOK::MCPHERSON (Dead or Canadian?) Fri Jul 09 1993 09:33 (3 lines)
OK. 
So best case, he'd only buy back *18* Mb of memory.  
/doug
5309.8. "re .-last few" by TRKWSH::COMFORT (Here beside the rising tide) Fri Jul 09 1993 10:51 (22 lines)
    
    In reference to the resource limits/consumption question in .4, both
    memory and cpu are at a premium.  When the 350 to 400 rules were
    running, the cpu was the problem, running at ~ 90-95% as seen by VMS
    monitor with DETMCC running continually over 80%.  The memory
    consumption, though a factor, has remained pretty much the same.  Just
    for information's sake, the "background" task running the alarms
    (DETMCC) is running about 53000 pages and each map runs approx 20000
    pages with the current system setup.  Out of 212922 total pages, there
    are 8000 - 9000 free.
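    For readers converting those figures: VMS memory pages are 512 bytes,
    so the page counts above translate roughly as follows (a sketch; 8500
    is just the midpoint of the quoted 8000-9000 free pages):

```python
# Convert VMS page counts (512 bytes/page) to megabytes to see how
# the figures in this reply add up against the 104 megs quoted in .3.

PAGE_BYTES = 512

def pages_to_mb(pages):
    """Express a count of VMS 512-byte pages in megabytes."""
    return pages * PAGE_BYTES / (1024 * 1024)

print(round(pages_to_mb(53000), 1))   # DETMCC alarm task: 25.9 MB
print(round(pages_to_mb(20000), 1))   # each iconic map: 9.8 MB
print(round(pages_to_mb(212922), 1))  # total physical memory: 104.0 MB
print(round(pages_to_mb(8500), 1))    # free pages: ~4.2 MB headroom
```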
    
    After offloading 100 or so alarms and exporting to another system
    (about 300 alarms active now), cpu util doesn't get much above ~75% with
    DETMCC taking up to ~60%.
    
    Sounds like it is definitely worth seeing if some of these rules can be
    restructured.
    
    Thanks,
    
    Dave