T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
2411.1 | An educated (?) guess | NANOVX::ROBERTS | Keith Roberts - DECmcc Toolkit Team | Tue Feb 25 1992 07:23 | 14 |
| RE: .0
> Would it be possible to have an authoritative figure on how much
> memory and CPU is used for each alarm rule?
I haven't done extensive research, but an Alarms Rule consumes about
20k bytes of memory for its internal data structures.
Then there is stack space for the thread to use: about another 20k bytes.
Then there is thread overhead; memory consumed by the threads package
to maintain the thread: another guess of about 20k bytes.
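If those guesses are in the right ballpark, a rough back-of-the-envelope
total per rule (my arithmetic, not a measurement) would be:

    internal data structures    ~20k bytes
    thread stack                ~20k bytes
    threads-package overhead    ~20k bytes
                                ----------
    per enabled rule            ~60k bytes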
/keith
|
2411.2 | 120-200 pages/rule is a good guess | HAZARD::BAKER | Paul Baker, UK Product and Technology Group - 844 3311 | Tue Feb 25 1992 16:41 | 17 |
|
I would agree with Keith's estimate. I have seen guesstimates of
between 120 and 200 pages of memory per rule, depending on whether it's
sitting there waiting for an event or actively going out and polling.
Also, I have seen my 16Mb V5.4-3 system die with up to 800 page
faults/sec running DECmcc. The "SPD" says 24Mb as a minimum - I think
it's correct. If you want decent performance, go for at least 32Mb.
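For reference, assuming the standard 512-byte VMS page, that range works
out to roughly:

    120 pages * 512 bytes = ~60k bytes per rule
    200 pages * 512 bytes = ~100k bytes per rule

which is in the same ballpark as the ~60k guess in .1.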
Now - what we really need before V1.2 ships are the real numbers that
we can quote to a customer when sizing systems. All this guessing is
not going to win us any friends in the customer community if we get it
wrong.
Paul.
|
2411.3 | Some performance enhancements for Alarms will be in v1.2 | NANOVX::ROBERTS | Keith Roberts - DECmcc Toolkit Team | Tue Feb 25 1992 16:59 | 7 |
|
V1.2 of Alarms will have some memory consumption fixes. Memory which is
needed during evaluation, but not during hibernation, was not being freed.
This will be fixed and should save anywhere from 1k to 4k per rule. Every
little bit helps.
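The shape of the fix is the usual one: release scratch memory that is only
needed while a rule is being evaluated, before the rule goes back to sleep.
A minimal illustrative sketch in C (hypothetical names, not the actual
Alarms code):

    #include <stdlib.h>

    /* Hypothetical per-rule context. The scratch buffer is needed   */
    /* while the rule is being evaluated, but not while the rule     */
    /* hibernates waiting for its next polling interval.             */
    struct rule_ctx {
        void  *scratch;        /* evaluation-time working storage */
        size_t scratch_size;
    };

    static void evaluate_rule(struct rule_ctx *rule)
    {
        rule->scratch = malloc(rule->scratch_size);
        if (rule->scratch == NULL)
            return;                 /* out of memory - skip this pass */

        /* ... evaluate the rule expression using the scratch area ... */

        /* The v1.2 change in spirit: give the memory back before      */
        /* hibernating instead of holding it for the life of the rule. */
        free(rule->scratch);
        rule->scratch = NULL;
    }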
/keith
|
2411.4 | and cpu usage? | CLARID::HODSMAN | The Giraffe | Wed Feb 26 1992 02:33 | 9 |
| Thanks for the news about memory - any news about CPU usage,
which is my real worry?
15% of a 4.2 VUP machine to keep 40 alarm rules going
suggests about 15,000 instructions a second just to keep an alarm rule
enabled. Is this normal - do I need Alpha already?
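For reference, my arithmetic (taking 1 VUP as roughly 1 MIPS, which is a
simplification):

    0.15 * 4.2 million instr/sec = ~630,000 instr/sec for all Alarms
    630,000 / 40 rules           = ~15,750 instr/sec per enabled rule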
regards,
jeremy
|
2411.5 | There's a lot of stuff going on inside the box | NANOVX::ROBERTS | Keith Roberts - DECmcc Toolkit Team | Wed Feb 26 1992 08:48 | 24 |
| RE: .4
>> ... 15% of a 4.2 vup machine to keep 40 alarm rules going suggests
>> 15000 instructions a second ...
There are a lot of things going on inside. The Alarms FM calling
the Access Module (DECnet, Bridge, SNMP, ...) must go through the
Information Manager; scheduling algorithms are performed and dispatching
takes place. Thread context switching and overhead eat up some CPU.
If Notification and the Iconic Map are enabled then more threads
are executing -- and events produced by Alarms are processed through
the Event Manager, which needs some CPU too.
The DECthreads (CMA) package we are using for v1.2 seems to consume
a bit more CPU and memory too (more than the MCC threads package used
for v1.1).
>> do I need alpha already ?
In many respects the Alpha chip would fix all of our problems. But
certain DECthreads internal implementations just consume CPU (semaphores,
I believe, poll for events), so a faster machine may not be the only answer.
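To illustrate the point (generic POSIX-style threads, not the actual
DECthreads internals): a wait that polls keeps burning instructions no
matter how fast the machine is, while a true blocking wait costs almost
nothing between events.

    #include <pthread.h>
    #include <unistd.h>

    /* Polling wait: wake up, test the flag, sleep, repeat.            */
    /* The per-wakeup cost is paid forever, even when nothing happens. */
    static void wait_by_polling(volatile int *event_flag)
    {
        while (!*event_flag)
            usleep(10000);          /* ~100 wakeups/sec, per waiter */
    }

    /* Blocking wait: the thread is descheduled until it is signalled, */
    /* so an idle rule consumes essentially no CPU between events.     */
    static void wait_by_blocking(int *event_flag,
                                 pthread_mutex_t *lock,
                                 pthread_cond_t *signalled)
    {
        pthread_mutex_lock(lock);
        while (!*event_flag)
            pthread_cond_wait(signalled, lock);
        pthread_mutex_unlock(lock);
    }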
/keith
|
2411.6 | HW >>> | MAYDAY::ANDRADE | The sentinel (.)(.) | Thu Apr 09 1992 09:45 | 14 |
| I understand how a product can get a bit out of hand when it's first
developed, and I was given to understand that things would get better
when version 1.2 came out: "The performance would noticeably improve."
But what do I hear now - v1.2 actually consumes more memory and CPU
than v1.1 did? How come, did the added functionality eat
away all the performance gains?
Are there plans to improve DECmcc performance for v2 ? (for real)
*** DECmcc *** the only product to regularly quit on me due to lack of
resources....
Gil
|
2411.7 | catch 22, but we are working on it. | TOOK::CALLANDER | MCC = My Constant Companion | Mon May 11 1992 17:24 | 12 |
|
don't give up yet. We are having some problems with some of the
underlying tools we are dependent upon. But they are being VERY
responsive with fixes (memory and functionality). We are also working
full time on fixing performance and memory problems as we speak. If you
are interested in more details on this please call me and I can discuss
the changes that we are making. But for now, suffice it to say that we are
well aware of the problems. The biggest thing for us was that we did
improve memory usage and performance, but then went and added in a
ton more functionality which just went right back and consumed it all
back up again....
|