
Conference azur::mcc

Title:DECmcc user notes file. Does not replace IPMT.
Notice:Use IPMT for problems. Newsletter location in note 6187
Moderator:TAEC::BEROUD
Created:Mon Aug 21 1989
Last Modified:Wed Jun 04 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:6497
Total number of notes:27359

4936.0. "Failure in kernel event receipt. Thread terminating" by LICAUS::LICAUSE (Al Licause (264-4780)) Thu Apr 22 1993 10:23

DECstation 5000/240 w/128MB memory:

DECmcc SSB V1.3, ULTRIX V4.3, and DECnet/OSI V5.1.

I had an MCC session going on the workstation.  I then logged in under a
non-privileged user account and displayed an MCC windows session on another
workstation without problems.  I stopped MCC and logged out, but did not kill
the MCC processes left on the system for that non-privileged user.

I then logged into the same workstation remotely under a different user id
and did the following:

setenv DISPLAY licaus:0.0
onmccu.demo.dec.com> mcc_iconic_map

The following errors occurred:

Failure in kernel event receipt. Thread terminating
 %MCC-E-EVT_POOL_SIZESKEW,  requested event pool size incompatible with existing
 pool
Failure in kernel event receipt. Thread terminating
 %MCC-E-EVT_POOL_SIZESKEW,  requested event pool size incompatible with existing
 pool
Failure in kernel event receipt. Thread terminating
 %MCC-E-EVT_POOL_SIZESKEW,  requested event pool size incompatible with existing
 pool
Failure in kernel event receipt. Thread terminating
 %MCC-E-EVT_POOL_SIZESKEW,  requested event pool size incompatible with existing
 pool
Failure in kernel event receipt. Thread terminating
 %MCC-E-EVT_POOL_SIZESKEW,  requested event pool size incompatible with existing
 pool

Failure in kernel event receipt. Thread terminating
Exception: Invalid memory address (dce / thd)
IOT trap (core dumped)


When I looked at the ULTRIX system again, the MCC processes for root were still
there and continued to run without problems, as were the processes for both
non-privileged user ids.

If I then kill all of the user MCC processes and attempt to start MCC again
using the iconic map from one of the non-privileged users, the same "Thread
terminating" message is displayed, but this time the maps start up and several
windows open showing the previous error messages.  The maps do not display
completely, however (i.e. the top-level domain is opened, but the background
does not appear and the icons all fall back to autoplacement and default icons).
Very ugly, to say the least.

The more important question is what needs to be done to the system, or more
specifically, which system parameters need to be increased, to alleviate this
problem?

I did see the warning about needing three times the physical memory size for
swap space, per user.  Is this what I've run up against, or might it possibly
be something else?

Any help appreciated.

Al

4936.1. "Check the MCC_EVENT_POOL_SIZE env. value" by TAEC::LAVILLAT Thu Apr 22 1993 11:09
Re .0:

 Al,

>
>Failure in kernel event receipt. Thread terminating
> %MCC-E-EVT_POOL_SIZESKEW,  requested event pool size incompatible with existing
> pool

	This message means that the shared memory segment for the MCC event
	manager was created with a size different from the one specified
	for your current user: either your user or the user who first
	created the MCC event manager segment had the MCC_EVENT_POOL_SIZE
	env. variable set to a non-default value.

	You can check the size of the event manager shared memory segment
	with the ipcs -mb command.  The default size is 528332.

	So you may have:

saipas> printenv | grep EVENT
### no output, meaning MCC_EVENT_POOL_SIZE is unset (default value)
saipas> ipcs -omb

IPC status from /dev/kmem as of Thu Apr 22 16:08:05 1993
Shared Memory
T     ID     KEY        MODE       OWNER    GROUP NATTCH  SEGSZ
m      0          0 D-rw-------     root   system      1 4202496
m      1          0 D-rw-------     root   system      1  69632
m      2          0 D-rw-------     root   system      1   4096
m      3 1224841216 --rw-------   ingres   ingres      5   8192
m      4 1224841218 --rw-------   ingres   ingres      3 237568
m      6  805548194 --rw-rw----    temip    users      0     48
m      7 1224841219 --rw-------   ingres   ingres      3 114688
m   2709  805548239 --rw-rw----    temip    users      1     48
m    510  805548288 --rw-rw----    temip    users      1     48
m     11  805548291 --rw-rw----    temip    users      1     48
m    212 1292087422 --rw-rw-rw-    temip    users     13 528332
###--------------------------------------------------------^
### this is the default (you can recognize the MCC event manager shm
### by its rw-rw-rw- protection)

Or :

printenv | grep EVENT_POOL
MCC_EVENT_POOL_SIZE=2097152
bl3 > ipcs -omb

IPC status from /dev/kmem as of Thu Apr 22 16:08:00 1993
Shared Memory
T     ID     KEY        MODE       OWNER    GROUP NATTCH  SEGSZ
m      0          0 D-rw-------     root   system      1 888832
m      1          0 D-rw-------     root   system      1   4096
m    102 1225502785 --rw-------   ingres   ingres      3 155648
m      3 1225502725 --rw-------   ingres   ingres      5   8192
m      4 1225502726 --rw-------   ingres   ingres      3 311296
m    605 1292521505 --rw-rw-rw-    temip    users     20 2101196
###--------------------------------------------------------^
### this is what you get when you specify MCC_EVENT_POOL_SIZE=2097152
m  24106  806019092 --rw-rw----    temip    users      1     16
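
	If the two sizes disagree, one possible way out (a sketch only, assuming
	a csh-style shell, and that no other MCC or TeMIP process still needs the
	old segment; the id 605 is just the one from the example above) is to
	stop your MCC processes, remove the stale event manager segment, and make
	sure every user starts MCC with the same value:

unsetenv MCC_EVENT_POOL_SIZE
### or setenv it to the same non-default value for *every* MCC user
ipcrm -m 605
### removes the old event manager segment (use the ID reported by ipcs -mb)
mcc_iconic_map
### the next MCC start recreates the segment at the now-consistent size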


	Hope this helps.

	Pierre.


4936.2. "That's it... but" by LICAUS::LICAUSE (Al Licause (264-4780)) Thu Apr 22 1993 16:19
    Pierre,
    
    Thanks very much....I did set the MCC_EVENT_POOL_SIZE=2097152 in 
    anticipation of installing TeMIP.  Since I haven't yet installed
    TeMIP, I'll set this parameter back to the default value.
    
    The numbers you've displayed here are pretty much foreign to me
    since I'm still an ULTRIX novice.
    
    Question.  If and when I do install TeMIP and do have this value
    set high, then won't I run into the same situation when I try to
    display an MCC or TeMIP window to a remote system?
    
    Al
4936.3. "Should not be a problem" by TAEC::LAVILLAT Fri Apr 23 1993 04:20
>    
>    Thanks very much....I did set the MCC_EVENT_POOL_SIZE=2097152 in 
>    anticipation of installing TeMIP.  Since I haven't yet installed
>    TeMIP, I'll set this parameter back to the default value.
>    
	That is what I suspected!

>    The numbers you've displayed here are pretty much foreign to me
>    since I'm still an ULTRIX novice.

	You will learn fast!

>    
>    Question.  If and when I do install TeMIP and do have this value
>    set high, then won't I run into the same situation when I try to
>    display an MCC or TeMIP window to a remote system?
>    

	No, this value only has meaning on the local system.
	You should not have any problem as long as *all* your MCC processes
	run with the same MCC_EVENT_POOL_SIZE value (that is what we
	normally do, with no problems).
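
	A minimal way to keep that consistent (a sketch only, assuming csh login
	shells; 2097152 is just the value mentioned in .2) is to set it in a
	startup file that every MCC/TeMIP user on the system sources:

setenv MCC_EVENT_POOL_SIZE 2097152
### same value in every MCC user's ~/.cshrc (or in a site file they all source)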

	Regards.

        Pierre.