
Conference azur::mcc

Title:DECmcc user notes file. Does not replace IPMT.
Notice:Use IPMT for problems. Newsletter location in note 6187
Moderator:TAEC::BEROUD
Created:Mon Aug 21 1989
Last Modified:Wed Jun 04 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:6497
Total number of notes:27359

190.0. "mcc_MODULENAME_cleanup..." by SMAUG::BELANGER (Quality is not a mistake!) Wed Jul 18 1990 13:18

    
    	When an MM's shareable image is mapped into the MCC address space,
    the routine mcc_MODULENAME_init is called.  Is there a mechanism by
    which an MM can define a routine to be called to clean up resources
    that were allocated while the MM was in use?  My problem is that the
    MM I have to write will be allocating resources in an address space
    not within MCC, and image exit will not recover these resources.
    
    Jon. Belanger
190.1. "don't think so" by MKNME::DANIELE, Wed Jul 18 1990 15:50
	The idea has been that each MM frees any allocated resources when
	its processing of the thread is complete, before returning from the
	MCC_CALL_xxx().

	This is frequently cumbersome.  For instance, an AM might wish to
	assign a device once at INIT time, use it wisely (across threads), 
	and deallocate at TERMINATE time.  There isn't any TERMINATE time.
	Each thread that calls you might be your last.

	Cheers,
	Mike
190.2. "You can use an exit handler" by TENERE::DUNON (Paul Dunon - Telecom Engineering - VBO), Thu Jul 19 1990 05:06
I ran into this problem myself some time ago.

re .1
>	The idea has been that each MM frees any allocated resources when
>	it's processing of the thread is complete, before returning from the
>	MCC_CALL_xxx().

Yes, because the AM never knows whether its current execution is the last one or
just one of a series of MCC_CALLs.  But this results in some overhead when many
consecutive calls are made by a client MM.
Just try running a DECmcc command procedure file containing node4 commands and
see how long it takes.  Are we sure this is acceptable for customers who are
used to NCP command procedures?

I've solved this problem using a separate thread that closes the connection
between the AM and the agent after a timeout period.  If my AM is called again
before the timer expires, the open connection is reused.
Additionally, an exit handler is declared in the PROBE routine.  It alerts the
disconnection thread when the image exits.

I know that exit handlers should not be used by MM developers, but I don't
see any reason for that other than portability, and it works very well!


			-- Paul
190.3. "!good" by MKNME::DANIELE, Thu Jul 19 1990 12:08
> But this results in some overhead when many consecutive calls are made by a
> client MM.

	Absolutely.  I didn't say I liked this mechanism, I write AMs too!

	There should be an architected way for this to work.  Why not have
	the kernel call each currently enrolled MM at a _terminate() routine
	at image rundown?
190.4. "my favorite" by GOSTE::CALLANDER, Thu Jul 19 1990 16:04
    
    If we are shooting out ideas, my personal favorite is that the kernel
    check for outstanding threads.  If there are still outstanding threads
    when an exit request is detected, then my feeling is that the kernel
    should have a mechanism either to signal the thread for termination
    or, more along your lines, to call a terminate entry point in the xM
    so that clean-up can ensue.  This would allow a module to start a
    thread to go off and do some of the external (external to MCC)
    operations that this note has been discussing without worrying about
    how it will shut down, because it would be called back.
    
    Again, this is just a personal favorite; I am sure there are a whole
    lot more floating around.  To extend Mike's idea, though, the only
    change I would make is that the kernel should call only the xMs that
    have been image-merged; if you call all the modules that are enrolled,
    it could be time-consuming to image-merge each one just to call the
    terminate entry point.
        
190.5. "Protocols over communication threads must be closed" by SMAUG::BELANGER (Quality is not a mistake!), Mon Jul 23 1990 12:22
    
    	The only problem with having an outstanding thread and using the
    termination of that thread as an indication that resources should be
    cleaned up is that I have another protocol (DECnet) running over this
    thread.  When whatever I'm connected to sees that the communication
    link went down without the protocol over that link having been
    closed, a failure log gets generated.  This failure log should only
    be written when a "real" problem occurs, not as part of normal
    operations.  It is tough having to be a good neighbor in two or more
    disparate environments.
    
    ~Jon.
190.6. "(cont)" by SMAUG::BELANGER (Quality is not a mistake!), Mon Jul 23 1990 12:26
    
    	Also, why can't mcc_MODULENAME_init or _probe make a call into
    the mcc_kernel to "register" a clean-up routine?  Then those MMs
    which don't care to use this clean-up feature simply don't register
    one.
    
    ~Jon.
190.7. "Thread context services" by PETE::BURGESS, Tue Jul 31 1990 11:37
	About a year ago I implemented a service called
	'thread context' which provides a private context address space
	for each facility on a per-thread basis.  The implementation is
	based on the context service in the cma architecture/functional spec.

	The service allows clients to create process wide
	context keys and associate a destructor routine
	which is invoked if the key is ever deleted or
	(more likely) when the thread is terminated.  The
	destructor routine may perform the necessary operations
	to release resources, etc.  

	For each thread, an AM may associate a private context value (32 bits)
	with the context key.  Usually the context value is the address
	of a dynamic memory block containing items that the AM
	wishes to access for the duration of the thread.

	The key value, which is process wide, may be kept in static non-global
	virtual memory (access characteristics: write once, read many).

	Our experience is that these services are extremely powerful and flexible.
	Most of the kernel components use these services.	

	So where is the documentation in the SRM?
		Hopefully, in the next release.


	Pete Burgess