
Conference vaxaxp::vmsnotes

Title:VAX and Alpha VMS
Notice:This is a new VMSnotes, please read note 2.1
Moderator:VAXAXP::BERNARDO
Created:Wed Jan 22 1997
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:703
Total number of notes:3722

212.0. "Memory Utilization 101" by SWSEIV::CONNER () Tue Feb 18 1997 18:06

    Hi,
    
    What are some things to look for to be sure that a large image is not
    consuming memory unnecessarily?
    
    Ported legacy code from VAX to Alpha.  Memory consumption is up 2-3 times
    from that on VAX.  We explained to customer why Alpha architecture
    consumes more memory than VAX (bigger page sizes, code alignment,
    reduced instruction set, etc.) They accept this for the most part but
    are requesting a code review to make sure that they are not doing
    something stupid in the code, compiler qualifiers, linker qualifiers,
    use of shareable images, installing images, etc.  (Basically they did as
    little as they could to port the code from VAX to Alpha.)
    
    We have isolated what image is the biggest memory consumer using
    DECwhatever-it-is-called-now (data collector/performance monitor).
    It turns out that this image is run by the most users also - it's a
    FMS menu client that maps to a global section (Oracle data cache),
    reads from and writes to Oracle, and performs inter-process communication
    using a DECmessageQ-like API.
    
    Some particulars about the image:
    
    DEC C (ported from VAX C)
    PLI
    FMS
    interfaces to Oracle DB
    
    + some of the code was built with /DEBUG/NOOPT - we are asking for a
    /DEBUG free compile and link
    
    + maps to several disk-file-based global sections - they were able to
    reduce the size of some of the static data sections, basically just
    trimming empty space; the sections are created using $CRMPSC, not
    specifying the WRITE flag unless they need to write...
     
    + uses the FMS feature to link against forms as opposed to opening them
    at run time - does anyone remember if there is a memory penalty for
    keeping the form library open after loading the forms?
    
    + linked with many separate object libraries (they are not creating
    shareable images) but the entire image is being installed
    
    Would it be worth organizing their code into several shareable images
    rather than installing the entire image? Or doing both?
    
    + Oracle 7 issues?  Is there something like Rdb global buffers that
    might be coming into play?
    
    Current environment is Alpha OpenVMS V6.2
    
    Any advice would be appreciated...Thanks.
    
    Paul Conner
    [email protected]
    
212.1. by AUSS::GARSON (DECcharity Program Office) Tue Feb 18 1997 20:48
re .0
        
>    Ported legacy code from VAX to Alpha.  Memory consumption is up 2-3 times
>    from that on VAX.
    
    What kind of memory - virtual or physical? What exactly is being
    measured?
    
    What numbers are we talking about? i.e. what was the consumption on
    VAX? what is it now on Alpha?
    
    Is memory the most important thing for them? i.e. would they be
    prepared to lose speed in order to reduce memory consumption?
    
>    What are some things to look for to be sure that a large image is not
>    consuming memory unnecessarily?
    
    Find out what within the image is using memory e.g. stack, heap, code,
    static data.
    
    How large is a large image? Is the image large because it contains lots
    of "empty" pages?
    
>    + some of the code was built with /DEBUG/NOOPT - we are asking for a
>    /DEBUG free compile and link
    
    Note that some optimisations will increase the amount of memory used.
    However in general for this kind of measurement you want /NODEBUG/OPT.
212.2. "Buy More Memory..." by XDELTA::HOFFMAN (Steve, OpenVMS Engineering) Wed Feb 19 1997 09:28
   This looks to be the classic `memory-requirements' vs `run-time
   performance' vs `engineering costs' tradeoff.

   How much physical (or virtual) memory is in use, and what is the
   site's tradeoff in the investigatory cost vs the cost of adding
   more physical memory?  (Given the pricing of memory lately, the
   investigation itself would have to be pretty brief, unless there's
   no memory-expansion headroom left on the target system, or unless
   there are a number of systems involved...)

   Physical memory consumption increases of 2-3x are not out of line with
   previous Alpha porting experience -- VAX code is pretty dense.

   The primary tools for determining memory usage are the static tools,
   tools such as the LINKER map, and the run-time tools, such as PCA and
   similar.  Short of recursion or similar techniques or of unnecessarily
   large allocations for the data used (check any big arrays, consider
   dynamic allocation), there's probably not a substantial savings for
   the run-time storage.  As for static storage, look for unnecessarily
   separate psects or similar.

   Concentrate on the LINK map, and on the largest consumers of memory
   first -- particularly large sections or large arrays.  If any...
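   (The map Steve mentions can be produced along these lines -- DCL from
   memory, filenames are examples only:)

```
$ CC /NODEBUG /OPTIMIZE MENU.C
$ LINK /MAP=MENU.MAP /FULL MENU.OBJ
$ SEARCH MENU.MAP "PSECT"
```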

   Be aware that applications performing I/O -- databases, the OpenVMS
   caches -- tend to consume large amounts of any memory left available.
   This is deliberate, as it improves overall performance.

   Shareable images tend to help -- over the installation of the main
   executable image -- only if the shareable(s) are used across more
   than one main executable image.

   DEC C on OpenVMS Alpha pads data structures for performance, as well.