
Conference vaxaxp::vmsnotes

Title:VAX and Alpha VMS
Notice:This is a new VMSnotes, please read note 2.1
Moderator:VAXAXP::BERNARDO
Created:Wed Jan 22 1997
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:703
Total number of notes:3722

582.0. "SMARTDRV for VMS Alpha ?" by ODIXIE::RREEVES () Mon May 12 1997 18:40

    All right, it's time for a stupid question. Do we make a product
    similar to SMARTDRV for VMS? Is there a product which
    can take advantage of unused RAM as a large file cache?
    
    Regards,
    
    Ray

582.1. "stupid answer" by GIDDAY::GILLINGS (a crucible of informative mistakes) Mon May 12 1997 22:51, 19 lines
>  Do we make a product
>    something similar to SMARTDRV for VMS ?

    In true Digital form, we make such a product, but we don't sell it. We
  give it away. From V6.0 onwards, the Virtual I/O Cache (VIOC) is
  turned on by default. Use SHOW MEM/CACHE/FULL to see whether it's working:


$ show mem/cache/full
              System Memory Resources on 13-MAY-1997 11:51:32.03

Virtual I/O Cache
    Total Size (pages)           11496    Read IO Count            211586550
    Free Pages                    3199    Read Hit Count           193832183
    Pages in Use                  8297    Read Hit Rate                   91%
    Maximum Size (SPTEs)        243067    Write IO Count             7237165
    Files Retained                 100    IO Bypassing the Cache     7377130
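
The cache is governed by the SYSGEN parameters VCC_FLAGS (nonzero means the
cache is enabled) and VCC_MAXSIZE (the upper bound on its size, in blocks).
A minimal read-only check, assuming those parameter names apply on your
version:

$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW VCC_FLAGS
SYSGEN> SHOW VCC_MAXSIZE
SYSGEN> EXIT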

						John Gillings, Sydney CSC

582.2. "My hit rate is lower than your hit rate" by ODIXIE::RREEVES () Tue May 13 1997 13:02, 17 lines
    
    Mine says
    
    Virtual I/O Cache
        Total Size (Kbytes)          3200    Read IO Count            251049935
        Free Kbytes                     0    Read Hit Count              959581
        Kbytes in Use                3200    Read Hit Rate                   0%
        Write IO Bypassing Cache   318028    Write IO Count            14719410
        Files Retained                 98    Read IO Bypassing Cache    4313246
    
    Is there something I need to turn on? Or a SYSGEN parameter to
    increase the Total Size from 3200? My hit rate seems a little 
    low compared to yours.
    
    Regards,
    
    Ray

582.3. "Holy Virtual Ideas" by ODIXIE::RREEVES () Tue May 13 1997 14:58, 6 lines
    Holy Virtual Ideas, Batman! I just found VCC_MAXSIZE and set the baby 
    way up (we have almost 400 Megabytes sitting around free). Can't wait
    to talk the customer into a reboot. Besides VCC_MAXSIZE, is there 
    anything else to set? 
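
    For the record, a change like this survives future AUTOGEN runs only if
    it goes into MODPARAMS.DAT rather than being set straight in SYSGEN. A
    minimal sketch (the 409600 figure is just an example, roughly 200
    Megabytes' worth of blocks; VCC_MAXSIZE is not dynamic, so the reboot
    is still needed):

    $ OPEN/APPEND PARAMS SYS$SYSTEM:MODPARAMS.DAT
    $ WRITE PARAMS "VCC_MAXSIZE = 409600"
    $ CLOSE PARAMS
    $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK
    $ ! ...and reboot when convenient for the new size to take effect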
    
    Regards

582.4. "Reasons for a low hit rate" by MOVIES::PALMER (Ski Forever...) Tue May 13 1997 17:59, 24 lines
I don't think I've ever seen a system with such a low hit rate ! ;-)
There are an awful lot of reads and writes bypassing the cache... Several
things may be going on on this system though:

o Are the disks mounted /NOCACHE ? This disables VIOC as well as the XQP
  metadata caches (a quick way to check is sketched after this list).
o Do the I/Os being issued have 'unusual' modifiers such as IO$M_DATACHECK ?
  This could occur if SET FILE/DATACHECK was on, or the volumes were mounted
  /DATA_CHECK.
o VIOC will not cache I/Os greater than 35 blocks. There is no way to change
  this limit short of patching the system cell unfortunately.
o Lots of logical I/O on the system, not initiated by the XQP. Each logical
  I/O to a mounted disk causes the cache to be invalidated clusterwide. Some
  3rd party defraggers do lots of logical I/Os to mounted disks for example.
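
  (Regarding the first point, a quick check is SHOW DEVICE/FULL on the volume
  in question; the device name below is just an example, and the exact wording
  varies between versions, but the Volume Status portion of the output shows
  whether the volume was mounted with caching disabled.)

  $ SHOW DEVICE/FULL DKA100: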

What applications are you running on this system ? Given the ridiculously low
hit rate, I'm dubious whether increasing the cache size as described in .3 will
help you. If most of the I/Os are non-cachable due to modifiers or I/O size,
then the memory will just be wasted.

FWIW, the new Extended File Cache in a future OpenVMS Alpha release will
automatically exploit spare memory on the system.

Julian

582.5. by MOVIES::PALMER (Ski Forever...) Tue May 13 1997 18:00, 5 lines
Of course another reason for very low hit rates is a random I/O pattern. If
that is true then increasing the cache size might help somewhat. There would be
a higher chance of the data being in cache if the cache is bigger...

Julian

582.6. by EPS::VANDENHEUVEL (Hein) Tue May 13 1997 23:44, 29 lines
> I don't think I've ever seen a system with such a low hit rate ! ;-)
    
    Agreed. Surely because of VMS's exceedingly conservative approach
    to the VBN cache size. Just 3.2MB isn't going to cut it these days.
    IMHO the OOTB default should have been 10% of main memory or thereabouts.
    Compare to Digital UNIX's UBC values...
    
> There are an awful lot of reads and writes bypassing the cache... Several
    
    Disagreed. Less than 2% is not much at all. John's output shows > 3%.
    The system I happen to be logged on to now shows > 25% bypassing reads
    (surely BACKUP activity).
    
> FWIW, the new Extended File Cache in a future OpenVMS Alpha release will
> automatically exploit spare memory on the system.
    
    Just like we enjoyed for 5+ years on VAX VMS huh? :^) :^)
    
    Grins,
    	Hein.
    
              System Memory Resources on 13-MAY-1997 22:35:37.06

Virtual I/O Cache
    Total Size (pages)           50629    Read IO Count              4119210
    Free Pages                   38142    Read Hit Count             2266277
    Pages in Use                 12487    Read Hit Rate                   55%
    Maximum Size (SPTEs)        181137    Write IO Count              902474
    Files Retained                  96    IO Bypassing the Cache     1144697
    
582.7. by BSS::JILSON (WFH in the Chemung River Valley) Wed May 14 1997 10:36, 502 lines
You really have to watch out for BACKUP polluting the statistics.  Sure 
wish that VMS included a utility to zero the statistics portion of SHOW 
MEMORY/CACHE/FULL instead of relying on the CSC to do it for them :*)

[OpenVMS] How to Interpret Info From SHOW MEMORY/CACHE/FULL on Alpha

     Any party granted access to the following copyrighted information
     (protected under Federal Copyright Laws), pursuant to a duly executed
     Digital Service Agreement may, under the terms of such agreement copy
     all or selected portions of this information for internal use and
     distribution only. No other copying or distribution for any other
     purpose is authorized.
Copyright (c) Digital Equipment Corporation 1995, 1996.  All rights reserved.

PRODUCT:    OpenVMS Alpha, All Versions

COMPONENT:  Virtual I/O Cache (VIOC)

SOURCE:     Digital Equipment Corporation


OVERVIEW:

This article provides information on how to interpret the information
found in the DCL command "SHOW MEMORY/CACHE/FULL".

The following is a display from the command from an Alpha system with
a numeric value assigned to each element.  The explanation for the
element can be found in the section associated with the numeric value.

  Virtual I/O Cache
1. Total Size (Kbytes)      3200    3. Read IO Count           422489
"  Free Kbytes                 0    4. Read Hit Count          242514
"  Kbytes in Use            3200    5. Read Hit Rate              57%
7. Write IO Bypassing Cache 4455    6. Write IO Count           48359
2. Files Retained             95    8. Read IO Bypassing Cache 100523


SECTION 1:
----------
Total Size (Kbytes), Free Kbytes, and Kbytes in Use.

The "Total Size" of the VIOC is the count of the Kbytes allocated from
memory for cached files.  This value is fixed and is defined by the
SYSGEN parameter VCC_MAXSIZE.

  Note:  
    The SYSGEN parameter VCC_MAXSIZE is the count of the number
    of the blocks allocated from memory for the VIOC.  The default 
    value for the parameter is 6400.  This block value is equal to 
    the displayed value for the "Total Size (Kbytes)", 3200, based 
    on the following algorithm:

                    1 block = 512 bytes
                    1 Kbyte = 1024 bytes (2 blocks)
                6400 blocks = 3276800 bytes
              3276800 bytes = 3200 Kbytes

                blocks * bytes_per_block
                ------------------------ = Kbytes
                     bytes_per_Kbyte

    Another, much simpler, algorithm is (VCC_MAXSIZE/2).
    However, the correlation of 'blocks' to 'Kbytes' is
    unclear with the simple equation.

The number of "Free Kbytes" is the count of bytes allocated for the
VIOC that do not currently contain valid file data.  When a DIO to a
cachable file is executed, and no reference to the file data can be
located in the VIOC, a portion of these "Free Kbytes" is allocated.
Once allocated, the number of "Free Kbytes" is decremented and the
count of "Kbytes in use" is incremented.

The sum of "Free Kbytes" and "Kbytes in use" is equal to the value of
"Total size".


SECTION 2:
----------
Files Retained.

This element is the count of File Control Blocks (FCB) retained by
the file system, not files retained by the VIOC.

The value of this element is based on the value of EXE$GL_LIMBOLEN,
which is the count of FCBs retained in an FCB cache called the LIMBO
queue.  These FCBs are from files that have been deleted on the
system.  The FCBs are cached to speed the allocation of file headers
for newly created files.

The head of the LIMBO queue is pointed to by EXE$GQ_LIMBOQ and its
size is based on EXE$GL_LIMBOMAX.  When the file system attempts to
move another FCB into the queue, it checks to see if EXE$GL_LIMBOLEN
is equal to EXE$GL_LIMBOMAX.  If this is true, the LIMBO queue is
flushed down to the value of EXE$GL_LIMBOTHR in a FIFO manner, and
EXE$GL_LIMBOLEN is reset to reflect the new size of the queue.

The size of the LIMBO queue is hardcoded to 250 for EXE$GL_LIMBOMAX,
and 200 for EXE$GL_LIMBOTHR.  However, with VIOC enabled these values
are limited to 100 or the SYSGEN parameter ACP_HDRCACHE, whichever
is less.

When a file is cached by VIOC, an associated FCB is created for cache
called a CFCB (Cache File Control Block).  CFCBs are also cached in
a LIMBO queue but the association between the CFCB LIMBO queue and
the FCB LIMBO queue is not guaranteed.

This dissociation is due to the fact that a CFCB can be removed from
the CFCB LIMBO queue and associated with a completely different FCB
than the one for which it was originally created.  This re-association
is caused by the periodic inability of the VIOC to allocate more space
for expansion, i.e., the creation of a new CFCB.  If expansion is
disabled and there are CFCBs on the CFCB LIMBO queue, VIOC uses these
entries.

There may also be more CFCBs defined in nonpaged pool than FCBs.
This occurs because FCBs are deallocated back to nonpaged pool
as files are closed and the limbo queue is unable to contain them.


SECTION 3:
----------
Read IO Count.

This element is the count of all DIO reads.


SECTION 4:
----------
Read Hit Count.

This element is the count of all DIO reads satisfied with data from
the VIOC.


SECTION 5:
----------
Read Hit Rate.

This element is calculated based on the following formula:

  (<value_of_element_4>*10)/(<value_of_element_3>/10)

The break-even percentage for OpenVMS Alpha is 30-35%.

  Note:  
    Refer to the discussion below regarding "Tuning VIOC".


SECTION 6:
----------
Write IO Count.

This element is the count of all DIO writes to files that have
associated data in the VIOC.


SECTION 7:
----------
Write IO Bypassing the Cache.

This element is the sum of 2 counters:

    1.  Write I/Os Bypassing the VIOC due to function
    2.  Write I/Os Bypassing the VIOC due to size

  Note:  
    Refer to the discussion below regarding the "Analysis of IO
    Bypassing the Cache".


SECTION 8:
----------
Read IO Bypassing the Cache.

This element is the sum of 2 counters:

    1.  Read I/Os Bypassing the VIOC due to function
    2.  Read I/Os Bypassing the VIOC due to size

  Note:  
    Refer to the discussion below regarding the "Analysis of IO
    Bypassing the Cache".


TUNING VIOC
-----------
There are many things that have the potential to negatively affect
the "Hit Rate" seen in Section 5 of the SHOW MEMORY/CACHE/FULL
display.

1. The size of VIOC.

   SYMPTOM:

     If "Free Kbytes" constantly indicates 0 free, then the amount
     of non-paged pool allocated for the cache may be limiting the
     size that VIOC needs to be effective in your environment.

   SOLUTION:

     Increase the value of VCC_MAXSIZE.


2. How VIOC is accessed:

   Certain utilities and applications access data in a fashion not
   conducive to caching, or execute in a way that skews the numbers
   observed with the SHOW MEMORY/CACHE/FULL command.

   Note:                                                                      
     A program to reset the counters described in sections 3, 4,
     6, 7, and 8 is provided at the end of this article. 

   Examples:

     a. BACKUP, when run, typically causes a large increase to the
        "Read IO Count" while incurring no read hits.  This is due
        to the size and function of the IO BACKUP is executing.

        Depending on the amount of data being backed up, the
        increase to "Read IO Count" can be drastic in a short span
        of time.  Since the "Read Hit Count" remains somewhat flat
        for this same period of time, the calculated "Hit%" is
        skewed.

        The skewed values remain until the system is rebooted, or
        until enough time and IOs, with an equitable hit rate, occur.
        Typically, "Read IO Bypassing Cache" increases in proportion
        to "Read IO Count".

     b. Some network applications use special functions when
        performing IO causing "Read IO Count" to increase while
        incurring no read hits.  Typically, "Read IO Bypassing
        Cache" increases in proportion to "Read IO Count"
        and the overall size of the VIOC shows minimal growth.

        In this case, VIOC should be disabled to conserve resources.
        This problem has been experienced with certain TCP/IP
        applications like PATHWORKS.

     c. Some database applications access information in an
        inconsistent manner, i.e., doing so little consecutive access
        to the same record in a file that the size of the VIOC
        increases to or near its maximum, they incur a consistent
        "Read IO Count" with little or no "Read IO Bypassing Cache",
        and show a "Hit%" far below the break-even point.  In this
        case VIOC should be disabled to conserve resources.


Analysis of IO Bypassing the Cache
----------------------------------
Most read or write IOs Bypassing the VIOC due to "function" are due
to the IO request being made with either a function modifier or with
NOCACHE enabled.  Read or write IOs Bypassing the cache due to "size"
are due to a request size in the IO greater than 35 blocks.

If the IO Bypassing the cache is consistently 80% or more of Read IO
Count (element 3), disable VIOC by setting VCC_FLAGS to 0.
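
A sketch of the disable itself.  VCC_FLAGS is not dynamic, so the change
only takes effect after a reboot; adding the same line to MODPARAMS.DAT
keeps a later AUTOGEN run from undoing it.

    $ RUN SYS$SYSTEM:SYSGEN
    SYSGEN> USE CURRENT
    SYSGEN> SET VCC_FLAGS 0
    SYSGEN> WRITE CURRENT
    SYSGEN> EXIT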

To view the counters that are incremented when IOs bypass the VIOC
use the System Dump Analyzer (SDA).  For example:

    $ ANALYZE/SYSTEM
    SDA> SHOW EXECUTIVE
     :         :
    ( Continue to press return until you find--+ )
                                               |
       +---------------------------------------+
       v
     SYS$VCC        <---- ( This is the definition for VIOC )
         Nonpaged read only       800FE000 80109800 <-- (VIOC code)
         Nonpaged read/write      855CE000 855CFA00 <-- (VIOC data)
                                      v
                                      |
                      +---------------+
                      v
    SDA> EVALUATE @(855CE000+nn)
                             ^
                             |
             +---------------+
             ^
            24 for read IO Bypassing the cache due to the function,
            28 for read IO Bypassing the cache due to the size

The "function" of an IO is defined in a data structure for the IO
called the IO Request Packet (IRP) and is almost exclusively
application dependent.  No manipulation of cache parameters can
overcome how an application chooses to perform an IO.


Program to Reset selected VIOC Counters:                                      
----------------------------------------                                      

  Note:
    There are 4 lines of this code that have been formatted to
    conform to the 80 column width of the article.  These lines
    must be rectified to 132 columns before this code can be
    linked or run.  The lines in question are terminated by
    a hyphen (-), with the remainder continuing on the following line.

        .title VIOC_RST_CNT-AXP
        .ident   /V2.0/
;*****************************************************************************
;                            COPYRIGHT (C) 1994 BY
;                      DIGITAL EQUIPMENT CORPORATION, MAYNARD
;                       MASSACHUSETTS.  ALL RIGHTS RESERVED.
;
;    THIS SOFTWARE IS FURNISHED UNDER A LICENSE AND MAY BE USED AND COPIED
;    ONLY IN ACCORDANCE WITH THE TERMS OF SUCH LICENSE AND WITH THE INCLUSION
;    OF THE ABOVE COPYRIGHT NOTICE.  THIS SOFTWARE OR ANY OTHER COPIES
;    THEREOF MAY NOT BE PROVIDED OR OTHERWISE MADE AVAILABLE TO ANY OTHER
;    PERSON.  NO TITLE TO AND OWNERSHIP OF THE SOFTWARE IS HEREBY TRANSFERRED.
;
;    THE INFORMATION IN THIS SOFTWARE IS SUBJECT TO CHANGE WITHOUT NOTICE AND
;    SHOULD NOT BE CONSTRUED AS A COMMITMENT BY DIGITAL EQUIPMENT CORPORATION.
;
;    DIGITAL ASSUMES NO RESPONSIBILITY FOR THE USE OR RELIABILITY OF ITS
;    SOFTWARE ON EQUIPMENT THAT IS NOT SUPPLIED BY DIGITAL.
;
;    NO RESPONSIBILITY IS ASSUMED FOR THE USE OR RELIABILITY OF SOFTWARE
;    ON EQUIPMENT THAT IS NOT SUPPLIED BY DIGITAL EQUIPMENT CORPORATION.
;
;    SUPPORT FOR THIS SOFTWARE IS NOT COVERED UNDER ANY DIGITAL SOFTWARE
;    PRODUCT SUPPORT CONTRACT, BUT MAY BE PROVIDED UNDER THE TERMS OF THE
;    CONSULTING AGREEMENT UNDER WHICH THIS SOFTWARE WAS DEVELOPED.
;
;****************************************************************************
;*                                                                          *
;*      This program is provided AS IS and as such offers no warranty,      *
;*      implied or otherwise by DIGITAL EQUIPMENT CORPORATION or any        *
;*      of it's employees.  Any loss of services or damages incurred by     *
;*      the use of this program are the sole responsibility of those        *
;*      authorizing it's execution.                                         *
;*                                                                          *
;****************************************************************************
;
; Reg Hunter - Digital Customer Support Center/Colorado Springs
; 9-APR-1996
;
; This routine resets the counters that determine the hit% for the VIOC.
; The counters are located at offsets from CACHE$ACCESS.
;
; The offset value used in OpenVMS ALPHA 6.1 is 1188(hex), for an 
; *unpatched* version of SYS$VCC.EXE.  Use the following steps to 
; determine what the correct offset value would be:
;
;       $ ANALYZE/SYSTEM
;       SDA> EVALUATE CACHE$ACCESS-SYS$VCC_NPRW
;        Hex: 00001188       Dec: 00004488
;                   ^
;                   |
;    *** Use the displayed hexadecimal value in the following ***
;    *** 2 locations:                                         ***
;    ***                                                      ***
;    ***  OFFSET:    .LONG   ^X<offset>                       ***
;    ***  subl2      #^x<offset>,r2                           ***
;    ************************************************************
;
; Once the routine has been edited with the proper offset value do:
;
;      $ MACRO VIOC_RST_CNT-AXP
;      ( Ignore the AMAC-I-BRANCHBET informational message. )
;      $ LINK/SYSEXE/SYSLIB/SYSSHR VIOC_RST_CNT-AXP
;      $ RUN VIOC_RST_CNT-AXP
;
;   The counters to be reset are:
;      (CACHE$ACCESS-<offset>)
;           + 18 for CACHE$GL_VREAD, the Read IO counter
;           + 1C for CACHE$GL_READHIT, the Read Hit counter
;           + 20 for CACHE$GL_VWRITE, the Write IO counter
;           + 24 for CACHE$GL_RRNDMOD, Read IO bypassing VIOC due to 
;                function counter
;           + 28 CACHE$GL_RRNDSIZ, Read IO bypassing VIOC due to 
;                size counter
;           + 2C CACHE$GL_WRNDMOD, Write IO bypassing VIOC due to 
;                function counter
;           + 30 CACHE$GL_WRNDSIZ, Write IO bypassing VIOC due to 
;                size counter
;
;-------------------------------------------------------------------------------
.library        /sys$library:lib.mlb/
.link           /sys$system:sys.stb/
;
; USER mode routine to call the KERNEL mode routine which actually
; does all the work.
;
        .psect noexe
;
; Output warning and prompt for reply.
;
warn_mess:      .ascid  /*************** THIS ROUTINE MAY CAUSE A SYSTEM -
CRASH ***************** /
warn_mess1:     .ascid  /This routine must be linked with the correct -
offset into the VIOC code. /
warn_mess2:     .ascid  /This version of the routine has been linked with -
an offset of !8XL. /          ;;
warn_mess3:     .ascid  /To determine the correct offset, access SDA and -
execute the command, /
warn_mess4:     .ascid  /"SDA> EVALUATE CACHE$ACCESS-SYS$VCC_NPRW". /
prompt:         .ascid  /Do you want to continue? (y-n)[n]: /
resp:           .long   80
                .address resp_len
resp_len:       .blkb   80
respc:          .long   1
OFFSET:         .LONG   ^X<offset>
;
        .PSECT  code,exe
        .entry  start, ^m<r2,r3,r4>
;
begin:
        pushal warn_mess                        ; Output the warning message.
        calls  #1, g^lib$put_output             ;
        pushal warn_mess1                       ;
        calls  #1, g^lib$put_output             ;
        movc5   #0,#0,#20,#80,resp_len          ;; zero buffer
        movl    #80,resp                        ;;
        $fao_s  ctrstr=warn_mess2,-             ;;
                outbuf=resp,-                   ;;
                p1=offset                       ;;
        pushal  resp                            ;;
        calls   #1, g^lib$put_output            ;;
        pushal warn_mess3                       ;
        calls  #1, g^lib$put_output             ;
        pushal warn_mess4                       ;
        calls  #1, g^lib$put_output             ;
        pushal resp                             ; Prompt for a reply
        pushal prompt                           ;
        pushal resp                             ;
        calls  #3, g^lib$get_input              ;
        cmpb   #^A/n/,resp_len                  ;; Check the replies. "n"
        beql   donenow                          ;;
        cmpb   #^A/N/,resp_len                  ;; "N"
        beql   donenow                          ;;
        cmpb   #^A/ /,resp_len                  ;; " ",<CR>
        beql   donenow                          ;;
        cmpb   #^A/y/,resp_len                  ;; "y"
        beql   resetum                          ;;
        cmpb   #^A/Y/,resp_len                  ;; "Y"
        beql   resetum                          ;;
        brw    donenow
;
; CHKRNL and call reset_counts routine.
;
resetum:
        $cmkrnl_s       routin=reset_counts
        brw    donenow
;
; KERNEL mode routine to reset all VIOC statistic counters seen
; from the SHOW MEMORY/CACHE/FULL command.
;
        .entry  reset_counts, ^m<r2>
        lock  lockname=MMG           ; lock MMG spinlock(synchronize)
;
; In order to find the head of the array that contains the VIOC counters
; we need to start with a cache global location known to the system.
; 
; (There were many to choose from.  I chose CACHE$ACCESS, a global)
; (address of a routine in SYS$VCC, because I liked the name.     )
;
        moval   g^cache$access,r2    ; get value to help locate 
                                     ; vcc database address
;
; Once we have the global address we need to fix the pointer to the
; head of AXPs VIOC array.  This fix is done by subtracting the proper
; value.
;
; The proper <offset> value must be included before this routine will
; MAC or LINK.
;
        subl2   #^x<offset>,r2       ; point to top of array
;
; Reset the counters.
;
        clrl    ^x18(r2)             ; Reset Read count
        clrl    ^x1c(r2)             ; Reset Read hit
        clrl    ^x20(r2)             ; Reset Write cnt
        clrl    ^x24(r2)             ; Reset rrnd func
        clrl    ^x28(r2)             ; Reset rrnd siz
        clrl    ^x2c(r2)             ; Reset wrnd func
        clrl    ^x30(r2)             ; Reset wrnd siz
;
        unlock   lockname=MMG        ; Desynchronize
        ret
;
; Don't run, or finished.
;
donenow:
        $exit_s
        .end  start


RELATED ARTICLES:

Other articles in the OPENVMS database describe more on how the
Virtual I/O Cache functions, on both OpenVMS VAX and Alpha.
These articles can be found using search strings of:

        VIOC Virtual I/O Cache Improves Read Performance
        VIOC SHOW MEMORY/CACHE/FULL VAX
        MEMORY RECLAMATION FROM VIOC

582.8. "#1 Warp 200 Megabytes and Engage!" by ODIXIE::RREEVES () Fri May 16 1997 07:55, 15 lines
    My statistics in .2 are for a system dedicated to running Intersystems
    MUMPS (this product sucks, to put it mildly). The wonderful VAR that
    the customer bought the app from is IDX (which most of the time hasn't
    a clue either). No one even knew about Virtual I/O caching there. Their
    suggestion is to try it and see if it helps.  
    
    Support by throwing stuff at the wall and seeing what sticks. 
    
    So boys and girls, we are warping up to a 200 Megabyte cache on the 
    next reboot and we are going to monitor cache stats. I/O on this system 
    boils down to multiple queries reading one large file (6 Gigabytes) 
    over and over again. I'll let you all know if this improves I/O
    performance. After all, the customer already paid for the memory and 
    it isn't being used for anything else. 
                           
582.9. "Engage?! It would not for us." by FIEVEL::FILGATE (Bruce Filgate SHR3-2/W4 237-6452) Fri May 16 1997 10:04, 21 lines
 We have tried a few times to get the cache to use more of the available
 memory on the server, but always have 25-40% of memory left `free'.  For
 instance, at the moment we have

Physical Memory Usage (pages):     Total        Free      In Use    Modified
  Main Memory (160.00Mb)          327680      130120      194330        3230

and

Virtual I/O Cache
    Total Size (pages)           33522    Read IO Count               367062
    Free Pages                     957    Read Hit Count               56573
    Pages in Use                 32565    Read Hit Rate                   15%
    Maximum Size (SPTEs)        243515    Write IO Count               11013
    Files Retained                 100    IO Bypassing the Cache      230769

and from sysgen
VCC_MAXSIZE            2000000000 2000000000         02000000000 Pages      D


582.10. "Not saying VIOC won't help, but DSM studies showed VIOC slowed down DSM" by KEIKI::WHITE (MIN(2¢,FWIW)) Fri May 16 1997 18:23, 4 lines
    
    	MUMPS aka DSM also has its own DATA cache.
    
    					Bill

582.11. "Not DSM" by ODIXIE::RREEVES () Wed May 21 1997 21:27, 1 line
    It's not DSM, it's Intersystems' product.