Conference vaxaxp::vmsnotes

Title:VAX and Alpha VMS
Notice:This is a new VMSnotes, please read note 2.1
Moderator:VAXAXP::BERNARDO
Created:Wed Jan 22 1997
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:703
Total number of notes:3722

318.0. "can't create file of certain file name" by HTSC12::MICKWIDLAM (Water addict, water man) Thu Mar 13 1997 04:57

I have a customer with a bound volume set of two disks. Both disks have a lot
of free space left. They found that in a certain directory some files cannot be
created; the attempt fails with "acp file create failed" and "allocation
failure on directory".

He also found that only files with names starting with S, T, U or G cannot be
created; other files can be created at any time. In this directory he has a lot
of small files with names starting with U and H, and he suspects this is what
causes the problem. He has no problem creating files in the other directories.
The problem does not look like the usual contiguous free space problem, as
the "map area words in use" is only 12.

Anyone seen similar case before?

Thanks,
Mickwid.
318.1. by UTRTSC::thecow.uto.dec.com::JurVanDerBurg (Change mode to Panic!) Thu Mar 13 1997 08:15 -- 11 lines
>The problem does not look like the usual contiguous free space problem, as
>the "map area words in use" is only 12.

Yes it is. This has nothing to do with the number of mapping pointers in use.
In this case the XQP has to extend the directory file, and it lacks contiguous 
space on disk to do so.
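
A quick way to see the mismatch between "lots of free space" and "no contiguous
space" (the device and directory names below are placeholders, not the
customer's):

  $ DIRECTORY/SIZE=ALL DKA100:[000000]PROJECT.DIR   ! how big the directory has grown
  $ SHOW DEVICE DKA100:   ! total free blocks -- says nothing about contiguity

The free-block count can be huge while the largest contiguous stretch is still
smaller than the directory, and it is the latter that the XQP needs for the
extend.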

Defrag the disk.

Jur.

318.2. "How Big Is This Directory?" by XDELTA::HOFFMAN (Steve, OpenVMS Engineering) Thu Mar 13 1997 09:21 -- 32 lines
   As .1 says, defragment the disk.  Use BACKUP. 
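
   A minimal sketch of the image save-and-restore cycle (the device and
   save-set names are made up, and for a bound volume set the whole set has
   to go into the image save).  The image restore re-creates the files
   contiguously, which is what does the defragmenting:

      $ MOUNT/FOREIGN MKA500:                       ! scratch tape / spare disk
      $ BACKUP/IMAGE/VERIFY DKA100: MKA500:VOL.BCK/SAVE_SET
      $ DISMOUNT MKA500:
      $ ! later, with the volume quiesced and the save-set verified:
      $ DISMOUNT DKA100:
      $ MOUNT/FOREIGN DKA100:
      $ MOUNT/FOREIGN MKA500:
      $ BACKUP/IMAGE/VERIFY MKA500:VOL.BCK/SAVE_SET DKA100: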

   The "map area words in use" from DUMP/HEADER, halved, is the number
   of fragments the file is presently in.  Further down the same display,
   you can get an idea of the various sizes of the fragments.
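
   For example (the file name below is just an illustration):

      $ DUMP/HEADER/BLOCK=COUNT=0 DKA100:[000000]BIGDIR.DIR;1

   The output includes the "Map area words in use" count and, further down,
   one retrieval pointer per extent (a block count and a starting LBN),
   which gives the size of each fragment.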

   Look for files that grow consistently over time -- log files, indexed
   files, etc. -- and determine if increasing the default extent size on
   the file(s) or process(es) will reduce the fragmentation.
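
   Two hedged examples of turning that knob (the file name and values are
   only illustrative): SET FILE/EXTENSION changes the extend quantity
   recorded in one file's header, while SET RMS_DEFAULT/EXTEND_QUANTITY
   changes the process -- or, with /SYSTEM, the system-wide -- default:

      $ SET FILE/EXTENSION=512 DKA100:[APP]TRANSACT.LOG   ! extend 512 blocks at a time
      $ SET RMS_DEFAULT/EXTEND_QUANTITY=128                ! this process only
      $ SET RMS_DEFAULT/EXTEND_QUANTITY=128/SYSTEM         ! system-wide (privileged)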

   I'd seriously consider adding disk storage -- disk storage is presently
   relatively cheap -- as keeping consistently sufficient free space, and
   partitioning the scratch and volatile storage files away from the static
   and incrementally-growing files, can help avoid or greatly reduce the
   fragmentation.  (Some amount of file fragmentation is entirely normal
   and expected -- and good -- behaviour.)

   As directory files are expected to be contiguous, a fragmentation value
   of six (based on a map area value of twelve) would tend to indicate a
   very large directory file -- please encourage the customer to use
   multiple subdirectories, and -- when creating large numbers of files
   in a directory (under program control) -- to generate filenames that
   would place the new filename(s) at the end of the directory.  Both
   techniques can improve the performance of the directory files.
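
   As a hedged illustration of the second point (the logical name and the
   filename pattern here are invented), a procedure that names its output
   with a zero-padded, ever-increasing sequence number will always add new
   entries at the alphabetical end of the directory:

      $ ! WORK_SEQ is a made-up counter logical; starts at 0 if undefined
      $ seq = F$INTEGER(F$TRNLNM("WORK_SEQ")) + 1
      $ DEFINE/JOB WORK_SEQ 'seq'
      $ name = F$FAO("WORK_!8ZL.DAT", seq)     ! e.g. WORK_00000042.DAT
      $ COPY NL: 'name'                        ! create the (empty) file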

   --

   (When asking questions, it's also often useful to include the OpenVMS
   version and the platform.  And in this case, the total size of the
   directory -- entries and blocks -- would be of interest.)

318.3. "how about the other files?" by HTSC12::MICKWIDLAM (Water addict, water man) Thu Mar 13 1997 10:23 -- 9 lines
    re .1
    
    Then how can I explain that other files can be created in the same
    directory? Is there any kind of hashing/grouping of files in the
    directory, so that adding some files causes the directory file to expand
    while adding other files does not?
    
    Regards,
    Mickwid.
318.4. by EPS::VANDENHEUVEL (Hein) Thu Mar 13 1997 10:49 -- 25 lines
    Directory files are not dense (looking at the records; looking at
    the concepts, some might disagree :^). Records are stored sorted
    and 'blocked' in 1-block units. If one removes a directory entry,
    then the space for that entry (the whole record, if there is only one
    version) becomes available for reuse in that block -- roughly speaking,
    only for entries with a similar name. Once the last entry in a block
    is removed, the whole block is shuffled out of the directory.
    This makes room for any new name.
    You may well find that
    	- you _can_ add the impossible entry by deleting / removing
    	  files with a very similar name (lots of leading characters
    	  in common)
    	- you can NOT add many more files with a name starting with
          the same characters as you used for the successful trials
    	- you _can_ add the impossible entry by deleting a series
    	  of names close to one another, but distant from the test name
    	- you have no more problem at all because someone deleted
    	  a large contiguous file.
    
    To experiment with all this I recommend $ DUMP/DIRECTORY
    and the freeware tool DFU (REPORT diskname).
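
    For instance (the device and directory names are placeholders, and the
    foreign-command definition assumes you know where the DFU image lives):

        $ DUMP/DIRECTORY DKA100:[000000]PROJECT.DIR   ! directory records, block by block
        $ DFU :== $DKA100:[TOOLS]DFU.EXE              ! made-up path to the freeware image
        $ DFU REPORT DKA100:                          ! fragmentation / free space report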
    
    Hein.
    
    
318.5. by AUSS::GARSON (DECcharity Program Office) Thu Mar 13 1997 17:03 -- 20 lines
re .2,.0
    
>   The "map area words in use" from DUMP/HEADER, halved, is the number
>   of fragments the file is presently in.
    
    This is not strictly correct.
    
    The map area words in use is the number of 16-bit words in the map area
    that are in use. How many fragments this equates to depends on the
    format that each retrieval pointer is using. In the most compact format
    (format 1 = 1 longword = 2 words = 1 retrieval pointer) the claim is true,
    but if format 2 is in use, e.g. when an extent's length exceeds 255 blocks
    (not an uncommon situation), then 1 retrieval pointer = 3 words.
    Similarly for format 3, 1 retrieval pointer = 4 words. Obviously if
    there is a mix of formats in use in one file header then there is no
    simple conversion from "words in use" to "number of extents".
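
    To make the ambiguity concrete: 12 words in use could, for example, be
    any of

        6 format-1 pointers            6 x 2 = 12 words   (6 extents)
        4 format-2 pointers            4 x 3 = 12 words   (4 extents)
        3 format-3 pointers            3 x 4 = 12 words   (3 extents)
        3 format-1 + 2 format-2    3x2 + 2x3 = 12 words   (5 extents)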
    
    For directories, of necessity being contiguous, the map area words in
    use *must* be 2, 3 or 4 (for exactly 1 retrieval pointer), so I can only
    assume that the quoted value applies to some other file, e.g. INDEXF.SYS.
318.6. "Defragment the disk!" by UTRTSC::DORLAND (The Wizard of Odz2) Fri Mar 14 1997 02:20 -- 13 lines
    If you want some hard figures, DFU is the tool to use.
    The REPORT <device> command will tell you quite a few
    things, such as a judgement of the fragmentation level
    and the largest contiguous free space. If this last figure
    is more or less the size of the directory (or smaller), then you are
    in trouble: extending this directory will not succeed, as the
    XQP is no longer able to allocate enough contiguous space.
    
    DFU also has an option to compress directories; this may
    help you out for the time being. However, the bottom line
    is that you need to defragment the disk.
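
    As a sketch (the directory name is a placeholder, and the exact
    qualifier is worth checking against DFU's own help):

        $ DFU DIRECTORY/COMPRESS DKA100:[000000]PROJECT.DIR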
    
    Rgds, Ton