
Conference iosg::all-in-1_v30

Title:*OLD* ALL-IN-1 (tm) Support Conference
Notice:Closed - See Note 4331.1 to move to IOSG::ALL-IN-1
Moderator:IOSG::PYE
Created:Thu Jan 30 1992
Last Modified:Tue Jan 23 1996
Last Successful Update:Fri Jun 06 1997
Number of topics:4343
Total number of notes:18308

999.0. "Bucket size, cluster size, and SDAFs" by KAOFS::M_MORIN (Le diable est aux vaches!) Tue Jul 07 1992 19:49

While doing a $ ANAL/RMS/FDL OA$SHARE:OA$DAF_E.DAT, I noticed that the bucket
size for this file is 4.  On most disks nowadays, the CLUSTERSIZE is set to 3
by default.

In an RMS course I attended recently, it was mentioned that the bucket size
for a file should always be a multiple of the CLUSTERSIZE of the disk.

Since a bucket dictates how much data is read when doing the actual physical
I/O, are we not being inefficient here by setting the bucket size to 4?
Shouldn't it be set to 6 instead?

I can see the case where adding a DAF record may require the allocation of
another data bucket, resulting in an I/O of 6 blocks for a 4-block bucket.
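The allocation arithmetic can be sketched quickly (Python purely as an illustration; the 3-block cluster and 4-block bucket are the sizes discussed above, and RMS allocating file space in whole clusters is the assumption being questioned):

```python
import math

# RMS allocates file space in whole disk clusters, so extending the
# file by one bucket rounds the request up to a cluster multiple.
cluster_size = 3   # blocks, the common CLUSTERSIZE default
bucket_size = 4    # blocks, as reported by ANAL/RMS/FDL

clusters_needed = math.ceil(bucket_size / cluster_size)
blocks_allocated = clusters_needed * cluster_size
print(blocks_allocated)  # 6 blocks allocated for a 4-block bucket
```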

Or have I missed something?

Mario
999.1. "Cluster size not transfer size" by IOSG::DAVIS (Mark Davis) Wed Jul 08 1992 10:34 (21 lines)
    
    
       So far as I understand it the disk cluster size has nothing whatsoever 
       to do with transfer size. In general the disk cluster size should be 
       ignored as a factor when determining the bucket size of a file.
    
       The disk cluster size does determine the allocation size of the file, 
       which has to be a multiple of the disk cluster size. If you have a 
       badly fragmented file, then buckets may straddle extents, possibly 
       leading to more split I/Os.
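That straddling effect can be sketched as follows (Python for illustration only; the two 9-block extents are made-up values standing in for a fragmented file's retrieval pointers):

```python
# Hypothetical extents of a fragmented file, in virtual-block order:
# (starting LBN, length in blocks).  The LBNs themselves don't matter
# here, only where one extent ends and the next begins.
extents = [(1000, 9), (5000, 9)]
bucket_size = 4

# Virtual block numbers at which an extent boundary falls.
boundaries = []
vbn = 0
for _, length in extents[:-1]:
    vbn += length
    boundaries.append(vbn)

def straddles(bucket_index):
    """True if this bucket crosses an extent boundary (a split I/O)."""
    start = bucket_index * bucket_size
    return any(start < b < start + bucket_size for b in boundaries)

total_blocks = sum(length for _, length in extents)
split = [i for i in range(total_blocks // bucket_size) if straddles(i)]
print(split)  # bucket 2 (blocks 8-11) straddles the boundary at block 9
```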
    
       You may find it advantageous to increase the bucket size of your SDAF 
       anyway. If messages have long distribution lists then continuation 
       SDAF records may be created. You may want to increase the bucket size 
       to ensure that these records are read in one I/O. 
    
    
       				Mark 	
    
    
       
999.3. "Instructor's reply" by KAOFS::M_MORIN (Le diable est aux vaches!) Thu Jul 09 1992 16:47 (21 lines)
Here is the answer I got back from the instructor in case anyone is curious:


   Yes, the bucket size should be a multiple of the disk cluster size.
   However, not because the disk cluster size is the size of a transfer.
   For an indexed file, the size of the transfer is a bucket.
   The main reasons the bucket size should be a multiple of the disk
   cluster size are 1) to avoid dead/unusable space in the file, and 2)
   to limit the possibility of split I/Os.
   Of course, all of this depends on an appropriate cluster size having been
   selected. An appropriate cluster size is one based on the number of
   sectors in the track: can the number of sectors in a track be evenly
   divided by the cluster size?  If sectors per track is 33 and the cluster
   size is 3, that is a good cluster size, and a good bucket size is also
   3. However, 6 would not be a good bucket size, because a track would then
   hold 5.5 buckets, and that last bucket would require a split I/O to process.
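The track arithmetic above checks out (Python used purely as a calculator, with the numbers from the instructor's reply):

```python
# With 33 sectors (blocks) per track, a bucket size that divides 33
# evenly never crosses a track boundary; one that doesn't leaves a
# final bucket spanning two tracks.
sectors_per_track = 33

for bucket_size in (3, 6):
    buckets_per_track = sectors_per_track / bucket_size
    whole = buckets_per_track == int(buckets_per_track)
    print(bucket_size, buckets_per_track, "ok" if whole else "split")
# 3 -> 11.0 buckets per track, no split
# 6 -> 5.5 buckets per track, the last bucket crosses a track boundary
```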


Regards,

Mario