
Conference iosg::all-in-1_v30

Title:*OLD* ALL-IN-1 (tm) Support Conference
Notice:Closed - See Note 4331.l to move to IOSG::ALL-IN-1
Moderator:IOSG::PYE
Created:Thu Jan 30 1992
Last Modified:Tue Jan 23 1996
Last Successful Update:Fri Jun 06 1997
Number of topics:4343
Total number of notes:18308

1486.0. "Structure of SDAF" by CROCKE::YUEN (Banquo Yuen, Darwin Australia) Wed Sep 23 1992 10:30

    Hello
    
    I have a problem with a fast-expanding SDAF.  Taking a look at the
    SDAF, I found that there is more than one entry for the same share-file.
    
    Question 1:
    Is it because, if the share-file belongs to 10 people, there will
    be 10 records in the SDAF?
    
    Question 2:
    If the answer to question 1 is yes, will the distribution list be
    repeated in each of the 10 records?
    
    Thank you very much
    Banquo.
1486.1. "A possible answer" by GIDDAY::SETHI (Man from Downunder) Wed Sep 23 1992 10:57
    Hi Banquo,
    
    >Question 1:
    >Is it because, if the share-file belongs to 10 people, there will
    >be 10 records in the SDAF?
    
    From what I have been told, the SDAF has a structure as follows:
    
    <share_file><list of recipients><pointer>.  The record is 2000 bytes
    long, and the pointer field is used if there are a large number of
    recipients; it contains the rest of the recipient names, and so on.
    
    I hope this is correct; if not, I am sure someone will put me right :-)
    
    Sunil
1486.2. "SDAF key should be unique" by WOTVAX::DORAN (A Confuse-a-cat Ltd) Wed Sep 23 1992 10:59
    Banquo,
    
    The SDAF key (the shared file name - i.e. OA$SHARE123:ZYYTU9UJN.WPL)
    should be unique.  There is a field in the SDAF called USAGE_COUNT that
    shows the number of people who 'have' the mail in their account.
    
    If you have multiple occurrences of the same key, I think that you
    should speak to the CSC, as there could be some corruption of the
    SDAF...
    
    Cheers,
    
    ANdy
1486.3. "Continuation records" by IOSG::MAURICE (Ceci n'est pas une note) Wed Sep 23 1992 11:51
    Hi Banquo,
    
>    Question 1:
>    Is it because, if the share-file belongs to 10 people, there will
>    be 10 records in the SDAF?
    
    No!
    
    You will get more than one entry with what looks like the same SDAF key
    only if there is not room to fit all the attribute information into
    2000 bytes. When this happens the File Cabinet code creates
    continuation records with the same key except for a continuation flag
    at the end of the key. There is no duplication of attribute information
    in the continuation record(s) - only the filename portion of the key
    repeats.
    
    If your customer has a problem with the size of the SDAF then get the
    customer to consider:
    
    a) Analyse the file - perhaps it can be tuned.
    
    b) Consider having multiple SDAFs - up to five are allowed.
    
    c) Consider purchasing the Mail Janitor utility - this will allow the
       System Manager to have old mail messages deleted.  There's also a
       utility around that unshares mail messages with a usage count of 1,
       which is very effective at reducing SDAF size, but I'm not sure what
       its availability is.
    
    d) Run TRM patched up to date and with all delete options set.
    
    Cheers
    
    Stuart
1486.4. "Bucket size OK ??" by AIMTEC::VOLLER_I (Gordon (T) Gopher for President) Wed Sep 23 1992 16:48
    Banquo,
    
    	Further to Stuart's reply ...
    
    	A possible cause of a fast-growing SDAF is having an unsuitable
    	size set for the file's data buckets.
    
    	Use $ANALYZE/RMS/STATISTICS to gather more information about the
    	file.  If you see that the data bucket size is 4 (the default) and
    	that you are typically writing large records (large distribution
    	lists) to the SDAF, then consider increasing the data bucket size.
    	Check how full each bucket is from the $ANALYZE output.  Maybe
    	try 12 as a first guess, and continue to use ANALYZE/RMS to monitor
    	file usage etc.  Note: increasing the bucket size means that you
    	will need to increase the GBLPAGES and GBLPAGFIL system parameters
    	accordingly, i.e. number of GBLPAGES = bucket size * number of
    	global buffers.
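    	For illustration only, the edited FDL file might end up containing
    	something like the fragment below.  This is a hypothetical sketch:
    	generate the real file with ANALYZE/RMS/FDL on your actual SDAF,
    	and change nothing except the bucket size; all other values here
    	are made up.

```
! Hypothetical fragment of an FDL file for the SDAF (not a complete
! FDL file).  Only BUCKET_SIZE is changed from the analysed values.
AREA 0
        BUCKET_SIZE             12      ! default was 4
```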
    
    Cheers,
    
    Iain.
1486.5. "Confused by RMS and ALL-IN-1" by CROCKE::YUEN (Banquo Yuen, Darwin Australia) Thu Sep 24 1992 09:54
    Hello                                          
    
    Thanks for your immediate response, but I am still confused!
    
    If the continuation record is managed by ALL-IN-1 rather than RMS,
    how come, if the continuation record is not in the same data bucket,
    the space cannot be reclaimed when deleted (to quote another note)?
    And how come it can affect data bucket utilization?
    I can see that there will be more IO to and from disk if they are not
    in the same data bucket.
    
    Anyway, I will try to increase the data bucket size to 4, as well as
    using RMS/ANA to see what it suggests.
    
    Thank you very much
    Banquo.
1486.6. "Record Mangling Services" by SCOTTC::MARSHALL (Do you feel lucky?) Thu Sep 24 1992 12:30
Hi,

>> If the continuation record is managed by ALL-IN-1 rather than RMS

As far as ALL-IN-1 is concerned, the continuation record is an ordinary RMS
record.  How it is managed on disk is up to RMS.  RMS has no knowledge of
continuation records; as far as RMS is concerned, it's just another record.

>> if the continuation record is not in the same data bucket, the
>> space cannot be reclaimed when deleted

This is a "feature" of RMS, and nothing to do with whether or not records are
continuation records, or whether continuation records are in the same bucket
as the main record.  RMS will not (in fact  it cannot because of the way buckets
work) reuse deleted space in a partially-empty bucket.  The bucket cannot be
reused until it is totally empty (explanation why on request).

Hence the reason for Iain's suggestion to tune the bucket size, to ensure
optimum bucket usage.

>> how come it can affect data bucket utilization

Again, an RMS "feature".

>> there will be more IO to and from disk if they are not
>> in the same data bucket

The SDAF has so much I/O on a busy system that this is just background noise,
I suspect.

>> I will try to increase the data bucket size to 4

No.  The default is 4.  Increase it to more than that!

>> using RMS/ANA to see what it suggests

Use ANALYZE/RMS/FDL to generate an FDL file for your existing SDAF, then change
that as appropriate, and run CONVERT/FDL to rebuild the file with the new
attributes.
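
As a sketch, the whole sequence might look like this at DCL.  The SDAF file
specification here is made up - substitute the real one on your system - and
CONVERT rebuilds the file, so run it against a copy first, or with ALL-IN-1
shut down:

```
$ ANALYZE/RMS/FDL OA$DATA:SDAF.DAT        ! writes SDAF.FDL
$ EDIT SDAF.FDL                           ! e.g. raise BUCKET_SIZE in AREA 0
$ CONVERT/FDL=SDAF.FDL OA$DATA:SDAF.DAT OA$DATA:SDAF.DAT
```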