
Conference jamin::vms-for-mac

Title:PATHWORKS for Macintosh & PATHWORKS for VMS (Macintosh)
Notice:Mac client 1.3.5 kit see note 9.2. MacX 1.5 kit see note 9.5
Moderator:UNIFIX::HARRIS
Created:Fri Jan 26 1990
Last Modified:Thu Jun 05 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:4033
Total number of notes:16065

4012.0. "how to determine tuning params for Appletalk" by VMSNET::J_FILLERS () Fri Feb 21 1997 19:19

    Any thoughts on tuning the specific parameters below, and on how to
    actually determine whether a change was helpful or harmful?
    
    Thanks
    
    jim fillers
    pathworks support
    csc atlanta
    
    ----------------------------------------------------------------
    
                                                                                
In the PATHWORKS for VMS (Macintosh) Server Administrator's Guide there are
several sections on tuning and performance. They mention a number of
parameters to raise to improve performance, but say nothing about how to
determine whether the parameters need to be raised, or how to evaluate any
changes once they have been made. Specifically, I am interested in the
following parameters:
                                                                                
        -  Connection Receive Queue Length                                      
        -  Transaction Queue Limit                                              
        -  Transaction Response Cluster                                         
        -  DDP Receive Buffers                                                  
        -  Datagram Queue Limit                                                 
                                                                                
My configuration consists of three VAX 4090s which act as front ends for a
back-end database. The 4090s communicate with the users' Macs via AppleTalk
and with the back end via DECnet Phase V. Each 4090 has 40 MB of memory
and is running OpenVMS 6.1. The reason I am investigating these parameters
is that a small percentage of transactions from the Macs to the front ends
experience severe delays of up to 300 seconds. This may very well be a
network issue, but all our parameters are at their default values and I want
to look into both areas. Normal volume over a three-day period averages
around 240,000 transactions per front end, of which roughly 2,500 (about
1 percent) see a network delay of more than 10,000 milliseconds.
Below are the versions of AppleTalk we are running.
                                                                                
        Component                       Version                                 
                                                                                
        Management interface            ATK S3.2-E01                            
        PS driver                       ATK V3.2                                
        Protocol stack                  ATK S3.2-E01                            
        AppleTalk ACP                   ATK V3.2                                
        AppleTalk messages              ATK V3.2                                
        Runtime library                 ATK V3.2
        Printer Runtime Library         ATK V3.2                                
        EV driver                       ATK V3.2                                
        GW driver                       ATKGW V1.3                              
        Gateway ACP                     ATKGW V1.3                              
        Gateway messages                ATKGW V1.3                              
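
For reference, here is one way to quantify a delay tail like this from a
per-transaction latency log; the format assumed here (one round-trip time in
milliseconds per line) is just for illustration, not what our application
actually logs:

import sys

def summarize(path, threshold_ms=10000.0):
    # Read one round-trip time (ms) per line and sort for percentile picks.
    times = sorted(float(line) for line in open(path) if line.strip())
    slow = sum(1 for t in times if t > threshold_ms)
    pick = lambda p: times[min(len(times) - 1, int(p * len(times)))]
    print(f"transactions: {len(times)}")
    print(f"over {threshold_ms:.0f} ms: {slow} ({100.0 * slow / len(times):.2f}%)")
    print(f"p50={pick(0.50):.0f} ms  p95={pick(0.95):.0f} ms  p99={pick(0.99):.0f} ms")

if __name__ == "__main__":
    summarize(sys.argv[1])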
                                                                                
                                                                                
Any assistance would be appreciated.                                            
                                                                                
    
4012.1. by AOSG::HITT (I break for ACCVIOs) Mon Feb 24 1997 17:58
WARNING: This is from memory.

Having read the above warning, and the fact that you're still reading this, you
can consider yourself a brave soul. ;-)

Basically, the answer is that you try changing the parameter, and if it helped,
it needed to be changed. I guess what I'm saying is that there is no way to
monitor ATK to see what parameters need to be changed. There are counters deep
within ATK which could be used to help determine what parameters might need to
be changed, but they aren't easily read (although if you really enjoy using SDA,
you could get at them - interpreting them is another matter).
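
If you do experiment, about the only meaningful yardstick is client-visible
latency measured before and after the change, under comparable load. A minimal
Python sketch of that comparison (the file names and the one-round-trip-time-
per-line millisecond format are assumptions, not anything the product produces):

def percentiles(path):
    # Percentile summary of one latency log (one value in ms per line).
    xs = sorted(float(line) for line in open(path) if line.strip())
    pick = lambda p: xs[min(len(xs) - 1, int(p * len(xs)))]
    return {p: pick(p) for p in (0.50, 0.95, 0.99)}

before, after = percentiles("before.log"), percentiles("after.log")
for p in before:
    print(f"p{int(p * 100)}: {before[p]:.0f} ms -> {after[p]:.0f} ms "
          f"({after[p] - before[p]:+.0f} ms)")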

As for the parameters mentioned, here's a brief commentary on each:

        -  Connection Receive Queue Length                                      

The note doesn't really say what protocol(s) is/are being used. This parameter
is only valid for ADSP and simply increases the amount of nonpaged pool used for
buffering data. If this parameter is increased, the amount of BYTLM charged to
processes which use ADSP will be increased accordingly (i.e. beware - you may
cause a process to get into an RWAST state if the process happens to handle a
large number of ADSP connections). As to performance increase from raising this
- probably minimal - certainly none if you're not using ADSP.
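
As a back-of-the-envelope check on the quota impact (every figure in this
Python sketch is a made-up assumption, not a product default):

connections = 200             # ADSP connections served by one process (assumed)
old_queue, new_queue = 8, 32  # receive queue length, in buffers (assumed)
buffer_bytes = 586            # assumed size of one buffered datagram

extra = connections * (new_queue - old_queue) * buffer_bytes
print(f"additional BYTLM charged: about {extra:,} bytes")
# If that exceeds the process's remaining BYTLM, it can hang in RWAST.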

        -  Transaction Queue Limit                                              

This applies only to ATP and *as I recall* only affects direct clients of ATP
(i.e. even though ASP is built upon ATP, this will not affect ASP). Similar to
the preceding parameter, this affects how many transactions can be placed
on a queue to be held until the process asks for them. Again, this will
increase process BYTLM used, so be careful. Performance impact - well, it may
help some, since it can prevent retransmits of transactions; however, if a
process has a high latency between getting one request and posting to receive
the next, it's probably swamped, and that is your performance problem, not the
network.
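
To see why a longer queue can't rescue a swamped process, here's a toy
single-server queue simulation in Python (purely my own illustration - it has
nothing to do with ATK internals): once requests arrive faster than the
process serves them, raising the limit mostly trades dropped requests for
longer waits.

import random
from collections import deque

def simulate(arrival_rate, service_rate, capacity, n=100000, seed=1):
    rng = random.Random(seed)
    now = last_done = 0.0
    pending = deque()                  # departure times of jobs in the system
    waits, drops = [], 0
    for _ in range(n):
        now += rng.expovariate(arrival_rate)
        while pending and pending[0] <= now:
            pending.popleft()          # these jobs have already finished
        if len(pending) >= capacity:
            drops += 1                 # queue full: request dropped/retransmitted
            continue
        last_done = max(now, last_done) + rng.expovariate(service_rate)
        pending.append(last_done)
        waits.append(last_done - now)  # response time the client sees
    return sum(waits) / len(waits), drops / n

for cap in (8, 32, 128):
    mean_rt, loss = simulate(arrival_rate=1.1, service_rate=1.0, capacity=cap)
    print(f"queue limit {cap:3d}: mean response {mean_rt:7.1f}s, drops {loss:.1%}")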

        -  Transaction Response Cluster                                         

This again has to do with ATP and how many responses the stack sends at one time
before waiting to send the next batch. It probably won't hurt to bump this up
but may not help matters. In some cases, the server can swamp the client and the
client will ask for a retransmit of the responses.
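
A toy Python model of that overrun (emphatically not ATP's actual
retransmission algorithm) shows how a larger cluster can put more packets on
the wire, not fewer, when the client can only absorb a few back-to-back:

def packets_on_wire(responses, cluster, client_absorbs):
    """Total packets sent until all response packets get through."""
    undelivered = list(range(responses))
    sent = 0
    while undelivered:
        burst = undelivered[:cluster]           # stack blasts one cluster
        sent += len(burst)
        kept = set(burst[:client_absorbs])      # client drops the overflow
        undelivered = [p for p in undelivered if p not in kept]
    return sent

for cluster in (2, 4, 8):
    print(f"cluster of {cluster}: {packets_on_wire(8, cluster, 3)} packets sent "
          f"to deliver 8 responses")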

I doubt that the customer is using ATP directly (at least I hope they're not -
it's got some, uh, well, how do I put it? - 'limitations' which don't make it
the ideal choice for a networking protocol).

        -  DDP Receive Buffers                                                  

This applies only to direct consumers of DDP (i.e. won't affect any other
protocol even though all other protocols are built on DDP). Since I doubt
they're using DDP, this really doesn't matter - but the idea is similar to
Connection Receive Queue Length and Transaction Queue Limit above.

        -  Datagram Queue Limit                                                 

This is a port parameter and affects how many packets the stack tells the
Ethernet driver to allow it to queue. Unlike the DDP parameter, this *does*
affect all protocols in ATK. If you raise it, it has the potential to reduce the
number of retransmits the clients may have to do. It can help somewhat, but it's
not likely to really affect the performance since if packets are being dropped,
it's likely to be because the system is extremely busy (i.e. many peaks of near
100% kernel time) and thus the retransmits are not likely to be the cause of any
major performance problem. In any event, unless you're running on a 6Meg
MicroVAX II, I'd bump this to the max. You're not talking about a huge amount of
memory, but it could reduce some network traffic.
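
For a feel of the numbers, here's a Python sketch of how much of a burst the
queue can absorb while the CPU is pegged (all figures are assumptions for
illustration, not measurements or defaults):

def dropped(stall_ms, packets_per_sec, queue_limit):
    # Packets arriving during the stall that exceed the queue are dropped.
    arrivals = packets_per_sec * stall_ms / 1000.0
    return max(0.0, arrivals - queue_limit)

for limit in (16, 64, 255):
    print(f"queue limit {limit:3d}: ~{dropped(50, 2000, limit):.0f} packets "
          f"dropped during a 50 ms stall at 2000 packets/s")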

In summary:

There's nothing to tell you which parameters to tune. And even if there were,
you could bump these parameters up and still not see much difference in
performance.

Don't blame me - I'm just calling it like I see it.

-Jeremy
4012.2. "Thanks for the info." by VMSNET::J_FILLERS () Mon Feb 24 1997 18:17
    Thanks for the info. I will pass it on to the user. 
    
    jim