WARNING: This is from memory.
Having read the above warning - and given that you're still reading this - you
can consider yourself a brave soul. ;-)
Basically, the answer is that you try changing the parameter, and if it helped,
it needed to be changed. I guess what I'm saying is that there is no way to
monitor ATK to see what parameters need to be changed. There are counters deep
within ATK which could be used to help determine what parameters might need to
be changed, but they aren't easily read (although if you really enjoy using SDA,
you could get at them - interpreting them is another matter).
As for the parameters mentioned, here's a brief commentary on each:
- Connection Receive Queue Length
The note doesn't really say which protocol(s) are being used. This parameter
is only valid for ADSP and simply increases the amount of nonpaged pool used for
buffering data. If this parameter is increased, the amount of BYTLM charged to
processes which use ADSP will be increased accordingly (i.e. beware - you may
cause a process to get into an RWAST state if the process happens to handle a
large number of ADSP connections). As to performance increase from raising this
- probably minimal - certainly none if you're not using ADSP.
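If you want to see the arithmetic, here's a rough sketch in plain C - not ATK
code, and all the sizes are invented - of how a bigger per-connection receive
queue multiplies the BYTLM charge:

/* Sketch only: each ADSP connection's receive queue is charged against
 * the owning process's byte-count quota (BYTLM).  Numbers are made up. */
#include <stdio.h>

int main(void)
{
    long bytlm       = 64 * 1024;  /* hypothetical process BYTLM        */
    long buf_size    = 586;        /* hypothetical ADSP buffer size     */
    int  connections = 40;         /* connections this process handles  */

    for (int queue_len = 1; queue_len <= 8; queue_len++) {
        long charged = (long)queue_len * buf_size * connections;
        printf("queue length %d: %ld bytes charged%s\n",
               queue_len, charged,
               charged > bytlm ? "  <- exceeds BYTLM, hello RWAST" : "");
    }
    return 0;
}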
- Transaction Queue Limit
This applies only to ATP and *as I recall* only affects direct clients of ATP
(i.e. even though ASP is built upon ATP, this will not affect ASP). Similar to
the preceding parameter, this affects how many transactions can be placed
on a queue to be held until the process asks for them. Again, this will
increase process BYTLM used, so be careful. Performance impact - well, it may
help some since it can prevent retransmits of transactions, however if a process
has a high latency between getting a request and posting to receive the next
one, it's probably swamped and that is your performance problem, not the network.
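Here's a toy model (the arrival and service rates are pure invention) of why
bumping the queue limit doesn't rescue a server that's already swamped:

/* Sketch only: a fixed-length transaction queue in front of a slow server.
 * Requests arriving while the queue is full are dropped and the client
 * has to retransmit.  Arrival/service rates are made up. */
#include <stdio.h>

int main(void)
{
    int limits[] = { 4, 8, 16 };
    int arrivals_per_tick = 3;     /* requests coming in                */
    int serviced_per_tick = 2;     /* requests the process gets through */
    int ticks = 100;

    for (int i = 0; i < 3; i++) {
        int queued = 0, dropped = 0;
        for (int t = 0; t < ticks; t++) {
            for (int a = 0; a < arrivals_per_tick; a++) {
                if (queued < limits[i]) queued++;
                else dropped++;               /* client must retransmit */
            }
            queued -= (queued < serviced_per_tick) ? queued : serviced_per_tick;
        }
        printf("queue limit %2d: %d forced retransmits\n", limits[i], dropped);
    }
    return 0;
}

No matter how big you make the queue, the drop rate settles at the same value,
because the process simply can't keep up.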
- Transaction Response Cluster
This again has to do with ATP and how many responses the stack sends at one time
before waiting to send the next batch. It probably won't hurt to bump this up
but may not help matters. In some cases, the server can swamp the client and the
client will ask for a retransmit of the responses.
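Something like this (invented numbers again):

/* Sketch only: the server releases responses in clusters; whatever the
 * client can't buffer has to be asked for again, so a bigger cluster is
 * not automatically a win. */
#include <stdio.h>

int main(void)
{
    int responses = 32;            /* responses owed to the client       */
    int client_buffers = 4;        /* what the client can absorb at once */

    for (int cluster = 2; cluster <= 8; cluster *= 2) {
        int sent = 0, retransmitted = 0;
        while (sent < responses) {
            int burst = (responses - sent < cluster) ? responses - sent : cluster;
            int kept  = (burst < client_buffers) ? burst : client_buffers;
            retransmitted += burst - kept;    /* overflow gets re-requested */
            sent += kept;
        }
        printf("cluster of %d: %d responses retransmitted\n",
               cluster, retransmitted);
    }
    return 0;
}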
I doubt that the customer is using ATP directly (at least I hope they're not -
it's got some, uh, well, how do I put it? - 'limitations' which don't make it
the ideal choice for a networking protocol).
- DDP Receive Buffers
This applies only to direct consumers of DDP (i.e. won't affect any other
protocol even though all other protocols are built on DDP). Since I doubt
they're using DDP, this really doesn't matter - but the idea is similar to
Connection Receive Queue Length and Transaction Queue Limit above.
- Datagram Queue Limit
This is a port parameter and affects how many incoming packets the stack lets
the Ethernet driver queue on its behalf. Unlike the DDP parameter, this *does*
affect all protocols in ATK. If you raise it, it has the potential to reduce the
number of retransmits the clients may have to do. It can help somewhat, but it's
not likely to really affect the performance since if packets are being dropped,
it's likely to be because the system is extremely busy (i.e. many peaks of near
100% kernel time) and thus the retransmits are not likely to be the cause of any
major performance problem. In any event, unless you're running on a 6Meg
MicroVAX II, I'd bump this to the max. You're not talking about a huge amount of
memory, but it could reduce some network traffic.
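For a sense of scale, a back-of-the-envelope calculation (the frame size is
just the Ethernet maximum, not anything measured from ATK):

/* Sketch only: memory cost of letting the driver queue more receive
 * packets.  Even the largest queue is a modest amount of buffer space. */
#include <stdio.h>

int main(void)
{
    long pkt_size = 1518;          /* max Ethernet frame, in bytes */
    int  limits[] = { 8, 16, 32, 64 };

    for (int i = 0; i < 4; i++)
        printf("queue limit %2d: about %ld KB of buffers\n",
               limits[i], (limits[i] * pkt_size + 1023) / 1024);
    return 0;
}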
In summary:
There's nothing to tell you which parameters to tune. And even if there were,
you could bump these parameters up and still not see much of a difference in
performance.
Don't blame me - I'm just calling it like I see it.
-Jeremy