T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
3431.1 | | IROCZ::D_NELSON | Dave Nelson LKG1-3/A11 226-5358 | Tue Feb 04 1997 09:50 | 13 |
| RE: .0
> A customer is using SLIP at 38400 baud to manage DEChubs. He sees a
> high rate of lost packets from the host system on Ethernet input.
> Is there any way to reduce these errors with some setting in the
> DECserver?
Exactly which counter are you looking at?
Regards,
Dave
|
3431.2 | DECServer 900 BufferSize and SLIP | ANNECY::CHALUMEAU | | Tue Feb 11 1997 03:08 | 18 |
| Follow-up to note 3431.0:
A customer is using SLIP at 38400 baud on a DECserver 900 to manage DEChubs
with HUBwatch. He sees a high rate of lost packets from the host system.
The error count is observed with the SHOW PORT xx SLIP COUNTERS command,
in the Send Packets Lost counter.
Is there any parameter to increase the buffer size of the ports
on the server, or another way to reduce these errors with
some setting in the DECserver 900?
Jean-Claude
|
3431.3 | more on the customer context | ANNECY::CHALUMEAU | | Wed Feb 12 1997 10:58 | 19 |
|
In addition to the previous note:
They have a huge network used for air traffic control. They use both
in-band and out-of-band management to monitor their network and modules.
They have no problem with in-band monitoring. Their only trouble is with
out-of-band management of some modules (I believe DEChub 900 chassis and
a few modules) using SLIP on DECserver 900 ports.
The problem is observed with the SHOW PORT command and also with the
monitoring command on the server. With monitoring they can see the
Send Packets Lost counter reporting errors when Send Packets Queued
increases. This is why they think these errors might disappear
with a larger buffer in the DECserver.
They have their own application that constantly monitors the network
with SNMP requests. These errors seem to be a real problem for the
people in charge of network administration, and they would like to know
whether we have a workaround or whether it is a limitation of the product.
|
3431.4 | Where's the backpressure (packet flow control)? | IROCZ::D_NELSON | Dave Nelson LKG1-3/A11 226-5358 | Wed Feb 12 1997 17:45 | 24 |
| Hmmm.... Not sure we can help. The counters you describe indicate that the
SLIP interface has too many packets queued to it. The UART isn't emptying
the queue fast enough. For TCP applications, this would get throttled by
the TCP window size. For UDP applications like SNMP, there's not much we
can do. I don't think that ICMP "source quench" packets would come into
play here. But that might be something to look into.
First, the number of buffers that each SLIP interface can have on its output
queue is hard coded. Second, if the packet arrival rate (over a faster
interface like Ethernet) consistently exceeds the packet transmit rate (over
a slow 38,400-baud SLIP interface), then no amount of buffering will help.
More buffers would only add a small increment to the time it takes to
overflow the queue.
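To put rough numbers on that, here is a back-of-the-envelope sketch in C.
The 150-byte request size and the 48-packet queue depth are just the figures
discussed later in this thread, not product constants:

    /* How long a 38,400 bps SLIP line takes to drain a burst that
     * arrives at Ethernet speed.  Illustration only. */
    #include <stdio.h>

    int main(void)
    {
        const double line_bps   = 38400.0; /* SLIP line rate             */
        const int    pkt_bytes  = 150;     /* sample SNMP request size   */
        const int    queue_pkts = 48;      /* assumed output queue depth */

        /* Async framing is roughly 10 bits per byte (start + 8 + stop). */
        double secs_per_pkt = (pkt_bytes * 10.0) / line_bps;

        printf("transmit time per packet: %.1f ms\n", secs_per_pkt * 1000.0);
        printf("time to drain full queue: %.2f s\n", secs_per_pkt * queue_pkts);
        /* A burst arriving over Ethernet lands in microseconds, so any
         * packets beyond the queue depth are dropped long before the
         * UART catches up -- more buffering only delays the overflow. */
        return 0;
    }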
Applications that run over UDP expect to have to do timeouts and retransmits
in the face of packet loss. With finite buffering and no "backpressure"
method, this seems like a problem they'll have to live with.
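For what it's worth, the usual shape of that recovery logic on the
application side looks something like the sketch below (generic C, not their
application; the address, port, timeout, and retry count are invented for
the example):

    /* Generic UDP request with a reply timeout and retransmission --
     * the sort of recovery an SNMP-style application is expected to
     * do for itself when a datagram is dropped. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port   = htons(161);                    /* SNMP port (example) */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* placeholder address */

        struct timeval tv = { 2, 0 };                   /* 2-second reply timeout */
        setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

        const char request[] = "poll";                  /* stand-in for a real PDU */
        char reply[1500];
        int attempt;

        for (attempt = 0; attempt < 4; attempt++) {     /* retransmit up to 3 times */
            sendto(s, request, sizeof request, 0,
                   (struct sockaddr *)&dst, sizeof dst);
            if (recvfrom(s, reply, sizeof reply, 0, NULL, NULL) >= 0) {
                printf("got a reply on attempt %d\n", attempt + 1);
                close(s);
                return 0;
            }
            /* Timed out: the request or the reply was dropped; try again. */
        }
        fprintf(stderr, "no reply after retries\n");
        close(s);
        return 1;
    }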
Does anyone else have an idea? I may be missing something obvious...
Regards,
Dave
|
3431.5 | live with it or change their application | ANNECY::CHALUMEAU | | Thu Feb 13 1997 08:55 | 7 |
| Dave, you confirmed what I thought about the problem: the solution
would have to be in their application.
Thanks for your help
Regards,
jean-claude
|
3431.6 | any way to modify the transmit queue buffer size or length for async line ? | TOOSRV::SELLES | | Tue Feb 18 1997 04:49 | 20 |
| I work with the same customer as Jean-Claude.
They don't want to change their application right now, and they argue
that it should be possible to change the transmit buffer queue size or
length for the async line, so that it can buffer as many as 32 UDP
requests of 150 bytes each and no request is lost.
They describe the application as follows:
they poll a variable through the OBM port of the DEChub 900, and if this
variable's value changes, they issue a burst of 32 requests, also through
the OBM port (hence the use of the async port on the DECserver 900), and
they wait for 32 replies.
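(For context only: if the application were ever revisited, the usual way to
keep a burst like that from overrunning a slow link is to cap the number of
requests outstanding at once. A minimal sketch in C; the window size and
the send/receive helpers are invented for illustration:)

    /* Pacing a burst so that only a few requests are queued on the
     * slow SLIP link at any one time.  The helpers are stubs standing
     * in for whatever the real application uses to send a request and
     * collect its reply (with its own timeout/retry handling). */
    #include <stdio.h>

    #define TOTAL_REQUESTS 32   /* the burst size described above       */
    #define WINDOW          4   /* outstanding requests allowed at once */

    static void send_request(int i)   { printf("send request %d\n", i); }
    static void wait_for_reply(int i) { printf("reply to request %d\n", i); }

    int main(void)
    {
        int sent = 0, answered = 0;

        while (answered < TOTAL_REQUESTS) {
            /* Top up the window... */
            while (sent < TOTAL_REQUESTS && sent - answered < WINDOW)
                send_request(sent++);
            /* ...then consume one reply before sending more, so the
             * SLIP transmit queue never holds more than WINDOW requests. */
            wait_for_reply(answered++);
        }
        return 0;
    }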
Is there any way to modify this queue length or size?
If it is hard coded, could it be modified for the next DNAS version (2.1)?
Thanks for your replies.
Regards, PJ
|
3431.7 | | IROCZ::D_NELSON | Dave Nelson LKG1-3/A11 226-5358 | Tue Feb 18 1997 10:51 | 37 |
| RE: .6
> They don't want to change their application right now
Customers never want to change their applications, do they? That costs
money, and it's far better (from their point of view) to get the vendor to
spend the money to solve the problem. Sigh. We, too, have limited resources...
> and they argue that it should be possible to change the transmit buffer
> queue size or length for the async line, so that it can buffer as many
> as 32 UDP requests of 150 bytes each and no request is lost
Anything is possible, with time and money.
I will at least look at what we have now. Keep in mind that statically
allocated buffers are allocated equally for all ports. On a DECserver 900TM
that's 32 times the buffers for one port: 32 ports x 32 requests x 150 bytes
= 153,600 bytes.
All I can say at this point is that we'll look at the problem.
> Is there any way to modify this queue length or size?
Sure. We could change the software to allocate more memory to each port.
That may be an option now that DNAS requires at least 4 MB of RAM. It
wasn't such a good idea when we were supporting DNAS in 2 MB of RAM.
> If it is hard coded, could it be modified for the next DNAS version (2.1)?
It is hard coded, but V2.1 is already shipping (in limited release). It will
be included with the DECserver 900MC modules. The next general release of
DNAS is V2.2, and we're fast approaching functionality freeze (prior to
system test). It is not clear whether this enhancement can be included in
V2.2. That depends on a number of factors.
Regards,
Dave
|
3431.8 | what are the sizes right now ? | TOOSRV::SELLES | | Wed Feb 19 1997 10:53 | 17 |
|
Thanks, Dave, for your reply.
The customer is now pressing us (as usual) and is asking for as much as
20 Kbytes per port (enough to cover a burst of up to 120 requests).
He is also asking what buffering values the product uses right now
(for the transmit and receive queues on an async port).
Could they be patched for V2.1?
Is it possible to include this request in V2.2, and what is the expected
timeframe for that version?
Thanks again for your quick answers.
Regards, PJ
|
3431.9 | Decserver number of buffers and sizes for SLIP | TOOSRV::SELLES | | Wed Apr 09 1997 11:30 | 9 |
| Hello,
As a follow-up to note 3431: in order to tune their application, the
customer needs to know the number of input/output buffers used for SLIP
connections, and the buffer sizes, on the DECserver 900TM and
DECserver 90M with DNAS V2.0.
Thanks for your replies.
|
3431.10 | Well, here's what I've discovered... | IROCZ::D_NELSON | Dave Nelson LKG1-3/A11 226-5358 | Fri Apr 18 1997 17:56 | 55 |
| RE: .9
> As a follow-up to note 3431: in order to tune their application, the
> customer needs to know the number of input/output buffers used for SLIP
> connections, and the buffer sizes, on the DECserver 900TM and
> DECserver 90M with DNAS V2.0.
Sorry to take so long on this. We are very busy, and I had to do some
research to answer this question. Thus it was put off until "later".
The DNAS code uses mbuffers ("mbufs"), since it is based on BSD Unix 4.3. We
divide mbuffers into "pools". A pool has the following properties:
a minimum number of buffers (that belong to that pool and thus are
guaranteed to be available for allocation), a maximum number of buffers
(that may be allocated from the pool), and a "parent" pool (from which all
buffers above the minimum but below the maximum may be allocated).
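In rough C, the bookkeeping looks something like this (a simplified sketch,
not the actual DNAS source; the structure and function names are invented,
and the general-pool figures in the example are made up):

    /* Simplified sketch of the pool scheme described above.  A pool
     * guarantees `min` buffers, may grow to `max`, and borrows anything
     * between the two from its parent pool. */
    #include <stddef.h>

    #define MBUF_SIZE 512   /* DNAS V2.0 mbuffer size                    */
    #define MBUF_DATA 494   /* user data per mbuffer (18 bytes overhead) */

    struct mpool {
        int           min;     /* buffers reserved for this pool         */
        int           max;     /* hard ceiling for this pool             */
        int           in_use;  /* buffers currently allocated            */
        struct mpool *parent;  /* where buffers above `min` come from    */
    };

    /* Could this pool hand out one more buffer right now? */
    static int pool_can_alloc(const struct mpool *p)
    {
        if (p->in_use >= p->max)
            return 0;                   /* over quota: the packet is dropped */
        if (p->in_use < p->min)
            return 1;                   /* still within the guaranteed share */
        /* Beyond the minimum, the buffer must be borrowed from the parent. */
        return p->parent != NULL && pool_can_alloc(p->parent);
    }

    int main(void)
    {
        struct mpool general = { 64, 64, 0, NULL };     /* figures invented   */
        struct mpool slip_tx = {  0, 48, 0, &general }; /* per-port SLIP xmit */
        return !pool_can_alloc(&slip_tx);               /* 0 = could allocate */
    }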
Up until DNAS V1.5 mbuffers were 128 bytes in size and could contain 110
bytes of user data. For DNAS V2.0 and higher mbuffers were increased to
512 bytes in size and can contain 494 bytes of user data. However, at the
same time the number of mbuffers in the system was decreased, so that the
total memory allocated to mbuffers did not quadruple.
Each SLIP port has a pool for transmit. When datagrams are forwarded from the
Ethernet interface to a SLIP interface, the mbuffers in the IP receive pool
for the Ethernet interface are "transferred" to the SLIP interface's
transmit pool. (No data is copied; this is bookkeeping only.) If the
quota for the SLIP transmit pool is exceeded, the packets are dropped.
In DNAS V2.0 the minimum SLIP transmit pool size (for each port) is zero,
the maximum size is 48 and the parent pool is the general pool. This means
that up to 48 mbuffers can be allocated for each SLIP port from the
general mbuffer pool (assuming they are available). This provides for up
to 23,712 (23K) bytes of data, including all headers. In DNAS V1.5 this
would have been 5,280 (5K) bytes of data. Note that 48 mbuffers are not
guaranteed. Allocation depends on the state of the general pool. For a
single SLIP port, on an otherwise idle DECserver, this shouldn't be a
problem.
We did discover that the guaranteed socket pools, created whenever a socket
is opened, were not appropriately "adjusted" to account for the larger mbuf
size but smaller total number of mbufs in DNAS V2.0. It is possible that
this bug is causing your SLIP port to be unable to allocate the full 48-mbuf
maximum before the general pool is depleted. This would only be the
case in extreme conditions, when many DECserver sessions (hence sockets)
were open concurrently. But it's a possibility. If this is your case, there
is a "patch" image available.
We have not made any adjustments to the mbuffer pooling for SLIP in DNAS V2.2.
We have no current plans to make the pool sizes user manageable.
Regards,
Dave
|
3431.11 | Oh, yes... One more important point. | IROCZ::D_NELSON | Dave Nelson LKG1-3/A11 226-5358 | Fri Apr 18 1997 18:04 | 20 |
| RE: .8
> The customer is now pressing us (as usual) and is asking for as much as
> 20 Kbytes per port (enough to cover a burst of up to 120 requests).
In interpreting the data I gave in .10, be aware that the 23K (max) of
buffering assumes that UDP packets fill mbufs with no wasted space.
Recall that mbufs come in quanta of 512 bytes (with 18 bytes of overhead).
If your 120 requests are in 120 UDP packets, then you'll still fail.
The DNAS code never puts more than one UDP packet in one mbuf (or mbuf
chain). Mbufs are chained together for packets larger than 494 bytes.
If you ever have more than 48 individual UDP packets (each less than 494
bytes) on the SLIP interface queue at one time, then packets will be
dropped, even though the total data is less than 23K.
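In other words, for the burst described in .8 it is the packet count, not
the byte count, that bites. A quick check in C (the 120 and 150 figures are
the ones from .6 and .8):

    /* A burst of 120 small UDP requests needs 120 mbufs (one packet per
     * mbuf chain), even though the bytes alone would fit in the pool. */
    #include <stdio.h>

    int main(void)
    {
        const int burst_pkts = 120;  /* requests per burst (from .8)     */
        const int pkt_bytes  = 150;  /* bytes per request (from .6)      */
        const int pool_max   = 48;   /* SLIP transmit pool maximum (.10) */
        const int mbuf_data  = 494;  /* user data per mbuf               */

        int mbufs_needed = burst_pkts;            /* one mbuf per small packet */
        int total_bytes  = burst_pkts * pkt_bytes;

        printf("bytes in burst: %d (pool could hold %d)\n",
               total_bytes, pool_max * mbuf_data);
        printf("mbufs needed  : %d (pool maximum %d) -> %s\n",
               mbufs_needed, pool_max,
               mbufs_needed > pool_max ? "packets will be dropped" : "fits");
        return 0;
    }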
Regards,
Dave
|
3431.12 | thanks for the detailed explanations | TOOSRV::SELLES | | Wed Apr 30 1997 14:35 | 6 |
| I will check with the customer whether they can tune their application
so they don't saturate the buffer pools, and how many ports are used for
the separate DEChub 900s.
Thanks again for the details.
PJ
|