T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
3897.1 | | RMULAC.DVO.DEC.COM::S_WATTUM | Scott Wattum - FTAM/VT/OSAK Engineering | Thu Mar 06 1997 09:02 | 26 |
| >V6.3-ECO06 we segment an SPDU into 3! TPDUs.
This could be an application issue. When calling osak_associate_req() (or any
outbound OSAK request, for that matter), you pass OSAK a list of chained buffers
which, as a whole, represent the SPDU being sent. However, each buffer is handed
to OSI Transport as a single QIO WRITEVBLK operation, so if you have three
buffers in the chain, you get three TPDUs being sent. These TPDUs can be of any
size and shape and, depending on size, may result in less-than-perfect
segmentation at the X.25 layer.
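The one-buffer-to-one-TPDU mapping can be modeled with a minimal sketch (the function and names below are illustrative, not the actual OSAK or QIO API):

```python
# Illustrative model only: OSAK hands each buffer in the caller's chain to
# OSI Transport as its own QIO WRITEVBLK, so each buffer becomes one TPDU.

def tpdus_for_spdu(buffer_chain):
    """Each buffer in the chain is written separately -> one TPDU each."""
    return [bytes(buf) for buf in buffer_chain]

# An SPDU passed as three chained buffers of uneven size ...
chain = [b"\x01" * 10, b"\x02" * 300, b"\x03" * 5]
tpdus = tpdus_for_spdu(chain)

print(len(tpdus))               # 3 TPDUs, one per buffer
print([len(t) for t in tpdus])  # sizes mirror the buffers: [10, 300, 5]
```

The uneven TPDU sizes are what the lower layers then have to segment, which is why the resulting X.25 packets can look so irregular.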
This sort of thing can easily be seen in FTAM during an association request that
we send - it's segmented very strangely as a result of how the code was written
to build the request. Once FTAM is in data transfer mode, though, we perform a
buffer gather operation prior to calling OSAK - that is, on an outbound
osak_data_req() we walk the list of buffers, allocate a single large buffer,
copy everything into it, and send only that single large buffer - thus cutting
down on QIO calls, which have very high overhead, and allowing better
segmentation at the lower layers.
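The gather step described above amounts to collapsing the chain into one contiguous buffer before the single write. A hypothetical sketch (names are illustrative, not FTAM's actual routines):

```python
# Sketch of the "buffer gather" done before osak_data_req(): instead of N
# QIO writes (one per chained buffer), copy the chain into one contiguous
# buffer and issue a single write. Illustrative names only.

def gather(buffer_chain):
    """Walk the chain, allocate one large buffer, copy everything into it."""
    total = bytearray()
    for buf in buffer_chain:
        total.extend(buf)
    return bytes(total)

chain = [b"header", b"user-data", b"trailer"]
single = gather(chain)

# One write instead of three -> one TPDU, which transport can then segment
# to fit the underlying X.25 packet size.
print(len(chain), "writes reduced to 1, payload of", len(single), "bytes")
```

The trade-off is an extra allocation and copy per SPDU in exchange for fewer QIO calls and cleaner segmentation below.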
Also, I would guess that what transport actually does with the segmented TPDUs
is platform specific. XTI on UNIX may very well reassemble the TPDU prior to
handing it to routing (though I don't know for certain), whereas QIO on OpenVMS
does not.
--Scott
|
3897.2 | Thanks - relationship w. X.25 ? | BIKINI::DITE | John Dite@RTO | Thu Mar 06 1997 11:39 | 54 |
| Scott,
thanks for your detailed explanation!
>which as a whole, represent the SPDU being sent. However, each buffer is handed
>to OSI Transport as a single QIO WRITEVBLK operation, so if you have three
>buffers in the chain, you get 3 TPDUs being sent. These TPDU's could be of any
>size and shape, and depending on size, may result in less than perfect
>segmentation happening at the X.25 layer.
So OSAK must be setting the modifier IO$M_MULTIPLE indicating that it is only
part of the message, otherwise OSI Transport would set the EOT bit in the
Transport Header.
>Also, I would guess that what transport actually does with the segmented TPDU's
>is platform specific. XTI on UNIX may very well re-assemble the TPDU prior to
>handing to routing (though I don't know for certain), where as QIO on OpenVMS
>does not.
What I observed with UNIX (same association establishment attempt
with the same configuration) was 1 TPDU segmented into 2 X.25 packets.
11:49:52.03| Tx| 20|00A CALL |Called DTE 45xxxxxxxx
| Calling DTE 45xxxxxxxxx
| Data 03 01 01 00|
11:49:52.24|Rx | 16|00A CALLC |Called DTE 45xxxxxxxx
| Calling DTE 45xxxxxxxxx
||
OSI Transport CR
11:49:52.27| Tx| 37|00A DATA 0/0 | 21 E0 00 00 01 40 00 C0 01 0B..
| 41 44 45 56 43 31
OSI Transport CC
11:49:52.29|Rx | 3|00A RR 1/ |
11:49:52.34|Rx | 37|00A DATA 1/0 | 21 D0 01 40 01 94 00 C2 0C 4D..
| C0 01 0B F0 01 10
11:49:52.34| Tx| 3|00A RR 1/ |
1 SPDU = 1 Transport TPDU (EOT set) = 1st X.25 Segment (more bit set)
11:49:52.35| Tx| 131|00AM DATA 1/1 | 02 F0 80 0D DD 01 26 0A 0D 04..
| 30 33 30 36 31 30 34 39 35 30..
| 49 33 03 4D 54 41 34 03 4D 54..
| 81 03 4D 54 41 82 03 4D 54 41..
| 01 30 0F 02 01 3D 06 04 56 00..
= 2nd and last X.25 Segment (m-bit not set)
11:49:52.36| Tx| 101|00A DATA 1/2 | 51 01 30 0F 02 01 3F 06 04 56..
| 60 44 80 02 07 80 A1 06 06 04..
| 29 80 01 0A 81 01 03 82 01 00..
| 16 06 78 74 65 73 30 31 00 00..
Now wasn't there something done by Yanick to optimise the segmentation between
OSI Transport and the X.25 layer?
John
|
3897.3 | | RMULAC.DVO.DEC.COM::S_WATTUM | Scott Wattum - FTAM/VT/OSAK Engineering | Thu Mar 06 1997 12:01 | 27 |
| >So OSAK must be setting the modifier IO$M_MULTIPLE indicating that it is only
>part of the message
Correct. If there is more than one buffer in the chain, the MULTIPLE flag is set
for each QIO until the last buffer is processed, at which point the flag is
cleared. This is why it can be very important for an application to understand
how OSAK works. For example, one of those TPDUs could be the result of MTA not
leaving sufficient workspace in front of the buffer for OSAK to place its
encoding information (the most obvious example of this is when you're in data
transfer mode and see a TPDU containing only 01 00 01 00 followed by another
TPDU which contains the application data).
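The flag lifecycle described above can be sketched as follows (an illustrative model, not the real QIO interface): IO$M_MULTIPLE is set on every write except the last buffer in the chain, and transport sets EOT only on the TPDU whose write lacked the flag.

```python
# Model of the per-buffer flag logic: MULTIPLE set for each QIO until the
# last buffer in the chain, then cleared. EOT in the transport header
# appears only where IO$M_MULTIPLE is clear. Illustrative only.

def qio_writes(buffer_chain):
    """Return (data, multiple_flag) per QIO; flag cleared on the last one."""
    last = len(buffer_chain) - 1
    return [(bytes(buf), i != last) for i, buf in enumerate(buffer_chain)]

# e.g. an OSAK header buffer followed by the application-data buffer
writes = qio_writes([b"\x01\x00\x01\x00", b"application data"])
for data, multiple in writes:
    eot = not multiple  # EOT bit set only on the final TPDU
    print(len(data), "bytes, IO$M_MULTIPLE =", multiple, "EOT =", eot)
```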
>What I observed with UNIX (same association establishment attempt
>with the same configuration) was 1 TPDU segmented into 2 X.25 packets.
Which confirms that XTI is in fact reassembling the TPDUs prior to hand-off.
>Now wasn't there something done by Yanick to optimise the segmentation between
>OSI Transport and the X.25 layer?
I cannot comment on that; I don't recall what, if anything, that may have been
done. However, I can say that as far as I know, OSI Transport on OpenVMS has
always behaved the way you see it with ECO6. There was some discussion a long
time ago to have OSAK implement the "buffer gather" itself on OpenVMS, but it
was never implemented.
--Scott
|
3897.4 | T_MORE flag ? | BIKINI::DITE | John Dite@RTO | Thu Mar 06 1997 12:39 | 11 |
| >Which confirms that XTI is in fact reassembling the TPDU's prior to hand-off.
Doesn't OSAK just set the T_MORE flag in the t_snd() call ?
If it did, then it would imply that OSI Transport reassembles prior to sending.
If it did not, what basis would OSI transport use to reassemble?
Also, does OSI Transport know the NPDU/X.25 Packet size required?
John
|
3897.5 | You want me to what??? ;-) | RMULAC.DVO.DEC.COM::S_WATTUM | Scott Wattum - FTAM/VT/OSAK Engineering | Thu Mar 06 1997 13:00 | 20 |
| Yes, OSAK sets T_MORE. I guess I'm confused. From OSAK's perspective, we
aren't reassembling, so on UNIX someone below OSAK is either reassembling, or
is trying to completely fill an X.25 packet (this would actually be more
efficient from a memory-usage standpoint than reassembling, since you'd likely
only be hanging onto a small chunk of memory across calls).
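The behavior being inferred here can be modeled as a simulation (the names t_snd and T_MORE follow XTI, but this is a sketch of one plausible implementation below OSAK, not the XTI library itself): pieces sent with T_MORE set are held and coalesced, and the TSDU is handed down whole only when a t_snd() arrives without the flag.

```python
# Simulation of T_MORE reassembly below OSAK on UNIX: buffers sent with
# T_MORE set accumulate; the one sent without it completes the TSDU, which
# is then handed off whole for segmentation. Illustrative model only.

class TransportModel:
    def __init__(self):
        self._pending = bytearray()
        self.tsdus = []          # completed TSDUs handed down for segmentation

    def t_snd(self, data, t_more):
        self._pending.extend(data)
        if not t_more:           # end of the TSDU: hand it off whole
            self.tsdus.append(bytes(self._pending))
            self._pending.clear()

xport = TransportModel()
for piece, more in [(b"part-1|", True), (b"part-2|", True), (b"part-3", False)]:
    xport.t_snd(piece, t_more=more)

print(len(xport.tsdus))   # one reassembled TSDU despite three t_snd calls
```

This matches the trace in .2: one TPDU (EOT set) crossing the wire as two X.25 packets, rather than three small TPDUs.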
>Also, does OSI Transport know the NPDU/X.25 Packet size required?
I don't know. If it does, it certainly isn't using the information on OpenVMS
to make more efficient usage of the datalink in this particular situation. I
would think though that hanging on to partial packets would be a real pain on
OpenVMS, what with having to defer I/O completion and all that. Certainly it
could be done (I would think), but it probably wasn't deemed to be worth the
effort - after all, transport could take the same attitude that OSAK took "It's
up to the application, if you don't like it, don't use IO$M_MULTIPLE" (which is
not to say that this is what anybody said, well, I might have said that, but I
doubt anyone else would have been that rude).
|
3897.6 | It sounds like we may have gone back a few steps | BULEAN::TAYLOR | | Tue Mar 11 1997 14:48 | 15 |
|
Hi John,
Sorry I am late in this conversation. You are correct.
We have been back and forth on this. I thought that
we now definitely used the largest possible segment
size on both VAX and Alpha with X.25. How this got
to be 128 I don't know.
This sounds like an IPMT to me.
Sorry,
Pat
|
3897.7 | IPMT submitted | BIKINI::DITE | John Dite@RTO | Wed Mar 12 1997 11:08 | 10 |
| Pat, Scott,
> This sounds like an IPMT to me.
have done.
Thanks for all the help.
I'll be out of the office for the next couple of weeks.
John
|
3897.8 | | RMULAC.DVO.DEC.COM::S_WATTUM | Scott Wattum - FTAM/VT/OSAK Engineering | Wed Mar 12 1997 11:33 | 10 |
| I talked to Pat about this yesterday, and I disagree. Transport is segmenting
properly for an individual I/O - it's just not doing all it can to fill an NSDU
when there are multiple I/Os done with IO$M_MULTIPLE - which would be an
enhancement to the current implementation rather than a bug fix (since I've
seen this behavior since DECnet/OSI 5.6).
But, at least with an IPMT someone will be able to take a look at it in more
detail, and provide a definitive answer.
--Scott
|