T.R | Title | User | Personal Name | Date | Lines |
---|
5641.1 | new one to me | TOOK::PURRETTA | | Wed Oct 06 1993 11:52 | 4 |
| Dan, if you can point me to a DECNIS on the net
that shows this behavior, I'll try to spend a few minutes
looking into the problem.
John
|
5641.2 | Guess | MARVIN::COBB | Graham R. Cobb, Internetworking, REO2-G/G9, 830-3917 | Thu Oct 07 1993 13:08 | 8 |
| I believe you see this sort of problem with things like Routing Destination
Node as well. The problem is that those entity classes have *very* large
numbers of instances (1000 to 2000 even). From recollection, MCC doesn't
handle very large wildcards at all efficiently.
John, if you want to try this out use NIS118 (1.118).
Graham
|
5641.3 | Something's up with Status requests | TOOK::PURRETTA | | Thu Oct 07 1993 17:52 | 12 |
| Thanks Graham. I tried it, and what's happening is that we're hanging
waiting for a response to the status partition request.
If I show all identifiers, all is fine. If I show ALL STATUS
(or, as in Dan's case, all attributes, which includes status), the thread
which goes to read the status response never gets one. In the
all-attributes case, when you control-C, it aborts that attempt but
again blocks waiting for the next entity's status information.
That's why he's only seeing ID info and that's why he's hanging.
I'll place a trace of our request in the next reply for you to look at.
John
|
5641.4 | the request dump | TOOK::PURRETTA | | Thu Oct 07 1993 17:55 | 45 |
| MCC> SHOW NODE NIS118 Bridge Dynamic Physical Address * all status
IN_Q is empty
Contents of parse_ilvs value_desc: :
MAXLENGTH :1024
LENGTH :4
CLASS :1
TYPE :14
18 fa 1a 00
QIO P4 Node Name Descriptor: :
MAXLENGTH :6
LENGTH :6
CLASS :1
TYPE :14
4e 49 53 31 31 38
..Connecting using 5.1.0 ..... Link 544 connected, using protocol 5.0.0
[ 1 ] (
[ 2 ] 00000001
[ 2 ] 00000003
[ 16 ] (
[ 0 ] 2b 0c 02 87 73 02 01 01 26 01
[ 3 ] (
[ 5 ]
[ APPLICATION 29 ]
)
[ 11 ] (
[ 1 ] 02
)
)
)
ENTITY_REQ: SMA request in HEX :
MAXLENGTH :2048
LENGTH :39
CLASS :2
TYPE :14
a1 82 00 23 02 01 01 02 01 03 30 82 00 19 80 0a
2b 0c 02 87 73 02 01 01 26 01 a3 04 05 00 5d 00
ab 82 00 03 81 01 02
..Sending 39 Bytes on link 544..... ok.
..Receiving on link 544......
At this point we're blocked in a QIO read on that link waiting for a reply.
|
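[A quick way to sanity-check the request dump in .4 is to decode the outer BER tag/length octets by hand. The following is an illustrative sketch in Python (not MCC code) that parses the leading TLV header of the 39-byte request:]

```python
def ber_header(buf):
    """Parse a BER tag octet and its (possibly long-form) length.
    Returns (tag, content_length, header_size)."""
    tag = buf[0]
    first = buf[1]
    if first < 0x80:                       # short-form length
        return tag, first, 2
    n = first & 0x7F                       # long form: next n octets hold the length
    length = int.from_bytes(buf[2:2 + n], "big")
    return tag, length, 2 + n

# First octets of the request in .4: a1 82 00 23 ...
req = bytes.fromhex("a1820023020101020103")
tag, length, hdr = ber_header(req)
print(hex(tag), length, hdr)   # 0xa1 35 4
```

[The header accounts for 4 octets and the content for 0x23 = 35, and 4 + 35 = 39 matches the LENGTH :39 reported in the trace, so the outgoing request itself is well-formed; the hang is on the receive side.]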
5641.5 | The problem is with batched responses | TOOK::PURRETTA | | Thu Oct 07 1993 18:36 | 14 |
| Graham,
>From recollection, MCC doesn't handle very large wildcards at all efficiently.
What MCC does for wildcarded responses is read/process/post/read-again/....
until we're told there isn't any more data. What we *don't* handle is
the case where multiple responses are batched in a single reply.
The code was designed such that every read meant one response, which
is the case for all entities but a couple of new ones. This problem
shows up, I believe, when talking with the MTA entities, which do batching.
John
|
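[To make the failure mode in .5 concrete, here is an illustrative sketch (hypothetical Python, not the actual AM source) contrasting the assumed one-read-one-response loop with a reader that drains batched responses from a single reply. The length-prefixed framing is invented for illustration:]

```python
import io

def read_reply(link):
    """Stand-in for the QIO read: return one reply buffer off the link,
    or None when no reply is pending (i.e. the read would block)."""
    return link.pop(0) if link else None

def parse_responses(buf):
    """Split one reply buffer into its length-prefixed responses."""
    out, stream = [], io.BytesIO(buf)
    while (hdr := stream.read(2)):
        out.append(stream.read(int.from_bytes(hdr, "big")))
    return out

def broken_loop(link, expected):
    """Assumes one read == one response: after a batched reply it
    drops the extras and issues a read that will never complete."""
    got = []
    for _ in range(expected):
        buf = read_reply(link)
        if buf is None:                       # blocked in the read forever
            return got, "hung"
        got.append(parse_responses(buf)[0])   # ignores the batched extras
    return got, "ok"

def fixed_loop(link, expected):
    """Drains every response from each reply before reading again."""
    got = []
    while len(got) < expected:
        buf = read_reply(link)
        if buf is None:
            return got, "hung"
        got.extend(parse_responses(buf))
    return got, "ok"

# Two responses batched into one reply buffer:
batched = bytes([0, 1]) + b"A" + bytes([0, 1]) + b"B"
print(broken_loop([batched], 2))   # ([b'A'], 'hung')
print(fixed_loop([batched], 2))    # ([b'A', b'B'], 'ok')
```

[With a batched reply the broken loop sees the first response, then blocks on a second read for data that already arrived, which matches the symptom in .4.]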
5641.6 | | MARVIN::COBB | Graham R. Cobb, Internetworking, REO2-G/G9, 830-3917 | Tue Oct 19 1993 08:24 | 20 |
| Re: .4.
John,
That message is a wildcarded SHOW operation. It will generate 1000 or so
responses (and it does -- I have just tried sending *exactly* the bytes
given in .4 to NIS118 but I ^Y'd out of the responses after the first 20 or
30).
Are you *sure* that you are sitting waiting in the DECnet read QIO itself? I
thought this problem (which I have often seen with MCC when there are a very
large number of entities) was something to do with a lower layer of code
trying to accumulate all the responses before starting work on them. That
can be a long wait (and it can take a *lot* of memory).
If you are sure you are sitting in the receive QIO and it has not completed
even once then we need to do some more work to understand why it works for
my test program and for NCL but not for you.
Graham
|
5641.7 | The READ never returns | TOOK::PURRETTA | | Mon Oct 25 1993 12:19 | 21 |
| Graham,
> Are you *sure* that you are sitting waiting in the DECnet read QIO itself?
Yes, I have a breakpoint right at the QIO read. It never wakes up.
>I thought this problem (which I have often seen with MCC when there are a very
>large number of entities) was something to do with a lower layer of code
>trying to accumulate all the responses before starting work on them. That
>can be a long wait (and it can take a *lot* of memory).
The AM does not accumulate all the responses. On a wildcard response
we read twice: the first is returned, and the second response is kept
in a context buffer. There is exactly one response kept in context at
any time per thread. I think you're describing what the map does.
It quite possibly gathers all responses because it wants to know what to
build for the screen. I'm using FCL and seeing this problem. Neither
FCL nor the AM accumulates responses, so that's not it. I know where
we're hanging: at the QIO read. I just don't know why yet.
John
|
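[The AM's wildcard read-ahead described in .7 (one response returned, one held in per-thread context) can be modeled like so. This is an illustrative sketch, not the real AM code; the class and method names are invented:]

```python
class WildcardContext:
    """Per-thread read-ahead: hold exactly one response in context so the
    caller can be told whether more data follows the one it just got."""
    def __init__(self, responses):
        self._pending = iter(responses)        # stand-in for successive QIO reads
        self._held = next(self._pending, None) # the single response kept in context

    def next_response(self):
        """Return the held response and read ahead by one."""
        current = self._held
        self._held = next(self._pending, None)
        return current, self._held is not None  # (response, more-to-come?)

ctx = WildcardContext(["r1", "r2", "r3"])
print(ctx.next_response())   # ('r1', True)
print(ctx.next_response())   # ('r2', True)
print(ctx.next_response())   # ('r3', False)
```

[At no point does more than one response sit in context, which is why the accumulate-everything behavior Graham describes would have to live in a different layer, such as the map.]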