T.R | Title | User | Personal Name | Date | Lines |
---------------------------------------------------------------
531.1 | | TOOK::ALEX | Od wsi Charzewice | +S | | Tue Dec 11 1990 15:08 | 14 |
| re .0
This reply is a placeholder. You can expect a more detailed answer
shortly.
You will be able to use the exported data for custom purposes such as
you described. The structure of the Rdb export database (relations,
attributes, etc.) is defined in a spec; however, it is not in any user
documentation at this time.
I'll forward this to the appropriate people.
Regards,
Alex
|
531.2 | waiting for you ... | TLSE01::HAGENMULLER | | Wed Dec 12 1990 05:15 | 5 |
| Thanks for your reply,
we stay tuned. We'll have to demonstrate EXPORT/Rdb-DECdecision
capability on Tuesday 18/12/90.
ch
|
531.3 | Export RDB structure | TOOK::SHMUYLOVICH | | Wed Dec 12 1990 23:57 | 144 |
|
re .0
Thanks to A. Shvartsman and R. Lane for their help in preparing this reply.
This reply contains two parts:
A. How to get the structure of the Export RDB from the existing file,
B. Some information about Export RDB structure.
A. How to get the structure of the Export RDB from the existing file:
You can get the structure of the Export RDB file using SQL (I think there is
a way to get the same information using DECdecision, but I do not know how to
do it):
SQL> declare schema filename <filespec>;
SQL> show tables;
SQL> show table <table name>;
where <filespec> is the file specification of the RDB file and <table name>
is the name of the table that is to be used in DECdecision.
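For example, here is a minimal session, assuming purely for illustration an
export file named MCC_EXPORT_DB.RDB in the current directory and the
MCC_NODE4_LINE relation described in part B below (your file and table names
will differ):
SQL> declare schema filename 'MCC_EXPORT_DB.RDB';
SQL> show tables;
SQL> show table MCC_NODE4_LINE;
The SHOW TABLE output lists the Rdb attributes (columns) and their datatypes,
which is what you need to know to use the table from DECdecision or from your
own SQL queries.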
B. Some information about Export RDB structure.
The Export Rdb file will be created dynamically for each export database, when
the Background Process for that database is started for the first time.
1 Export Rdb Relations and Attributes
The database will include one relation for each distinct class that is to be
exported. The concatenation of the presentation names of the superior classes
and the child class (separated by underscores) will be used to name each
relation. For example, all entities of the kind NODE4 * LINE * will be stored
in the relation named MCC_NODE4_LINE. Each relation will contain the following
(groups of) Rdb attributes:
1. Attributes that are derived from the entity specification; e.g. NODE4 N.NNN
LINE XYZ will correspond to the Rdb attribute NODE4 with value N.NNN and Rdb
attribute LINE with value XYZ. If an entity supports multiple identifiers, this
entity specification contains the identifiers returned by
SHOW <entity> ALL IDENTIFIERS. In the case of NODE4 this is the primary
identifier, the address. At the same time, the identifier attribute partition
contains all valid identifiers for this entity.
2. For each attribute partition of a given entity, all attributes will be
exported. The names of the attributes will be derived from the presentation
names of the attributes obtained from the dictionary. There is no inherent
limit imposed by MSL on the length of the attribute names. However, Rdb
requires that the names of attributes be limited to 31 characters or less.
Export creates attribute names using the following approach:
(a) If the presentation name of an attribute is less than 31 characters, the
full attribute name is used, with the blanks (if any) replaced by "_".
(b) Otherwise, the longest words of the presentation name are abbreviated
until the resulting string is less than 31 characters (and blanks are replaced
by "_"). If there is a tie among the longest words, the word that appears
later in the presentation name is abbreviated first, then the earlier words
if still needed.
This will of course include the four standard
attribute partitions: IDENTIFIERS, STATUS, CHARACTERISTICS and COUNTERS.
Another important attribute partition (when present) is the STATISTICS
attribute partition. The attributes found in that partition are expected by
the Reports module.
3. Finally, the time instance of the export along with the timestamps for
each individual partition are exported into the Rdb attributes REQUEST_TIME and
TIME_<partition> respectively, where <partition> is the presentation name of
the attribute partition whose time is the value.
Upon the initial EXPORT, the relation is created, and successive EXPORT
requests add information (tuples) to the same existing relations.
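As a sketch of how the exported relations might then be queried directly from
SQL (using the MCC_NODE4_LINE relation and the NODE4, LINE and REQUEST_TIME
attributes named above; the actual names in your database can be listed with
SHOW TABLES and SHOW TABLE):
SQL> select NODE4, LINE, REQUEST_TIME from MCC_NODE4_LINE;
Each returned tuple should correspond to one export of one entity instance, so
REQUEST_TIME distinguishes the successive EXPORT passes that have been added
to the relation.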
2 Export Rdb Datatypes
Export does not process constructed data types.
Export will convert the native DECmcc datatypes of the attributes to the Rdb
datatypes as follows:
1. Signed integer data types will be converted to integer types of the same
length.
2. Unsigned integer types will be converted to signed integer types of the
next higher available length (Rdb does not support unsigned integers, and
this needs to be done to prevent overflows). When this is not possible (as is
the case with the 64-bit unsigned integer), a signed integer of the same
length is used.
3. Strings and file names are converted to strings.
4. Phase4Name and Phase4Address are coerced to strings of length 7 (up to
XXXXXX for names and up to XX.XXXX for addresses).
5. ID802 is converted to a string containing a presentation output format for
ID802 (see SRM).
6. The MCC G_FLOAT is not changed, as this is already an Rdb data type.
7. Other data types TBD
8. The following is a comparison of the MCC data types, SQL keywords and VMS
data types:
---------------------------------------------------------------
| MCC data type          | SQL keyword      | VMS data type   |
|-------------------------------------------------------------|
| MCC_K_DT_BOOLEAN       | SMALLINT         | DSC$K_DTYPE_W   |
| MCC_K_DT_INTEGER8      | SMALLINT         | DSC$K_DTYPE_W   |
| MCC_K_DT_INTEGER16     | SMALLINT         | DSC$K_DTYPE_W   |
| MCC_K_DT_INTEGER32     | INTEGER          | DSC$K_DTYPE_L   |
| MCC_K_DT_INTEGER64     | QUADWORD         | DSC$K_DTYPE_Q   |
| MCC_K_DT_UNSIGNED8     | SMALLINT         | DSC$K_DTYPE_W   |
| MCC_K_DT_UNSIGNED16    | INTEGER          | DSC$K_DTYPE_L   |
| MCC_K_DT_UNSIGNED32    | QUADWORD         | DSC$K_DTYPE_Q   |
| MCC_K_DT_UNSIGNED64    | QUADWORD         | DSC$K_DTYPE_Q   |
| MCC_K_DT_ENUMERATION   | INTEGER          | DSC$K_DTYPE_L   |
| MCC_K_DT_COUNTER16     | INTEGER          | DSC$K_DTYPE_L   |
| MCC_K_DT_COUNTER32     | INTEGER          | DSC$K_DTYPE_L   |
| MCC_K_DT_COUNTER48     | QUADWORD         | DSC$K_DTYPE_Q   |
| MCC_K_DT_COUNTER64     | QUADWORD         | DSC$K_DTYPE_Q   |
| MCC_K_DT_LCOUNTER16    | INTEGER          | DSC$K_DTYPE_L   |
| MCC_K_DT_LCOUNTER32    | INTEGER          | DSC$K_DTYPE_L   |
| MCC_K_DT_FILE_SPEC     | CHAR (up to 255) | DSC$K_DTYPE_T   |
| MCC_K_DT_LATIN1STRING  | CHAR (up to 132) | DSC$K_DTYPE_T   |
| MCC_K_DT_SIMPLE_NAME   | CHAR (up to 255) | DSC$K_DTYPE_T   |
| MCC_K_DT_FULL_NAME     | CHAR (up to 512) | DSC$K_DTYPE_T   |
| MCC_K_DT_VERSION       | CHAR (up to 12)  | DSC$K_DTYPE_T   |
| MCC_K_DT_PHASE4NAME    | CHAR (up to 7)   | DSC$K_DTYPE_T   |
| MCC_K_DT_PHASE4ADDRESS | CHAR (up to 7)   | DSC$K_DTYPE_T   |
| MCC_K_DT_REAL          | FLOAT            | DSC$K_DTYPE_F   |
| MCC_K_DT_BIN_ABS_TIM   | DATE             | DSC$K_DTYPE_ADT |
| MCC_K_DT_BIN_REL_TIM   | DATE             | DSC$K_DTYPE_ADT |
| MCC_K_DT_ADDRESS_DTE   | CHAR (up to 44)  | DSC$K_DTYPE_T   |
| MCC_K_DT_ID802         | CHAR (up to 17)  | DSC$K_DTYPE_T   |
| MCC_K_DT_ID802_SNAP    | CHAR (up to 14)  | DSC$K_DTYPE_T   |
| MCC_K_DT_ID802_SAP     | CHAR (up to 2)   | DSC$K_DTYPE_T   |
| MCC_K_DT_IDENETV2_TYPE | CHAR (up to 5)   | DSC$K_DTYPE_T   |
|-------------------------------------------------------------|
/sam
|
531.4 | RECORDING for TCP/IP node ? | TLSE01::HAGENMULLER | | Mon Dec 17 1990 13:50 | 20 |
|
Thank you for your inputs!
We have tested the RECORD & EXPORT features with Rdb and DECdecision.
It works fine ...... with DECnet Phase IV nodes.
Our next issue is to use these commands with other entities like
TCP/IP nodes (we are on the short list to manage one of the largest
European TCP/IP networks).
We were a little surprised when trying to start recording with the
SNMP AM, because no RECORD option exists in the pop-up menu for
TCP/IP nodes.
Is recording only available for DECnet nodes?
If so, is it scheduled for a next version of the TCP/IP-SNMP AM?
Thanks in advance, ch
|
531.5 | | GOSTE::CALLANDER | | Mon Dec 17 1990 15:24 | 3 |
|
send mail to Mike Daniele (CHORD) for info on this.
|
531.6 | save yourself some time | MKNME::DANIELE | | Tue Dec 18 1990 08:44 | 27 |
| re .5
> <<< Note 531.5 by GOSTE::CALLANDER >>>
> send mail to Mike Daniele (CHORD) for info on this.
I'm sure what Jill meant was to send mail to me regarding the lack
of a RECORD directive on the pull-down for SNMP entities, as opposed to
sending mail to me regarding export data structures.
Wrong. This is a matter for someone from the Historian FM.
Historian works by augmenting the dictionary of every entity for which
data will be recorded. For asynchronous management modules, ESPECIALLY
those delivered before Historian, clearly this augmentation must be
done by someone at the specific site.
My understanding was that this was all planned for Historian and
DECmcc 1.1 EFT. There was to be documentation on how to augment
entities present in the dictionary, and even (I believe) procedures
to automate that augmentation.
If this is not the case, will someone please enlighten me?
Mike
|
531.7 | Historian & Exporter for any entity | TOOK::SHMUYLOVICH | | Tue Dec 18 1990 10:15 | 42 |
|
re .4
To use the Historian and Exporter FMs, every entity for which data is
recorded/exported has to be augmented in the data dictionary.
The DECmcc V1.1 EFT data dictionary contains such augmentation for
Phase IV, Phase V and Bridge entities.
To augment other entities you can use the following procedure:
1. Use a command file (MCC_HIST_AUG.COM for Historian augmentation,
MCC_EXPORT_AUG.COM for Exporter augmentation) to build the augment
file that adds information to the dictionary.
The augment file has a default name of "gename_HIST_AUGMENT.COM" or
"gename_EXPORT_AUGMENT.COM", where "gename" is the global entity
class name. The file is created if it doesn't exist and is
appended to if it does exist. The user is required to delete
pre-existing versions before using this program.
The user calls this program with a list of the class hierarchy
for the entity to be augmented in the dictionary.
Example: Augment NODE, NODE ROUTING and NODE ROUTING CIRCUIT
$ @MCC_HIST_AUG NODE
$ @MCC_HIST_AUG NODE ROUTING
$ @MCC_HIST_AUG NODE ROUTING CIRCUIT
2. Update data dictionary using the augment file.
Example:
DAP> augment class NODE from NODE_HIST_AUGMENT.COM
3. Rebuild parse tables.
/Sam
|
531.8 | | CCIIS1::ROGGEBAND | _ Philippe _ | Thu Dec 27 1990 04:14 | 22 |
| Re : .7
I believe there is a minor bug in the MCC_HIST_AUG.COM file. Where
it checks for the existence of a previous file, the name of the file it
checks for is:
("MCC_COMMON:"P1+"_HIST_AUGMENT.COM")
This should be :
("MCC_COMMON:"+P1+"_HIST_AUGMENT.COM")
or :
("MCC_COMMON:''P1'_HIST_AUGMENT.COM")
otherwise you end up with n + 1 files where n = the total number of
subentities.
Cordialement,
Philippe.
|
531.9 | | TOOK::SHMUYLOVICH | | Thu Dec 27 1990 11:25 | 5 |
|
Re : .8
Philippe,
Thank you. Sam
|
531.10 | Some problems augmenting entities | CCIIS1::ROGGEBAND | _ Philippe _ | Wed Jan 02 1991 07:40 | 24 |
| Sam,
I've tried using MCC_HIST_AUG.COM with a home-brewed AM; I'm afraid
there are some undesirable features left in the procedure.
The procedure created by the .COM invokes the DAP, which we have already
invoked ("DAP> Augment Class nnnn from xx...").
There were 3 "Use Class nnn" commands at the beginning of the procedure.
When I ran it from the DAP, I had some "Error code already exists" and
"Entity does not exist" error messages, so I did not try to run the
Historian against my object.
Rather than providing us with a command procedure to create DAP command
files, which are not too readable if you don't speak DAP fluently,
wouldn't it be simpler to give us some .MS files which we could append
or insert via the editor into our MSLs? This is (more or less) the way I
added support for the registration FM (by picking up the SAMPLE AM
.MS). It would also allow us to "see" what we are doing.
Happy new year,
Philippe.
|
531.11 | No bug...be careful about specifying class names though | TOOK::DENSMORE | Dirty deeds done dirt cheap | Thu Jan 03 1991 10:08 | 22 |
| re .8
You need to specify the entire entity class hierarchy for each subentity
you are augmenting, thus P1 will/must always be the global entity.
Only one file will be created, with the global entity name embedded in
the name and all the entities appended. Example:
@mcc_common:mcc_hist_aug node
@mcc_common:mcc_hist_aug node routing
@mcc_common:mcc_hist_aug node routing circuit
@mcc_common:mcc_hist_aug node routing "permitted neighbor"
(Note the quotes required for multiword class names.) This sequence
will create a single file to augment node, the routing module and the
two subentities. The file name will be:
mcc_common:node_hist_augment.com
Oh, and you need to delete any pre-existing copies of the file:
node_hist_augment.com
Mike
|
531.12 | RE: augmenting problems | TOOK::SHMUYLOVICH | | Thu Jan 03 1991 18:43 | 17 |
|
re: .10
Philippe,
I don't think that including Historian's .MS files in your .MS
is better.
If, for instance, Historian's .MS is changed and the data dictionary
has to be updated, you need to have the original .MS files for
all entities in the data dictionary.
Using the augmentation command procedure, you only need to know
the names of these entities.
If you still want to play with Historian's .MS files, please contact
Dave Moore (TOOK::D_MOORE).
Sam
|
531.13 | | CCIIS1::ROGGEBAND | _ Philippe _ | Fri Jan 04 1991 05:46 | 22 |
| OK, it seems to be working now; I re-created my class definition and
then augmented it.
I have, however, found a minor bug in the MCC_EXPORT_AUG.COM file:
the definition of code 23 should have two double quotes around the
value items (""suspended"" and ""active"").
This is what caused my DAP update to fail and left my dictionary in a
poor state...
Re : .12
I agree that an augmentation procedure is better in a user context. As
a developer, I would feel more comfortable if I had access to the .MS
file to see what is being added to my entity. However, now that it
seems to be working (I am in the process of rebuilding the parse tables
and it hasn't screamed so far), I'll leave it at that.
Cordialement,
Philippe.
|
531.14 | i'm seeing "'s in my sleep :-) | TOOK::DENSMORE | Dirty deeds done dirt cheap | Fri Jan 04 1991 08:12 | 6 |
| re .13
Yeah, I got the "" problem when I was updating the file for Sam.
Thanks.
Mike
|
531.15 | correction... | TOOK::DENSMORE | Dirty deeds done dirt cheap | Mon Jan 07 1991 07:48 | 15 |
| re .11
(I'm talking to myself again :-( )
There is an error in .11. Multiword class names need to be specified as a
single word. DAP expects MSL output, which compresses spaces. So, for
example, enter sessioncontrol for session control.
I have an updated version of the augment build procedure which has been
tested thru DAP but not thru PTB. When I finish checking it out I will
make it available in a public area for anyone who would like to play
with it. It fixes the bugs mentioned in these notes, fixes some things
not in these notes and adds new exceptions and arguments to the directives.
Mike
|
531.16 | New Procedures Available | TOOK::DENSMORE | Dirty deeds done dirt cheap | Tue Jan 08 1991 14:09 | 85 |
|
The latest files for building export and recording augmentation files
are in my public directory:
TOOK::USER$772:[DENSMORE.PUBLIC]MCC_EXPORT_AUG.COM
TOOK::USER$772:[DENSMORE.PUBLIC]MCC_HIST_AUG.COM
Feel free to copy them and try them out. I have attached the draft
release notes to this note. You can report problems to me directly but
please enter a QAR also so we can track them. File them under
INSTALLATION.
Thanks.
Mike
For AM Developers (Tool Kit):
If you wish to have your access module dictionary information
augmented in the field to include the Historian and Export
directives, two procedures have been provided for use in your
install procedure. These procedures are intended to run out
of MCC_COMMON and produce DAP COM files in MCC_COMMON that
will modify the dictionary appropriately.
The procedures are:
MCC_HIST_AUG.COM Create DAP commands for Historian
augmentation.
MCC_EXPORT_AUG.COM Create DAP commands for Export
augmentation.
To use the procedures, you must:
1. Add a check for existing augment COM files to your
installation procedure, deleting them if they exist.
The procedures were designed to be very simple and just
append to an existing DAP COM file with the expected
name.
2. Invoke the procedure for each entity to be augmented.
You must specify the entire entity class hierarchy.
Multiword class names must be compressed to a single
word. Examples:
@MCC_HIST_AUG NODE4 CIRCUIT
@MCC_HIST_AUG NODE4 CIRCUIT ADJACENTNODE
3. Use the resulting DAP COM file. The name of the files
will be "MCC_HIST_globclass.COM" and
MCC_EXP_globclass.COM", where "globclass" is the class
name for the global entity being augmented. Example:
$ MANAGE/DICT/TOOL
AUGMENT CLASS NODE4 FROM MCC_HIST_NODE4.COM
4. Repeat 1-3 for Historian and Export.
Below is an example that would augment the Phase V NODE Global
Entity and its subentities: ROUTING, CIRCUIT and SESSION CONTROL.
$ IF F$SEARCH("MCC_COMMON:MCC_HIST_NODE.COM") NES "" THEN -
$ DELETE MCC_COMMON:MCC_HIST_NODE.COM;*
$ @MCC_COMMON:MCC_HIST_AUG NODE
$ @MCC_COMMON:MCC_HIST_AUG NODE ROUTING
$ @MCC_COMMON:MCC_HIST_AUG NODE ROUTING CIRCUIT
$ @MCC_COMMON:MCC_HIST_AUG NODE SESSIONCONTROL
$ MANAGE/DICT/TOOL
AUGMENT CLASS NODE FROM MCC_HIST_NODE.COM
$!
$ IF F$SEARCH("MCC_COMMON:MCC_EXP_NODE.COM") NES "" THEN -
$ DELETE MCC_COMMON:MCC_EXP_NODE.COM;*
$ @MCC_COMMON:MCC_EXPORT_AUG NODE
$ @MCC_COMMON:MCC_EXPORT_AUG NODE ROUTING
$ @MCC_COMMON:MCC_EXPORT_AUG NODE ROUTING CIRCUIT
$ @MCC_COMMON:MCC_EXPORT_AUG NODE SESSIONCONTROL
$ MANAGE/DICT/TOOL
AUGMENT CLASS NODE FROM MCC_EXP_NODE.COM
Note: the parse tables will need to be rebuilt. This is a long
procedure and should not be done until all dictionary augmentations
have been completed.
|
531.17 | recording doesn't work with our AM | TLSE01::HAGENMULLER | | Fri Jan 11 1991 05:21 | 29 |
|
We have developed an AM to manage UB entities like UB bridges and PC
Ethernet hardware.
When integrating this AM into DECmcc we encountered a problem:
* For talking to UB entities we've used the UB proprietary protocol
and UCX UDP QIOs.
When running your latest MCC_HIST_AUG.COM and MCC_EXPORT_AUG.COM
procedures, nothing wrong appeared, nor when we defined
a record command on counters for a PC running a UB Ethernet card.
But when we did a "show recording" we received the following
interesting messages:
Number of successfull polls : 0
Number of failed polls : 3
last poll failure reason : FAILED TO CALL ETP .
What does this mean?
All procedures seem to have completed successfully, and our AM runs
correctly when used with the DECwindows PM (show commands run well on
the counters partition).
Other record commands on Phase IV nodes run well and don't generate this
error message, so we think our AM is in trouble. But why?
Thanks for help, ch
|
531.18 | how to check what is wrong with recording | TOOK::SHMUYLOVICH | | Fri Jan 11 1991 14:28 | 30 |
|
RE: .17
Failure reason "failed to call ETP" means that mcc_call from
Historian background process to your AM fails either with error
or exception.
It's hard to say what is wrong is this particular case but I'll
try to make some suggestions.
1. Do you need to have a special privileges to get the information
from your AM? If yes, please, check if the background process
has them.
2. Before the first mcc_call to your AM with a specified attribute
partition Historian does SHOW entity ALL IDENTIFIERS and uses
entity_out argument as an entity_in in all its mcc_calls.
Please, verify if entity_out returned by your AM is correct
(Decwind PM does not display entity_out).
3. If you have a debug logical which enables printing from your AM
you can:
- set up this logical on the different window and
- run Historian background manually (not in batch):
@SYS$MANAGER:MCC_HISTORIAN_BACKGROUND.COM domain_name.
The results can give a little more knowledge what is wrong.
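For instance, a minimal sketch of suggestion 3, assuming a hypothetical debug
logical name MY_AM_DEBUG for your AM and a domain called MY_DOMAIN (both names
are placeholders, not real DECmcc names):
$ DEFINE MY_AM_DEBUG TRUE    ! hypothetical logical that enables your AM's tracing
$ @SYS$MANAGER:MCC_HISTORIAN_BACKGROUND.COM MY_DOMAIN
Running it this way keeps the AM's debug output in your terminal session
instead of a batch log.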
If this doesn't help, please contact me at TOOK::SHMUYLOVICH.
Regards, Sam
|
531.19 | thanks for help + other pbs | TLSE01::HAGENMULLER | | Wed Jan 16 1991 12:58 | 41 |
|
RE:.18
OK, it seems that the failure reason was 2 background processes running
in the same batch queue.
Now recording seems to be working: "seems" because when I export data from
our AM, the first set of counters is good and the next values aren't.
(Furthermore, only running the export background manually works well.)
Is there any way (other than exporting data) to see the values polled
by the record FM and written to the MIR?
(I tried dumping mcc**********.*hist* but saw nothing readable.)
There is a bug in the DECwindows PM with the Export FM: when you try
deleting, resuming, etc. the export definition, you always get an error
message "invalid qualifier : in domain". Only the command line interface
works well.
Another little problem: I've run the procedures SNMP_HIST_AUGMENT and
SNMP_EXP_AUGMENT. The latter fails at line 439 because of an unexpected "(":
set definition code 23 -
type t -
cont 1 -
.
.
.
value
error --> ( -
0 = suspended , -
1= active
)
Sorry to be a bit negative, but it's a field test that will surely be
improved ...
Regards, ch
|
531.20 | New DAP Augment Files On The Way | TOOK::DENSMORE | Dirty deeds done dirt cheap | Wed Jan 16 1991 15:27 | 14 |
| re .19
I can respond to your question on the SNMP augment files. I think
you have an old version of these files. The files should be
MCC_EXP_SNMP.COM and MCC_HIST_SNMP.COM. The files you have appear
to be the ones from the first field test kit since the names you have
in your note are the names used at that time. You should be all set
when you get the kit we are building at the moment. (I verified with
the "buildmaster" that the right files are in the class.)
Mike
|
531.21 | Re: .18 | TOOK::SHMUYLOVICH | | Wed Jan 16 1991 15:28 | 34 |
|
Re: .19
> OK, it seems that the failure reason was 2 background processes running
> in the same batch queue.
This is not correct. The failure reason was that a background process
with the correct RDB file name was not running.
> (Furthermore, only running the export background manually works well.)
There are no known limitations on the background process running in the
queue. If you have any problem starting the background process in the queue,
please contact me at TOOK::SHMUYLOVICH.
> Is there any way (other than exporting data) to see the values polled
> by the record FM and written to the MIR?
> (I tried dumping mcc**********.*hist* but saw nothing readable.)
Yes. Everything that was recorded can be seen using the SHOW command with
the corresponding scope-of-interest time. Please refer to chapter 6 of the
SRM for the correct syntax of the FOR clause.
> There is a bug in the DECwindows PM with the Export FM: when you try
> deleting, resuming, etc. the export definition, you always get an error
> message "invalid qualifier : in domain". Only the command line interface
> works well.
Yes, this is a bug in the Exporter FM (see note 621.3).
Mike Densmore will answer you about the augmentation problem.
Regards,
Sam
|
531.22 | Missing "" | CCIIS1::ROGGEBAND | _ Philippe _ | Thu Jan 17 1991 04:02 | 33 |
| > Another little problem: I've run the procedures SNMP_HIST_AUGMENT and
> SNMP_EXP_AUGMENT. The latter fails at line 439 because of an unexpected "(":
>
> set definition code 23 -
> type t -
> cont 1 -
> .
> .
> .
> value
> error --> ( -
> 0 = suspended , -
> 1= active
> )
Christophe,
I forgot to tell you about this one on the phone yesterday. The problem
is not the '(', it's the missing '"'!
Edit SNMP_EXP_AUGMENT to contain:
value
( -
0 = "SUSPENDED", -
1 = "ACTIVE"
)
and all will be well !!!
%^)
Philippe.
|
531.23 | beware of using old files | TOOK::DENSMORE | Dirty deeds done dirt cheap | Thu Jan 17 1991 10:45 | 9 |
| re .22
Don't use SNMP_EXP_AUGMENT or SNMP_HIST_AUGMENT. Either wait for the
new kit or build new files with the new procedures I pointed to in a
previous reply. The new SNMP augment files were tested and appear to
be clean. If you do find problems with these *new* files, let us know
asap.
Mike
|
531.24 | seems ok now | TLSE01::HAGENMULLER | | Fri Jan 18 1991 11:51 | 11 |
|
I've modified the snmp_xxx_augment procedures and all seems OK.
Should be better with the next kit (?).
When you select an SNMP entity, the popup menu looks strange: there
are export commands but no record command. For child entities (tcp,
ip, ...) all commands are available. I know nothing should be
recorded with the first menu, but why provide export commands?
ch
|
531.25 | Problem with new proc | WELMT2::MCCALLUM | | Tue Jan 22 1991 05:19 | 9 |
|
I have tried the new procedures and had a problem. The building
of the command procedures and updating the dictionary went OK, but
updating the parse tables fell over with an access violation. The log file
ended at processing entity 18 31. I am not sure how to debug this but
am working on it.
Dave McCallum.
|
531.26 | bug | MKNME::DANIELE | | Tue Jan 22 1991 15:38 | 4 |
| re -.1: That's a bug in PTB, so you can't do much about it.
I believe it's been fixed and will be available soon, but
will defer to Jill.
|
531.27 | please contact Jill Callander | GOSTE::CALLANDER | | Tue Jan 22 1991 16:44 | 9 |
| Actually, at this point in time I know of no access violations in
PTB with the exception of some "ill handled" memory problems (when
you run out we don't always gracefully exit).
Dave, I am very interested in seeing what is going on. Please contact
me at your earliest convenience.
jill
|