
Conference 44.370::system_management

Title:system management communications forum
Moderator:CHEST::THOMPSON
Created:Fri Mar 21 1986
Last Modified:Thu Jul 08 1993
Last Successful Update:Fri Jun 06 1997
Number of topics:490
Total number of notes:2018

22.0. "RDB Conversion problems." by VULCAN::PLATT () Fri Aug 22 1986 16:07

    
The following text is an amalgamation of the entries from the RDB notes file
on the subject of V1.1 versus V2.0 (and V2.1), plus a few extras.



--------------------------------------------------------------------------------


1. VALID IF NOT MISSING:

	If you have a field defined as 

	def fie foo ... valid if not missing.
	def rel rel.
	foo.
	bar.
	end.

	you were permitted to 

	store r in rel using r.bar = 10 end_store

	which did not catch the fact that FOO was never stored.

	RESTORE will fail to restore that record.
	(The next version will diagnose the STORE as an error.)

	You will have to modify the records in the V1.1 database
	with something of the form:

	FOR R IN REL WITH R.FOO MISSING
		MODIFY R USING R.FOO = ??? END_MODIFY END_FOR
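
	Once that pass is done, a quick check that no records were left with
	FOO missing (a sketch in interactive RDO, reusing the same hypothetical
	field names as above; any output means the MODIFY pass missed something):

	FOR R IN REL WITH R.FOO MISSING
		PRINT R.BAR END_FOR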



2.  COMPUTED BY FORWARD REFERENCES:

	RESTORE restores relations in alphabetical order.

	If you have a relation defined with a COMPUTED BY
	aggregate that references an alphabetically higher
	relation, the RESTORE will fail.

	ex:

	define rel rel.
	foo computed by count of r in relx.
	end.
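
	Because "rel" sorts before "relx", RESTORE processes "rel" first and
	the COMPUTED BY reference to the not-yet-restored "relx" fails.  A
	reference in the other direction is safe; for example (hypothetical
	relation names):

	define rel aaa.
	bar.
	end.

	define rel zzz.
	foo computed by count of r in aaa.
	end.

	Here "aaa" is restored before "zzz", so the COMPUTED BY field in "zzz"
	finds its target.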

	The next version will do the RESTORE correctly.

	The suggested workaround for this problem is to use
	CHANGE RELATION on your V1.1 database to delete those
	COMPUTED BY fields which make references to relations
	whose names are alphabetically greater than the relations
	which contain the offending fields.

	Of course, this same problem can be encountered when 
	restoring any Rdb/VMS database version prior to V2.1.
               


3. INDEX ERROR:

	It was possible in V1.1 to DEFINE an INDEX on one terminal
	and, before committing the transaction, start a process which
	would store records without knowledge of the new index.  This
	creates a problem state whereby, some queries will not retrieve
	some records.  Some records might even be in the database twice
	-- regardless of "no duplicates" on the index, they would just not
	be in the index twice.

	RESTORE will fail when defining the index.  The database will
	be labelled as "corrupt" because V2.0's restore uses batch_update
	and will not be able to roll back the aborted index.

	The next version's RESTORE will also fail but will have a
	NOBATCH_UPDATE option so that the database can be "partially
	restored" and the user can then find the duplicates and fix up the
	database.

	There are a few proposed workarounds for this problem, though all of
	them require bringing your V1.1 system back up.  Attempt a restore
	under V1.1; each creation of a bad index should fail.  Then you can
	research the problem in the original database, probably by deleting
	the bad index, finding the duplicate key fields, deleting the
	offending records, and redefining the index before doing another backup.
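
	As a rough sketch of the duplicate hunt, assuming the index was defined
	over a single key field called ID (a hypothetical name -- and the exact
	boolean syntax is worth checking against the RDO reference), something
	like this in interactive RDO lists the keys that occur more than once:

	FOR R IN REL WITH (COUNT OF R2 IN REL WITH R2.ID = R.ID) > 1
		PRINT R.ID END_FOR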


4. BACKUP I/O ERRORS:

        BACKUP in V1 and V2.0 did not indicate I/O errors correctly, so such
        things as a full disk would not be signalled and the user might be
        misled into believing that all went well.  Of course, the restore
        will fail.

        V2.1's BACKUP will correct this.

        It looks like the recipe for converting from V1.1 to V2.0 might be
        to see if you can restore your database on V1.1 before installing V2.0
        and thereby save yourself some grief.
        (But remember that RESTORE on V1.1 does not do the right thing for
        VARYING STRING -- ahhhh, the histories we live with!)


5. PROJECT CONVERSION TALE:

My team just finished a fairly large project using the following products:

		VMS	V4.2
		Pascal  V3.0
		Rdb/VMS V1.0
		VAX-11 Pascal Pre-Processor Version Rdb/ELN T1.1-00

As none of us had any previous experience with Rdb, we muddled through,
learning and developing tools as we went.  Our database had about 45
relations, about 75% of which had a field of type DATE (ie, a system
quadword timestamp).  All turned out just fine, and we had an extremely
happy customer. 

Then they wanted to update to Rdb/VMS V2.0 running under VMS V4.3.

We took all the source and tried to recompile it under Rdb/VMS V2.0 so that
we could exterminate any bugs before the customer found them crawling
around their machine room. 

It took 10 days!  And here's why.
	
Rdb/VMS V1.0 used the Rdb/ELN Pascal Pre-Processor, which declared fields
of type "TEXT SIZE IS 1" as "Packed Array [1..1] of Char" and fields of
type "DATE" as "Packed Array [1..8] of Char." 

Rdb/VMS V2.0 uses the Pascal Pre-Processor V1.2, which declares fields of
type "TEXT SIZE IS 1" as "Char" and fields of type "DATE" as "[Byte (8)]
Record End".  This latter definition is a real surprise, because the Include 
File that the Pre-Processor brings in, namely SYS$LIBRARY:RDBVMSPAS.PAS, 
declares two types: 

	RDB$_DATE_TYPE     = Packed Array [1..8] Of Char;
	RDB$_QUADWORD_TYPE = Packed Array [1..8] Of Char;

However, whenever it sees a field of type DATE, it uses neither of these
definitions, but rather uses its own "[Byte (8)] Record End" construct.  A
variable that is a record type cannot be used in comparisons without being
type cast to a non-record type. 

So what did we have to do?

1) Change all variables that were declared as "Packed Array [1..1] Of Char" 
   to be declared as "Char".

2) Change all variables that were declared as "Packed Array [1..8] Of Char" 
   and were being used as quadword times to be declared as 
   "[Byte(8)] Record End".

3) Type cast all variables declared as in (2) to Packed Array [1..8] of Char 
   if they were being used in comparison operations (see the sketch below).
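
A minimal sketch of change (3) in VAX Pascal.  The program and variable names
are made up for illustration; "[Byte(8)] Record End" is the type the V1.2
Pre-Processor generates, and "::" is the VAX Pascal type-cast operator.

Program Date_Cast_Demo (Output);

Type
    Octet_String = Packed Array [1..8] Of Char;  { same layout as RDB$_DATE_TYPE }
    Rdb_Date     = [Byte (8)] Record End;        { what the Pre-Processor generates }

Var
    Last_Run, This_Run : Rdb_Date;

Begin
    { Fill both quadwords through the cast, just to give them a value. }
    Last_Run::Octet_String := 'ABCDEFGH';
    This_Run::Octet_String := 'ABCDEFGH';

    { A record type cannot appear in a comparison directly, so cast both }
    { sides to the equivalent packed array of char first.                }
    If Last_Run::Octet_String = This_Run::Octet_String Then
        Writeln ('Timestamps are identical');
End.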

So we have several questions:

??? 1) Why was the Rdb/ELN Pre-Processor replaced by a different one?
??? 2) Why was the consistent declaration of fields of type "TEXT SIZE IS x"
       modified so that it's different if x = 1?
??? 3) Why are the pre-declared RDB$_DATE_TYPE and RDB$_QUADWORD_TYPE types
       passed over in favor of the very clumsy "[Byte(8)] Record End" type?
??? 4) Is any of this likely to be rectified in a future release?
??? 5) Are there any plans to modify other data types in similar ways in
       future releases?



6.  CDD CONSIDERATIONS:

    Just a slight problem with the restore/backup for converting V1
    to V2 databases. I got our ops group to do a backup of all known
    databases before the upgrade, then a restore afterwards, but I had
    not read up fully on the restore operation and assumed that it
    would put the CDD definitions back where they had come from.  In
    fact it puts the CDD definitions in CDD$DEFAULT unless you specify
    a location in the restore command. I feel that the documentation
    in the release notes about this part should have been a bit clearer.
    The release notes could also have mentioned that the restore fails
    if the CDD definitions already exist at the specified CDD node.
    Maybe another parameter for the restore?
    

7. DATE CONFLICT AGAIN:

I've encountered what seems to be an Rdb V1.1/V2.0 version conflict
with the DATE datatype when using CALLABLE RDO within a PASCAL program.
                       
Under Rdb v1.1, if I use Callable RDO to fetch a DATE data type field from a 
relation, I always get a date/time value in ASCII format.

Under Rdb v2.0, if I fetch the same DATE field from the same relation, I
always get a date/time value in binary format.  From reading the release notes, 
this is what should be expected.

But this is a problem, because it means that in order to have one program that
will run under either version of RDB, I must do one of the following:

	-  not use the DATE datatype to store a DATE value if I am going to use 
	   CALLABLE RDO to fetch that date.

	-  use the DATE data type and use Callable RDO to fetch it, and have
	   the program determine what version of RDB it is running under and
	   act accordingly (a rough sketch of this appears after the code
	   extract below).

	-  use Rdb/ELN data manipulation statements instead of CALLABLE RDO
	   (At this point, this is not an option we want to consider)


Have I overlooked something here?  If anyone knows of another possible
solution I would be happy to hear about it.

Below is an extract of the existing PASCAL code which gives an ASCII date/time
value under Rdb V1.1 and a binary date/time value under Rdb V2.0.

{*****************************************************************************}
{* PASCAL variable declarations *}

VAR
	stat                    : INTEGER;
	typ                     : PACKED ARRAY[1..1] OF CHAR;
	address,name, phone     : PACKED ARRAY [1..20] OF CHAR;
	p16chr 			: PACKED ARRAY [1..16] OF CHAR;	
	time                    : PACKED ARRAY [1..2] OF INTEGER;
	tstpath	                : VARYING[60] OF CHAR;


{* extract of code lines  *}

	tstpath := 'PRS$SYSTEM:PRS'; 

	stat := RDB$INTERPRET('invoke database filename '''+tstpath+'''');
	stat := RDB$INTERPRET('start_transaction read_only');
	stat := RDB$INTERPRET('start_stream stream using st in USER');

	stat := RDB$INTERPRET('fetch stream');

	{ LAST_ACCESS is the DATE field; it lands in p16chr, and this is    }
	{ where the ASCII (V1.1) versus binary (V2.0) difference shows up.  }
	stat := RDB$INTERPRET('get' +
	               ' !val = st.address;' +
	               ' !val = st.type;      !val = st.name;' +
	               ' !val = st.phone;     !val = st.last_access;' +
	               ' end_get',
	               %stdescr address,
	               %stdescr typ,
	               %stdescr name,
	               %stdescr phone,
	               %stdescr p16chr
	              );
!***************************************************************************
!the "USER" relation definition is as follows:

define relation user.
userid		datatype signed longword.	
address		datatype text	size 20.
type		datatype text	size 1.	
name		datatype text	size 20.
phone		datatype text	size 20.
last_access	datatype date.
end user relation.
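
A rough sketch of the "act accordingly" option above: rather than asking which
Rdb version is installed, inspect what actually came back in P16CHR.  Under
V1.1 the field arrives as printable ASCII date/time text, under V2.0 as an
8-byte binary quadword, so non-printable bytes are a strong hint.  This is
only a heuristic, and the program and names below are made up for illustration.

{*****************************************************************************}
Program Date_Format_Probe (Output);

Type
    Buf16 = Packed Array [1..16] Of Char;

Var
    p16chr : Buf16;

Function Looks_Like_Ascii (buf : Buf16) : Boolean;
Var
    i  : Integer;
    ok : Boolean;
Begin
    ok := True;
    For i := 1 To 16 Do
        { anything outside the printable range suggests a binary quadword }
        If (buf[i] < ' ') Or (buf[i] > '~') Then
            ok := False;
    Looks_Like_Ascii := ok;
End;

Begin
    { ... fetch ST.LAST_ACCESS into p16chr with RDB$INTERPRET as in the
      extract above ... }
    p16chr := '22-AUG-1986 16:0';            { stand-in value for the demo }

    If Looks_Like_Ascii (p16chr) Then
        Writeln ('V1.1-style ASCII date/time')
    Else
        Writeln ('V2.0-style binary quadword');
End.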

8. GOING BACKWARDS V2.0 TO V1.1:

   RESTORE will work except where there are bugs in the V1.1
   restore.  A case that comes to mind is that varying string fields were
   not done correctly.  There were others: empty blobs, scaled decimals,
   very large constraint/view definitions.

9. LOCKING:

   If the NOWAIT qualifier is used on a transaction under V2.0, the automatic
   lock granularity (ALG) adjustment does not function. The transaction
   is aborted when RDB encounters its FIRST lock conflict, regardless of the
   level at which that lock is placed. Now, the whole idea of ALG is to reduce
   the number of locks that RDB must hold. This is done by placing any lock
   at the highest possible level and only 'demoting' the lock when a lock
   conflict is encountered. Thus the FIRST conflict will probably be on a lock
   covering the best part of a whole relation!

   This bug is fixed in V2.1. The transaction is only aborted if a lock
   conflict occurs at RECORD level.
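
   A sketch of the kind of transaction that trips over this, reusing the
   hypothetical REL and BAR names from the earlier examples (the placement
   of the NOWAIT keyword may vary -- check your RDO reference):

   start_transaction read_write nowait

   for r in rel
       modify r using r.bar = 10 end_modify
   end_for

   commit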

10. SEGMENTED STRINGS:

    Segmented strings of a non-standard length are perfectly allowable
    under RDB V1.1.  RDB V2.0, however, has a bug which does not
    allow the creation of segmented strings of any length other than 512 chars.
    Apparently it can be overcome by editing the RDB system relations,
    but this is

     a) tacky,

     b) highly undesirable on a production system, and

     c) UNSUPPORTED.

22.1. "Straight to v2.1 ???" by RDGE28::KEW (Jerry Kew dtn 830-4373) Fri Aug 22 1986 17:22
Has anybody out there found out if you can go straight from V1.1 to V2.1?
If not, wouldn't it be worth checking in the RDB conference?  From the
text of .0 it would appear that a lot of the problems are cleared up in 2.1.

Should we therefore be taking the cluster straight to RDB v2.1?

It would be worth identifying what the rest of Europe is doing with RDB, as
that could well impact the applications under *development* as opposed to
Debbie's application in support: there is presumably a possibility that
the development applications are going to go onto target machines with
other RDB applications already running on them. Compatibility of RDB
version requirements would then be at issue.


Just a few ramblings   :-)

Jerry

22.2. "RDB V2.1, Good idea, but...." by VULCAN::PLATT () Tue Aug 26 1986 09:49
    You're quite correct, many of the problems of conversion do go away
    with RDB V2.1. However, there is a big problem with RDB V2.1: it
    hasn't been released yet! As far as I know it is due to be released
    at the end of August, which may be in time for the cluster. If this
    is the case then I hope that whoever it may concern will ensure
    that we have this on the cluster.

    As to your second point about the target machines, you are again correct.
    The project leaders responsible for the UK RDB projects are currently
    looking into the effect of upgrading the live machines. However, I
    am not sure if this issue has been raised in 'European' development
    as yet.
    
    Yet more ramblings,
    
    Pete' Platt
    
    
22.3. "October at least for 2.1" by RDGE21::MORRIS () Tue Aug 26 1986 12:11
    When it's released it's not going to make life any easier for the
    cluster management team 'cos you need VMS 4.4 and that's a bit flaky.

    Wait for VMS 4.5 in October and then you can go 2.0 >> 2.1 >> 2.2.

    For those who are interested, 2.2 supports DDAL (distributed
    database access layer), which allows the subsetting, aggregating and
    distribution of Rdb databases in an efficient and automatic manner;
    that's why we (EUC) are so interested in it.
    
    Chris...