Title:                   DEC Rdb against the World
Moderator:               HERON::GODFRIND
Created:                 Fri Jun 12 1987
Last Modified:           Thu Feb 23 1995
Last Successful Update:  Fri Jun 06 1997
Number of topics:        1348
Total number of notes:   5438
340.0. "Sybase V4.0 Review (long)" by IND::SANTIAGO (VMS and U___, perrrfect together) Mon May 01 1989 08:46
What follows is a long-winded brain dump from a Sybase 'Fast Track'
course I just completed for a customer project. It cleared up for
me just what this product is and its limitations. Hope this helps
those who gather info from afar.
/los
Recently I had the opportunity to attend a 1 week Sybase 'Fast Track'
training course on the new V4.0 product (still in Beta test). This is
the version they're bringing to VMS, sun, ultrix, os/2, mac ...
What follows are my course notes, so they track the material as it was
presented. I hope they provide some insight into this product, which to
date has gotten a lot of press in this area (NY Financial).
/los
dtn.461-1918
This was my first real exposure to the product in a non-competitive
environment. The product can be characterized as a step up from 'dbase'
in functionality; it does not present itself as a 'database' system,
but rather as an application development tool. From a tool point of view,
the product is quite easy to use (assuming you know both C and SQL
(their version)) for developing applications.
The Environment
In the audience were several customers, all with prior pc-database
experience, and most were implementing under UNIX (all except two). The
labs were run on a Sun 3/260 with 16MB and two disk drives (600MB total),
which provided ample compute power so long as everyone (24 users)
didn't want to start their lab at the same time. For labs that
required only reading from the lab database (a small 2MB publications
demo database), the machine was fine. Performance really began to
be a factor during metadata changes and programming.
The simple queries were fast and the tools used were very simple. The intent
here is not to develop sophisticated applications, but to be able to develop
using a knowledge base of PC development tools (paradox, dbase, reflex,
rbase...).
Tools
Notable application tools include the creation of stored procedures
(showing a bit of their Britton Lee heritage) through a tool called 'dwb',
the data workbench. Here you get to enter 'automatically' committed SQL
commands to develop your database and procedures. You had the option of
'batching' SQL statements within a
begin tran
SQL statement
...
commit tran
block, but most of the course was done one line of SQL at a time. This
made undoing mistakes in definitions costly (more on this later). The
majority of the course (4 days) was spent on dwb, with a half-day look at
a really simple-to-use SQL environment called ISQL (interactive SQL).
Attached at the end of this document is a hack I wrote during the last
day that does about 99% of what ISQL does. The point being that both
ISQL and DWB were constructed using a very simple-to-use run-time
library known as 'db-library'. This topic was covered on the last day,
and we were told it really fell under the category of db-library
programming, another course.
T-SQL
Transact-SQL is what it's called, and it's missing a number of
elements that have instead been provided via stored procedures (SQL
sub-routines), which made development somewhat tedious. Quite a number
of extensions have been added, namely (a rough sketch follows the list)
access to temporary tables
access to before-image / after-image data in triggers
access to global (Sybase) variables
access to image/text (i.e. segmented string) datatypes
if..then..else
while
begin..end
waitfor ...
column level security
indexes can disallow duplicate keys and duplicate rows
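As a rough illustration of how these extensions read (my own sketch, not
course material, assuming the pubs-style demo tables used in the labs), a
batch using a temporary table, a local variable, if and while might look like

    create table #big_sales (title_id char(6), qty int)  /* temporary table */
    insert into #big_sales
        select title_id, qty from sales where qty > 50

    declare @n int                                        /* local variable */
    select @n = count(*) from #big_sales
    if @n = 0
        print "no large sales today"
    else
        while @n > 0
        begin
            print "counting down..."
            select @n = @n - 1
        end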
SQL Server
All operations on the database are carried out by a single process
(detached job) called the dataserver. Communication with the server is
through the SQL language (using either TCP/IP or DECnet on VMS); the
statements are sent to the dataserver for interpretation. This literally
means sending ASCII SQL to the server. Return information is in the TDS
(tabular data stream) format, a counted, ASCII-encoded message protocol
containing just the rows selected by the query.
The product allows the pre-compilation and storage of SQL 'procedures'
known as 'stored procedures' (they used to be called 'stored commands'
under Britton Lee), which may take arguments like any 3GL routine and
return a status to the caller. Many times we were told that this method
of development has a 5:1 performance improvement over embedded SQL,
since using 'just SQL' requires a compilation and execution plan to be
constructed on each call. Using stored procedures precompiles the
request and avoids this.
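As a sketch of what one looks like (my own example against the pubs-style
demo tables, not course material), a parameterized lookup that returns a
status might be written and invoked as

    create procedure price_lookup @min_price money
    as
        select title_id, title, price
        from titles
        where price >= @min_price
        if @@rowcount = 0
            return 1          /* nothing found */
        return 0

    declare @status int
    execute @status = price_lookup @min_price = 19.95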
Multiple servers are allowed on the same machine and each server can
handle multiple databases. The naming of every object within the database
includes an optional server name. The full syntax for an object is
[server.][database.][owner.]object[.column]
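For example (the server and owner names here are illustrative), the price
column of the demo titles table could be written fully qualified, or with
everything defaulted

    SYBSERV.pubs.dbo.titles.price     /* fully qualified */
    titles.price                      /* current server, database and owner */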
Each database, however, can only be accessed by one server at a time. A
program can make use of more than one server (hence database) at a time,
but 2PC is not formally supported. A 3GL routine library is provided
for those that require 'distributed transactions', and we were told a
sample application (using db-library) would be provided showing how to
do this (it was not covered as part of this course).
Locking
All reading is done without locking, unless explicitly called for
through a 'HOLDLOCK' qualifier on the select statement (level II
consistency). Update locks are held until a commit. It may be
important to note the lock types that are user selectable. They are
shared (read and write)
exclusive
intent (read for update - triggered by a SELECT ... HOLDLOCK command)
demand (allows current shared trans to finish, blocking new ones)
We were also told that
rows are never locked (only at page level)
pages are usually locked
extents are always locked
tables are sometimes locked
indexes are never locked
and when we pressed on these, we were told
operations are so fast it wasn't worthwhile doing row locking
index writing blocks other activities
selects (WITHOUT the 'HOLDLOCK' qualifier) don't lock the page
and are the norm in Sybase
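For example (my own illustration against the demo tables), the only
difference is the qualifier after the table name

    select title, price from titles where title_id = "BU1032"
        /* no lock held; the page is free for others immediately */
    select title, price from titles holdlock where title_id = "BU1032"
        /* shared lock kept until the transaction commits */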
Buffer Cache
Reading and writing is all through a buffer cache settable by the sa
(system administrator). (I believe the naming here is intentional. Never
was the term 'dba' mentioned, or I was asleep ;-). The manager of this
environment is envisioned to be the same person who manages the overall
computing system.)
The cache is written to a system-wide log file which sequentially logs
all updates and before-image copies. Upon commit of a transaction, the
log file is flushed (via a commit record). The writing back to the
database is not performed at commit, and additionally, the buffer
cache remains available for future read operations. Cache records are
forced to the log when space is required.
Log file
The database is only force-written when the log file is 'dumped' or
when a user executes a 'checkpoint'. Users of critical applications are
encouraged to checkpoint (at least twice in a row) during quiet periods.
The manner in which the log file is written creates an 'UNDO-REDO'
log. After a crash, uncommitted transactions are 'undone' by
putting back the before-image record copies, and committed transactions
are written to the database using the committed log copies. On server
startup, the server reads back the log and applies the necessary
roll-back/roll-forward to all databases managed by that server.
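For reference (my notation, not from the course handouts; the database and
dump device names are made up), the commands involved are of the form

    checkpoint
    dump transaction salesdb to logdump1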
Structure
The database environment is made up of a master database (which holds
all the system tables and security information and acts as a model for
the creation of user databases), a temporary database (to hold all
temporary tables) and various user databases. Temporary tables
(identified by a "#" prefix) are user selectable, but disappear once the
user logs off the system. Databases have the following limits
- 2KB per record (less 32b page overhead)
- 2KB per database page (fixed)
- 250 fields per record
- 256 records per page
- 2 billion (i.e. 2^31) tables per database
- 16 fields per composite index
- 256 bytes per index
- unlimited number of 'mirrors' (file level)
Definitions
Defining items in Sybase tends to be quite tedious. Some definitions are
done with SQL commands, others by calling a stored procedure. Also,
changes can be quite drastic:
- you can't change a field once it's been defined
( drop it and create another )
- you can't retrieve a table definition once defined!
- some definitions can only be retrieved by using a shell utility
( defncopy )
- you can't drop columns from a table
( copy data out and back into a new table )
Most labs during the course really showed off the rate of change in the
product. It would have been nice if all metadata changes were done through
a menu interface, or stored commands, or SQL, but not all three. The
following items are the newest and show the least integration
with the rest of the product.
RULES - field level checking; only provides 'picture' level
editing, but with a unix-like pattern syntax; for example, a field
that is numeric and coded as 4 digits, a decimal point and a
two-digit fraction (9(4)V99 for COBOL folks) would be defined as
sp_addtype numeric_type, 'char(7)', 'not null'
making use of a stored procedure "sp_addtype" and passing 3 args. One
would then define a rule to perform the picture editing as
create rule numeric_field_rule as
@numeric like "[0-9][0-9][0-9][0-9].[0-9][0-9]"
where "@numeric" is a token variable. To activate this rule one 'binds'
it to the type as
sp_bindrule 'numeric_field_rule', 'numeric_type'
and then incorporates it into a table definition as
create table foobar (
amount numeric_type,
name char(20) not null)
DEFAULTS - default field values; used similarly to rules; for the above
field, if you wanted a default value of 100 you would
create default numeric_default as '100.00'
sp_bindefault numeric_default, 'numeric_type'
Again note the use of both stored procedures and SQL-like commands. I'm
unable to present the menu structure (which was a hurdle in itself).
DATATYPE - 13 varieties, not including zoned numeric (i.e.
numeric(9,2)), but including a 'datetime' field.
char(n) varchar(n) text
binary(n) varbinary(n) image
int smallint tinyint
float money bit
datetime timestamp
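To put a few of these together, a table definition (my own illustration,
not from the labs) might read

    create table clippings (
        clip_id    int          not null,
        headline   varchar(80)  not null,
        price      money        null,
        added_on   datetime     null,
        scan       image        null)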
TRIGGERS - a routine to execute upon an update, delete or insert to a table.
One per table. Binding a new trigger to a table trashes the old binding
(but not the trigger). Anything may be performed here; normally, if the
operation logically failed (e.g. a field lookup returns no rows), this
is the place to code a rollback. Note that triggers are invoked after
the value change. Dropping a table also DROPS its trigger. This is
somewhat of a pain, since you can't alter or delete fields in a table
other than by exporting the data, dropping the table, creating a new one
and reloading. There seemed to be no automatic way to get the trigger
back in. Also, triggers are not invoked during a load or a table 'truncate'.
During an update, delete, or insert, the server creates temporary tables
so that a procedure can be invoked during one of these operations. Hence
the term "trigger". Within this framework one can look at the
'deleted' table, which contains those rows that have been most recently
deleted from the table in question. The reference to these 'trigger'
tables is by the reserved names 'deleted' and 'inserted'. Note that this
naming is different from other temporary tables (which use a "#"
prefix).
The trigger can deduce what operation (update, delete, insert) caused
it to be invoked as well as which field has been altered (update only).
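A minimal sketch (mine, again assuming the pubs-style demo tables) of a
delete trigger that reads the 'deleted' table and rolls back when the
operation should not be allowed:

    create trigger titles_del_trig
    on titles
    for delete
    as
        /* refuse to delete a title that still has sales rows */
        if exists (select * from sales, deleted
                   where sales.title_id = deleted.title_id)
        begin
            print "title still has sales - delete rejected"
            rollback transaction
        end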
INDEXES - B-tree only, multi-key with up to 16 parts in a composite key.
Options include uniqueness and clustering (the key must be created before
load), which stores records physically in key order.
Load/Unload/Backup
Loading and unloading the database is provided via the bcp (bulk copy)
program or from within the dwb application. Within dwb the user
visually selects the table to copy in/out and the fields and associated
datatypes to use. On output a new file is created; on input the data is
appended to any already existing in the table. It may be important to
note that bcp operations are not logged. Also, use of the bcp facility
must be enabled for the database. The file is read/written a record at a
time, and the user selects how to delimit fields within a record and the
records themselves. To make the file compatible with PC tools, use "\t"
and "\n" (the "C" language escape characters for TAB and NEWLINE)
to delimit fields and records. For usage directly by the C language,
omit the field delimiter. For packed records, omit the record
delimiter.
I got conflicting messages on whether backups may be performed on-line.
On the one hand, since read operations don't lock by default, the answer
was yes; on the other, since part of the backup strategy was dumping the
log file, the answer was no. The methodology seemed to mirror the UNIX
'dump' utility in that dumps are done in levels.
Mirroring is also part of the product. The functionality seems to be
similar to VMS volume shadowing, in that one can
add mirrors (didn't hear a limit on the mirror set size)
mirror storage segments only
put segments online/offline
add a segment and place it in catch-up mode.
Growth
Disk space must be manually added to the database and storage segments,
since storage segments (i.e. file systems) under UNIX can't be
automatically extended. Under VMS I assume the product should be able to
extend a file. A database uses a storage segment until it is full, then
moves on. Deleted storage space is reclaimed once the transaction commits.
As far as segments go, the syntax to add them is similar to creating the
database in the first place. Also new in this release is the placement
of the log file on a disk other than where a storage segment resides.
The syntax for segment usage is
CREATE DATABASE dbname
[ON {DEFAULT|logicalname}[=sizeMB][,logicalname[=sizeMB]]...]
[LOG ON logicalname[=sizeMB]]
ALTER DATABASE dbname
[ON {DEFAULT|disk_name}[=sizeMB][,disk_name[=sizeMB]]...]
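A concrete instance (the device names are made up) that puts 20MB of data
on one device and a 5MB log on another, then later adds 10MB more, would
read

    create database salesdb on datadev1 = 20
        log on logdev1 = 5
    alter database salesdb on datadev1 = 10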
Security
Security is provided at two levels: commands and objects. For commands,
you can, by user, allow/disallow access to
the create command at one of the following levels
- table (includes indexes and triggers)
- procedures
- defaults
- rules
- views
the 'dump' utilities at one of the following levels
- database
- transaction
Objects can be secured for either direct or indirect access. Indirect
access is through stored procedures, direct access is at the SQL level.
This allows the sa to grant a user access to stored procedures, but not
the ability to manipulate the data directly with SQL. The levels of
access for direct access are
- select (column or all columns)
- update (column or all columns)
- insert
- delete
Permissions are enabled/disabled through the GRANT/REVOKE SQL commands
in the form
GRANT|REVOKE {select|insert|delete|update|execute|all}
    ON object[.column] TO|FROM user[,user]...
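For instance (the user names are made up, and price_lookup is the
hypothetical procedure sketched earlier), to let a clerk run a stored
procedure while limiting direct access to a single column

    grant execute on price_lookup to clerk1
    grant select on titles.price to clerk1, clerk2
    revoke update on titles from clerk1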
DB-Library
This is a sub-routine package for
C
COBOL
FORTRAN
pascal
Ada
to give these languages access to SQL and stored procedures. The library
provides routines for
initialization and cleanup
command set-up and execution
results processing
miscellaneous functions
authorizing a user to the server
All functions for "C" have a "dbxxxx" naming convention. However, for
other FORTRAN they're "fdbxxxx". No mention on just what COBOL, pascal
and Ada use. The routines were generally very easy to use and come
with online documentation (help on VMS, syman for UNIX).
Sample Application
The following is a hack I wrote in 2 hours which does 99% of what their
ISQL utility does. I provide it here to show the types of operations
that are possible, in particular ANY SQL command and the ability to
retrieve information about the database environment (list of tables,
fields, the column headings for a select statement) (issues I have
with SQL services for the ms-dos environment).
This program allows one to 'go back' and correct mistakes, an
issue I had with all the Sybase utilities. Once a command was
entered, it was gone. There is a round-about way of extracting
successful commands from a history stack through menu control, but once
logged out, some operations I've come to take for granted were difficult
if not impossible.
Hopefully this code is somewhat readable and gets the point across on
the ease of use (if you like "C" rather than embedded SQL ;-).
/*
* my-isql.C Sybase db-library program to provide an editable
* SQL environment replacing the Sybase 'ISQL' utility
*
* carlos santiago (aka /los)
* DEC NYC
*/
/*
* These includes must appear in this order; If anyone is interested in
* their contents, please mail me.
*/
#include <sybfront.h>
#include <sybdb.h>
#include <syberror.h>
/*
* Normal stdio header
*/
#include <stdio.h>
#define woof(msg) printf("%s: %s\n", argv[0], msg)
#define STACK 512
/*
* The following are external references to two Sybase provided error
* handlers. Error handling is quite precise as you can find out the
* procedure, line, table, column and data which caused the error.
* again, if anyone wants these, please mail
*/
extern err_handler();
extern msg_handler();
/*
* This program allows the user to select the server, user-id and password to
* use in talking to the server. This is beyond the level of
* functionality provided by the standard 'ISQL' utility. Defaults are
* to the user-id at login which contains database mappings for Sybase
* id and optional password. The server to use is obtained by a shell
* environment variable (SYBASE) or through a system-wide logical for VMS.
*/
main(argc,argv)
int argc;
char *argv[];
{
DBPROCESS *dbproc; /* 'thread' record */
LOGINREC *login; /* authorization record */
RETCODE return_code; /* Sybase define int */
char buffer[STACK]; /* command buffer */
char filnam[32]; /* filename buffer */
char *serv=NULL /* server name */
,*user=NULL /* user-id */
,*pass=NULL; /* user-password */
struct { /* use static storage */
char buf[STACK]; /* couldn't get malloc */
} cmd[STACK]; /* to work here?? */
int I, C=-1;
FILE *fp;
for (I=1; I<argc; I++) {
if (!strcmp(argv[I],"-s")) /* select a server */
serv = argv[++I];
else if (!strcmp(argv[I],"-u")) /* select user-id */
user = argv[++I];
else if (!strcmp(argv[I],"-p")) /* select password */
pass = argv[++I];
else { /* unknown option */
sprintf(buffer, "command %s is unknown", argv[I]);
woof(buffer);
}
}
/*
* Establish our error and message handlers; you'll note a lack of
* typical error handling after each routine. This is because they
* are being handled in the routines below, external to the main-line
* code, resulting in a simpler development of applications.
*/
dbmsghandle(msg_handler);
dberrhandle(err_handler);
/*
* See if we can login, and attach to the server, exit if not
*/
if ((login = dblogin()) == NULL)
exit(ERREXIT);
if (user != NULL) DBSETLUSER(login,user);
if (pass != NULL) DBSETLPWD(login,pass);
if ((dbproc = dbopen(login,serv)) == NULL)
exit(ERREXIT);
/*
* Once established, loop for SQL commands; each line entered is sent to
* Sybase and buffered. A line doesn't get executed until the dbsqlexec()
* function is called. Each individual line is sent/buffered with the
* dbcmd() function.
*/
for (;;) { /* loop until null */
for (I=0;;I++) {
/*
* For each line, construct a prompt which contains the
* statement number as does ISQL. Ask for input, break if none.
* The 'ISQL' utility required the word 'go' to process the
* command buffer.
*/
printf("sql-%d> ", I);
gets(buffer);
if (!strlen(buffer))
break;
/*
* "?" is a command I added to list all previously entered
* commands (correct or not).
*/
if (!strcmp(buffer,"?")) {
for (I=0; I<=C; I++)
printf("%d, %s\n", (I+1), cmd[I].buf);
I=-1; break;
}
/*
* "!n" is a command I added to recall a previous command (n) in
* a format similar to the UNIX history pull
*/
if (buffer[0] == '!') {
dbfreebuf(dbproc);
sscanf(buffer, "!%d", &i);
if (I < 1 || I > C) {
woof("command not in the stack");
I=-1; break;
}
dbcmd(dbproc,cmd[I-1].buf); /* use old cmd */
I=1; break;                 /* force execution below */
}
/*
* "@n" is a command similar to "!" but allows editing via the
* "vi" editor.
*/
if (buffer[0] == '@') {
dbfreebuf(dbproc);
sscanf(buffer, "@%d", &i);
if (I < 1 || I > C) {
woof("command not in the stack");
I=-1; break;
}
/*
* Get the old command and write it to a temp file using
* our process id; call the VI editor to edit and when
* we return, save and issue; assume nothing can go
* wrong here ;-)
*/
sprintf(filnam,"%dISQL",getpid()); /* temp file name */
fp = fopen(filnam,"w");
fprintf(fp, "%s\n", cmd[--I].buf);
fclose(fp);
/* With the file written, go edit it using VI */
sprintf(buffer,"vi %s",filnam);
system(buffer);
/* With file edited, read back and into buffer */
fp = fopen(filnam,"r");
strcpy(cmd[I].buf,"");
for (; fgets(buffer,sizeof(buffer),fp) != NULL; )
strcat(cmd[I].buf,buffer);
fclose(fp);
dbcmd(dbproc,cmd[I].buf);
I=1; break;
}
/*
* Append a space to the command. This is required since all
* dbcmd() does is append this string to the existing command
* buffer. This is identical in function to 'ISQL'.
*/
strcat(buffer," "); /* add separator */
dbcmd(dbproc,buffer); /* send to message box */
}
/*
* We're at the end of the input loop; check the 'I' variable
*  0 = stop and exit program
* >0 = we have a command to process
* <0 = go back to input loop
*/
if (!I)
break;
else if (I < 0)
continue;
/*
* execute the command as simple as possible; this is achieved
* with the following function calls:
*
* dbresults() - retrieve all the results of the command buffer
* dbprhead() - print column headings
* dbprrow() - print a line of returned data
*
* Since the user can enter multiple SQL commands (and return
* multiple results), we need to execute a while loop to get them
* all.
*
* For each execution, call dbprhead() for titles, then
* dbprrow() for the data.
*/
if (dbsqlexec(dbproc) != FAIL)
while (dbresults(dbproc) != NO_MORE_RESULTS) {
dbprhead(dbproc);
dbprrow(dbproc);
}
/*
* With the command executed (or error resulted), save the
* buffer the server has into its command buffer; then do it
* all over again.
*/
if (++C >= STACK) C=0;
dbstrcpy(dbproc, 0, sizeof(buffer), buffer);
strcpy(cmd[C].buf,buffer);
}
dbexit();
exit(STDEXIT);
}
/*
* This routine is quite simple but SURPASSES the 'ISQL' utility customers
* pay for. About 5 customers in the class liked it better than theirs
* and took printouts (less comments I've added here ;-).
*/