
Conference ulysse::rdb_vms_competition

Title: DEC Rdb against the World
Moderator: HERON::GODFRIND
Created: Fri Jun 12 1987
Last Modified: Thu Feb 23 1995
Last Successful Update: Fri Jun 06 1997
Number of topics: 1348
Total number of notes: 5438

921.0. "RALLY & Rdb vs Oracle" by MARVA1::KIMMEL (Lorey Kimmel - EIS - DTN 341-2397) Mon Apr 29 1991 16:21

    Hi,
    
    I have been looking for Oracle vs. Rally/Rdb performance comparisons. 
    Everything I have found is Oracle vs. Rdb.  Is there any information on
    Oracle vs. Rdb using Rally?  I guess what I am really looking for is
    Oracle vs. Rally performance. 
    
    Any pointers or info would be greatly appreciated.
    
    (I have looked in the Rally competition notes file)
    
    Thanks in advance,
    
    Lorey
    
T.R  Title  User (Personal Name)  Date  (Lines)
921.1. by ROM01::FERRARIS (Discover wildlife... have kids!) Tue Apr 30 1991 13:39 (18 lines)
Lorey,

one common mistake customers make is to confuse the database _engine_ with
the tools that are layered on top of that engine. You can't compare Rally
with Oracle as a whole; you could reasonably compare the forms management
tool that Oracle sells, named SQL*Forms, with Rally BUT, as far as I can
tell, it's very hard to compare the performance of two 4GL tools without
also counting in the logical/physical design of the database the application
runs on, and the performance of the engine itself. Anyway, for a technical
comparison of the tools, I suggest you have a look at note 11 in the Rally
competition notes file.

Why do you need performance data? What is your exact problem? Perhaps if you
tell us, somebody could come up with a good suggestion...

Cheers,

	Max
921.2. "package vs package" by MARVA1::KIMMEL (Lorey Kimmel - EIS - DTN 341-2397) Tue Apr 30 1991 16:49 (23 lines)
    Well, you are right, I want to compare package to package (Oracle
    SQL*Forms with the Oracle DB vs. Rally with Rdb).  I have a customer who
    is considering converting from the Oracle environment to Rally/Rdb.
    We've almost got them convinced, but they want to be assured that they
    will not see any performance degradation in going to Rally.  They plan
    on using Datatrieve for most of their reports - I told them they could
    use Rally for some reports also.

    I know and YOU know that they can most likely expect increased
    performance, but me telling them that isn't quite enough.  They want
    some real benchmark numbers.  I've looked at all the stuff in the Rally
    Competition file, but there don't seem to be any real numbers.

    Again, any help will be appreciated.  The customer is going to
    evaluate SmartStar also, and we are a little afraid they just might
    start looking at some other products.  I'd hate to see us lose just
    because we couldn't provide some numbers.

    Thanks,

    Lorey
    
    P.S. I'll check out note 11
921.3. "See RALLY_V22 note 328" by KOBAL::KIRK (I've lost my hidden agenda!) Tue Apr 30 1991 19:59 (11 lines)
    Lorey,
    
    Graham Boundy in Toronto recently completed an extensive benchmarking
    exercise with RALLY & Rdb. I think that the results were very
    promising. 
    
    I would suggest that you see note 328 in the RALLY_V22 conference, and
    contact Graham for more details.
    
    Good luck,
    Richard
921.4. "Try asking the migration center" by ROM01::FERRARIS (Discover wildlife... have kids!) Thu May 02 1991 20:06 (15 lines)
>    I have a customer who is
>    considering converting from the Oracle environment to Rally/Rdb. 

As far as the conversion is concerned, if I were in your shoes I'd try to contact
Dave TYFYS::Munns and ask for information on the Oracle->Rdb conversion with
the Trifox tools. I remember reading somewhere that you get performance gains
just by putting the Trifox tools over the Oracle database, and even more if you
replace Oracle with Rdb. The interesting part is that we (as DEC) can provide
consultancy in doing this, and can even do it ourselves, via the Migration
Center. You could have them convert their application this way and consider
using Rally for new applications.

Hope this helps a little,

	Max
921.5. "Real performance numbers please" by TRCA03::MCMULLEN (Ken McMullen) Thu May 02 1991 21:08 (8 lines)
>>I remember reading somewhere that you get performance gains just by
>>putting the Trifox tools over the Oracle database, and even more if
>>you replace Oracle with Rdb
    
    I have read the same thing, but the source is very dubious (i.e.,
    marketing literature). Has anyone done a benchmark between the Trifox
    tools and Oracle's SQL*Forms? Has anyone compared the Trifox tools
    against an Oracle database and then against an Rdb database? 
921.6. "Comparison?" by MARVA1::KIMMEL (Lorey Kimmel - EIS - DTN 341-2397) Fri May 03 1991 17:22 (13 lines)
>As far as the conversion is concerned, if I were in your shoes I'd try to contact
>Dave TYFYS::Munns and ask for information on the Oracle->Rdb conversion with
>the Trifox tools. 

    I have contacted the Center for Migration Services; they are too
    expensive.  I'm still looking for performance comparisons.  I see a lot
    of stuff on Oracle, and a lot on Rdb; is there anything that compares
    the two in one document?  Maybe I'm just not searching correctly. 
    
    
    I'd appreciate any pointers to existing performance comparisons.
    
    Lorey
921.7. by TYFYS::DAVIDSON (Michael Davidson) Mon May 06 1991 17:36 (21 lines)
>>    I have contacted the Center for Migration Services; they are too
>>    expensive.  I'm still looking for performance comparisons.  I see a lot
>>    of stuff on Oracle, and a lot on Rdb; is there anything that compares
>>    the two in one document?  Maybe I'm just not searching correctly. 
    
    Not any more expensive than any other Profit and Loss Center within
    Digital.  Come July 1 you are going to find that a lot of organizations
    that used to be FREE are now EXPENSIVE.  How expensive is expensive when
    WE have the experience with the tools -- you don't?  The Center for
    Migration Services was set up as a Profit and Loss Center by the
    Corporate VPs, not us.  However, there may be changes in the works
    for the future.

    Lorey, performance comparisons really are a moot point if you aren't
    going to work with the CMS.  We have an EXCLUSIVE contract with
    Trifox.  All use of the tools and orders for runtime licenses come
    through us.

    Moderator: see also the related conferences:
      CONVERSION
      DATABASE_CONVERSIONS    
921.8. "What about my original request?" by MARVA1::KIMMEL (Lorey Kimmel - EIS - DTN 341-2397) Mon May 06 1991 18:36 (24 lines)
    re .7
    WE (DEC) are not going to do the actual conversion.  The customer has
    $15K to spend, PERIOD!  They are an educational environment.  The
    decision to use Rdb is a fact.  They are comparing Rally and SmartStar
    to convert to.  They asked for performance comparisons more as a
    validation that Rally/Rdb will not be worse than SQL*Forms/Oracle.

    I don't understand why we don't have this kind of information, or if we
    do, why it is so hard to find.  I understand that it is not just a
    performance issue, and so do they.  This sale is almost in the bag, but
    it just isn't going to look good if we can't come back with some simple
    performance numbers.  They are an all-DEC shop, and I'd like to keep it
    that way. 

    <Frustration level high>  I feel like I've been asking for the moon,
    and everyone has offered good advice, but my original question has not
    yet been answered.  I already tried all the other avenues mentioned.  I
    just wanted some numbers.  If we have to add some kind of disclaimer to
    the numbers, that is fine.  If they need a PID to see numbers, that is
    fine.  If they have to be unofficial, and presented as examples, that
    is fine.  I'm not trying to fight Oracle here, they've already been
    beaten (by price), I'm just trying to help give the customer a warm and
    fuzzy feeling.  I guess the answer to my original question is: we don't
    have and/or don't provide those kinds of statistics to our customers.
921.9. "Rally or Oracle - doesn't really matter" by TRCA03::MCMULLEN (Ken McMullen) Mon May 06 1991 23:49 (43 lines)
    Lorey,
    
    I understand your high level of frustration, having in the past
    attempted the same exercise. Unfortunately, you are most likely not
    going to find the information you require. If you do, you will most
    likely not be able to share it with the customer.
    
    When Oracle sells their software, the contract contains clauses about
    not using the software for benchmarks. This makes it very hard to
    obtain performance numbers. Oracle will perform benchmarks in a
    competitive sales situation, but they will not allow customers to
    release the information. 
    
    Now let's look at your customer's environment. You have not said what
    type of applications they currently have, how many users, their
    complexity, database size... I am sure that if the application supports
    10-20 users, and the databases are less than 500 MB, then the 4GL and
    database will not really matter. That is, Oracle's or DEC's or
    SmartStar's or Ingres' tools will all provide about the same
    performance. When the applications grow in number of users, complexity,
    and database size, your customer will want good scalability with their
    existing tools, or the ability to co-exist with other tools/software
    that provide the required growth. That is what really differentiates
    4GLs today. No 4GL I have had experience with supports large numbers of
    users and/or complex applications very well. You can sometimes throw a
    lot of hardware at the application, but that can get too expensive.
    
    I do not know how you will convince the customer that Rally will
    perform as well as or better than SQL*Forms. It should not be worse!
    A long time ago both DEC and Oracle bought some components of their
    4GLs from the same source. Just take a look at the action sites in a
    Rally application and an SQL*Forms application...hmmmm. 
    
    Remind your customer about Oracle's contracts and that public
    performance numbers may not be obtainable.
    
    good luck,
    
    Ken McMullen
    
    P.S. Don't we give our software away to educational institutions? Why
    would SmartStar be in the running? I hope this potential Rally sale is
    not costing us money. 
921.10. "Magic Numbers?" by TROA02::NAISH (Paul Naish DTN 631-7280) Tue May 07 1991 15:50 (30 lines)
    Lorey, I think a number of us share your frustration and would love to
    have 'magic numbers'. Unfortunately, it's like the ad says, "actual
    mileage may vary with driving conditions" -- so it goes with performance
    numbers.
    
    If ORACLE published TPC-A benchmark results, that would give you at
    least an apples-to-apples comparison of user-based performance. Their
    TPC-B numbers are very questionable and I would not recommend comparing
    them against our TPC-B results.
    
    I have a similar problem converting a DG customer to VAX. We have
    already determined that we need to run a benchmark in order to
    determine sizing for the needed performance. Sales also agrees with this.
    
    Not to confuse matters more, but it has been suggested by one ORACLE
    consulting firm, and by an article in UNIX World, that the concept of
    converting a 4GL is counterproductive. The reason a 4GL is used is to
    reduce development time. When converting, new features of the target
    environment will likely not be fully utilized, because the conversion
    utility must try to maintain the context of the original 4GL. I think
    this point applies more to SMARTSTAR than to TRIFOX. So converting to
    RALLY means using the existing system as the detail design
    specification and re-developing instead of converting. This permits
    full use of RALLY features but, more important, assures that
    performance features (bottlenecks) are handled properly. Remember, you
    can provide all the numbers needed, but the new development must be
    done properly. Otherwise, they end up with a poorly performing system,
    and with a 4GL this is very easy. Make sure that perceptions are set
    properly so that they don't come back on you if the system does not
    perform up to expectations.
921.11. by TROA02::NAISH (Paul Naish DTN 631-7280) Tue May 07 1991 15:53 (2 lines)
    Lorey, also remember that they just might not have enough money in their
    budget to do what they want. $15K is not very much. 
921.12. "Don't expect TPC-A from Oracle" by IJSAPL::OLTHOF (Henny Olthof @UTO 838-2021) Tue May 07 1991 22:49 (21 lines)
    re 10:
    
    Sorry Paul, I just attended a three-day seminar/training at Oracle with
    lots of Oracle (and Digital) users there. One of the customers asked:
    "Why don't you (Oracle) publish TPC-A numbers?" The answer was that
    "Oracle wants to compare database performance, not everything around it
    like communication, forms, etc. You CANNOT compare one TPC-A benchmark
    with another!". Well, this really amused the audience, but no argument
    from the audience helped to change their minds. 
    
    Guess it's comparable to a person who wants to buy a fast car.
    He's interested in the top speed, not the number of cylinders or
    the horsepower of the engine. A fast engine with bad aerodynamics does
    not break speed records.
    
    Bottom line, don't expect any TPC-A numbers from Oracle. Even worse,
    they claim to have the record on VAX with 425 TPC-B TPS on a 4*6560.
    They made a clear statement that every effort will be made to keep that
    position, no matter what (TPC-XXX) numbers Digital releases. So why
    don't we prove that we can do better by upgrading the 4*6540 cluster we
    used to get to 300 TPS to a 4*6560 cluster as well?
921.13. "db vendors like to focus on the db" by MRKTNG::SILVERBERG (Mark Silverberg DTN 264-2269 TTB1-5/B3) Wed May 08 1991 14:33 (13 lines)
    We are currently in discussions with Oracle to support not only the
    TPC-B testing on Digital systems, but TPC-A as well.  Oracle, like
    most RDBMS vendors, is only really interested in the performance of
    the back-end database, and sees TPC-A as primarily a hardware/system
    test that primarily benefits the system vendor.  Other vendors,
    notably INFORMIX, have supported other system vendors in their TPC-A
    efforts, and we will be using that as leverage.
    
    Also, what really counts is $k per transaction, not just the raw
    number of tps.
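
    A quick Python sketch of that arithmetic (every figure below is
    invented for illustration; per the TPC rules quoted later in this
    topic, the whole configuration -- system, terminals, and network --
    is what gets priced):

        # Hypothetical $/tps arithmetic; all figures are invented.
        system_a = {"tps": 100, "priced_config_dollars": 3_000_000}
        system_b = {"tps": 60,  "priced_config_dollars": 1_200_000}

        for name, s in (("A", system_a), ("B", system_b)):
            print(f"System {name}: {s['tps']} tps at "
                  f"${s['priced_config_dollars'] / s['tps']:,.0f}/tps")
        # System A: 100 tps at $30,000/tps
        # System B: 60 tps at $20,000/tps  (lower raw tps, better $/tps)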
    
    Mark
    
921.14. "Thanks, I needed that!" by MARVA1::KIMMEL (Lorey Kimmel - EIS - DTN 341-2397) Wed May 08 1991 22:41 (25 lines)
    WOW!  Thanks for all the good input!  At least now I know where I
    stand.  I can still be frustrated, but for a different reason.

    re .9 - Thanks for the sympathy and advice.  The info about
    non-published figures from Oracle I can use.  We won't do a benchmark
    in this case, it's not worth it.
    No, we really aren't spending too much money on this.  Most of my
    research is being done in my "spare" time.
    Yes, we give software to educational institutions, but so does SmartStar!

    re .10 - We are planning to redevelop, not convert.  The customer is
    creating a functional spec based on their current system right now.

    re .11 - Due to the cost, it looks like they are going to have to do
    most of the development work themselves.  They have plenty of time to
    train (I told them Rally and RDB training is a MUST), and plenty of
    time to develop.  We will mainly do the RDB design with that $15K, with
    a little Rally consulting and support.

    Thanks to all who responded.  At least I now know I'm not alone in
    this, and can format some kind of report for the customer.

    It still would be nice if we could create and run a benchmark
    ourselves comparing the products, but I'm sure there are a lot of legal
    issues there.
921.15. "When available, final outcome please" by TRCA03::MCMULLEN (Ken McMullen) Wed May 08 1991 23:05 (6 lines)
    Lorey,
    
    Your note generated a lot of interest. Let us know what the customer
    finally decides on and why they choose that direction.
    
    Ken
921.16. "We're so nice to help our "friends" at Oracle." by 17301::LANGSTON (assimpleaspossiblebutnotsimplr) Thu May 09 1991 01:11 (47 lines)
RE: 921.13 by MRKTNG::SILVERBERG

>    We are currently in discussions with Oracle to support not only the
>    TPC-B testing on Digital systems, but TPC-A as well.

We?  Who exactly (what organization) is this we?

>  Oracle, like
>    most RDBMS vendors, is only really interested in the performance of
>    the back-end database, and sees TPC-A as primarily a hardware/system
>    test that primarily benefits the system vendor.

I don't buy this for a second.  For one thing, Oracle is more than just an
"RDBMS vendor."  They sell 4GLs, a network piece, financials, CASE tools,
a front-end graphical design tool, etc.  While I can't speak for the quality
of these other products, I'm sure their interests could be served by releasing
TPC-A numbers, assuming the numbers are as good as their TPC-B numbers; a TPC-A
test could show the performance of SQL*forms, SQL*net and all their other 
SQL*everything.

But wait a minute: are "we" going to support them for TPC-A, as you say in your
first sentence, or are they not interested, as you imply in your second?  

Maybe they are doing TPC-A because it's "primarily a hardware/system test that 
primarily benefits the system vendor [Us in this case]."  
...yeah, that's the ticket...


>  Other vendors,
>    notably INFORMIX, have supported other system vendors in their TPC-A
>    efforts, and we will be using that as leverage.

"We" will be using *what* "as leverage" with who to do what?


>    
>    Also, what really counts is $k per transaction, not just the raw
>    number of tps.

Yes, but that cost ($k per transaction) includes the whole enchilada.  According
to the 10 November 1989 TPC BENCHMARK(TM) A, Standard Specification, 
Clause 9 - Pricing:

"9.1.2   The proposed system to be priced is the aggregation of the [System 
Under Test], terminals and network components that would be offered to achieve
the reported performance level.[...]"
  
921.17. "ask USS" by DATABS::JOEDAD::NEEDLEMAN (today nas/is, tomorrow...) Thu May 09 1991 15:39 (31 lines)
    re .16

    Mark Silverberg was an (the?) Oracle proponent in FABS until recently.
    He has just joined USS to focus on UNIX databases. According to a prior
    entry in this conference, he is an ex-Oracle employee as well. 


    As for TPC-A: my own cut at this is that Oracle would never run a test
    and publish results that make themselves look bad. Without using ACMS
    (or Transarc or Tuxedo), they would perform horrendously on TPC-A,
    which is an OLTP benchmark. TP1 or TPC-B is a totally different story,
    hence their numbers and their attempt to call it OLTP.

    Oracle had employees in an advanced ACMS class I attended over 2 years
    back. One was an ex-DECcie and his job was to get ACMS working with
    Oracle. Now that they make SOME use of the DLM, I suppose they can run
    the VMS test. I am sure our USS people will do everything in their
    power to make ULTRIX-based systems beat anyone else's Unix-based
    systems in any benchmark. That includes supplying Oracle, Sybase,
    Informix... with whatever resources they can buy, borrow or coerce to
    make ULTRIX look good. Those are their metrics.

    Up to a point, I understand and accept this. Some people go over the
    line, however, and it is tough to swallow.

    I personally believe that Oracle is an untrustworthy partner at any
    level.
                                                    
    Barry


921.18. "I'll keep ya' posted!" by MARVA1::KIMMEL (Lorey Kimmel - EIS - DTN 341-2397) Thu May 09 1991 20:03 (8 lines)
    It may be a while, but I'll try to let everyone know what happens.  My
    gut feel is that they will choose Rally (because it's cheaper than
    SmartStar in the educational environment).
    
    Thanks for all the feedback.  I still wish we had something to give to
    customers.  
    
    Lorey
921.19. "Personal attacks are unbecoming 8^)" by MRKTNG::SILVERBERG (Mark Silverberg DTN 264-2269 TTB1-5/B3) Fri May 10 1991 15:13 (45 lines)
    re: .17
    
    Barry:  I think you are a little off the subject, and making personal
    attacks with false information is not good notes etiquette.
    
    1).  I have never been, nor do I ever want to be, an employee of
    Oracle.  I have never stated I was, either in any notes file or
    other discussion.
    
    2).  Attacking me might make you feel good, but why waste your attacks
    on someone as far down the chain as me?  Why not attack the hundreds of
    other employees working to develop a better relationship with Oracle?
    Why not attack Jack Smith, Win Hindle, Peter Smith, Bill Demmer, Dom
    LaCava, et al., who are sponsoring Digital's investments in this area?
    I've been personally attacked by other DBS employees over the same
    subject for the past 3 years...your cheap shots are nothing new.  What
    bothers me, however, is that after a year of constructive discussions
    with DBS, you again start the personal attacks on fellow employees.
    I hope that this is not the start of a new DBS smear campaign.
    
    3).  I spent a considerable amount of time trying to get Oracle to
    support more Digital product content in their offerings.  The ACMS
    effort was just one.  Why attack a company for attempting to use more
    of our products in their offering?
    
    4).  There are multiple levels of OLTP, and TPC-B is just one example
    of measuring a low level of TP technology, i.e., the database engine.
    The TP stands for Transaction Processing, doesn't it?  Every vendor,
    including Digital, who releases TPC-B numbers does so in the context of
    processing transactions on-line.  I agree TPC-A is a much higher and
    more robust level, but TPC-B is also within the TP environment.
    
    5).  I am the ULTRIX/SQL Marketing Manager in USS, and I am fighting to 
    help make it the leader in the UNIX RDBMS Market.  Your help is
    appreciated.  However, the top 5 3rd-party UNIX RDBMS vendors own about
    80% of the UNIX RDBMS market, and we are also working to ensure they
    run better on a Digital ULTRIX platform than on any other.  The task is
    virtually impossible, especially without support from folks such as
    yourself, but we are continuing the efforts against the odds and
    against our own internal groups.  If you feel that this is not the 
    right strategy, please feel free to let Dom LaCava know.  Don't waste
    your time attacking us worker bees.
    
    Regards,
    Mark
921.20. "no "smear"" by DATABS::DATABS::NEEDLEMAN (today nas/is, tomorrow...) Fri May 10 1991 16:12 (57 lines)
    re .19

    Mark, I DID make a mistake. The entry I read was in another data
    management conference, not here. Since the information that I read in
    the other note was false, I will be happy to place a correction
    attributed to you.                 

    I believe you may be off base on much of your reply.  However, I can
    assure you no personal attack was intended, and rereading my note, I do
    not believe a personal attack was made. 

    I did not state that YOU wrote the note claiming to be an employee. I
    wrote that "According to a prior entry in this conference, he is an
    ex-Oracle employee as well". As I wrote above, I did read this, and now
    I will have to search out the entry I read and place a note that you
    were not.

    As for your 2nd point, I have made my opinions known to several people
    in these organizations about how I feel about partnering with Oracle.
    As for cheap shots, do not lump me with another unknown that dates back
    3 years ago. I was in SWS then fighting to sell against Oracle. I have
    met you in meetings, and have never attacked you, just disagreed with
    you. I do not intend to change now. Your use of the word "smear" is
    highly charged, Mark.

    Your third point seems off base. I said that they were attempting to
    work with ACMS so that they could compete in the TPC-A type
    environment. I would love it if Oracle used all DEC software and
    offered the best Oracle price/performance on DEC gear. Trouble is, at
    the 2 Oracle marketing events I have attended, they have attacked VMS
    due to their own problems in locking. We also have numerous instances
    of Oracle trying to unhook Digital at accounts.
                                                
    As for point four, this is a semantic war. Gartner Group and others
    consider TPC-B a database stress test, and TPC-A the OLTP benchmark, as
    I stated. The definitions that Oracle implies in its advertisements try
    to lump the two together; that is not the case, and all our people
    should know how to sell against this tactic.

    As for point 5, you and I have identical missions, to make ULTRIX/SQL 
    the best Unix RDBMS. I have the additional issue of making sure people
    know Rdb/VMS is the best in the non-unix RDBMS market. I do not promote
    my competitors products at all.

    I agree that your task of making your competitors' offerings run best
    on ULTRIX is probably impossible. I attribute it greatly to the fact
    that the firms you are working with have no incentive to do so.
    Informix is partially owned by two other vendors, and Oracle's mission
    is well known. Their actions and disinformation campaigns speak for
    themselves. Ingres is also no longer independent, and HP is one of its
    funding sources. What little I do, I do to promote ULTRIX/SQL because
    that is how DEC can grow. 
                                            
    I will state again though, apologies for any perceived attack.

    Barry
    
921.21. "over & done" by MRKTNG::SILVERBERG (Mark Silverberg DTN 264-2269 TTB1-5/B3) Fri May 10 1991 17:54 (8 lines)
    Accepted, and agreed that I might have been a little strong, but old
    wounds heal slowly.  Keep up the passionate work; we will need it to win in
    the market.  The following reply will be a report on UNIX OLTP, and
    how "TP Light" is strong in the UNIX environment.  The reply is long,
    so be warned if you use certain notes utilities.
    
    Mark
      
921.22. "UNIX OLTP..WARNING*LONG*" by MRKTNG::SILVERBERG (Mark Silverberg DTN 264-2269 TTB1-5/B3) Fri May 10 1991 17:55 (816 lines)
Journal:   Patricia Seybold's Unix in the Office  August 1990 v5 n8 p1(17)
           * Full Text COPYRIGHT Seybold Office Computing Inc 1990.
------------------------------------------------------------------------------
Title:     Unix OLTP: getting ready for commercial prime time. (on-line
           transaction processing) (includes a related article on open
           struggles with remote procedure calls and one on OLTP heavyweights
           moving in to the Unix market)
Author:    Rymer, John R.
------------------------------------------------------------------------------
Descriptors..
Topic:     On-Line Transaction Processing
           UNIX
           Market Penetration
           Market Analysis
           Performance Specifications
           Industry Analysis.
Feature:   illustration
           table
           chart.
Caption:   The OLTP spectrum. (table)
           FT Unix extensions. (table)
           The X/Open OLTP model. (chart)

Record#:   09 447 421.
------------------------------------------------------------------------------
Full Text:

Unix OLTP

ONLINE TRANSACTION PROCESSING (OLTP) and Unix have arrived at the same point
at the same time.  OLTP is changing.  It is no longer confined to the
high-capacity, high-speed, dedicated systems used by airlines, banks, and
credit card companies.  OLTP has spread into applications that involve
smaller amounts of data and fewer users and transactions than the massive
traditional OLTP systems.

Unix plays a prominent role in these new "OLTP Light" applications.  The
reason is low costs.

As users scale OLTP downward, they find that Unix systems offer the right
performance at the right price for their new applications.  A systems
integrator reports delivering a big claims processing system for 20 percent
of the estimated cost of an IBM Customer Information Control System (CICS)
equivalent.  Another user, David Sherr, director of equity systems software
architectures for Shearson Lehman in New York, sought bids for hardware to
support an OLTP application with a $3 million budget.  The winning bid was
$1.7 million.  "We have a Unix strategy," he says.  "But even if we weren't
disposed to Unix, you just can't argue with economies like that."  Indeed,
the low cost of Unix systems makes it possible to build OLTP applications
that would be prohibitively expensive on proprietary systems.

Unix has also grown up to meet the challenges of OLTP applications.  Unix and
its file system were not designed to satisfy the special reliability and
control requirements of OLTP applications.  However, hardware vendors such as
Sequent and Pyramid provide extensions that make Unix reliable enough for
OLTP.  Relational DBMS (RDBMS) vendors fix the file system problem by
creating their own file systems, and provide transaction control mechanisms
like transaction queue management, transaction logging, and rollback/recovery
within their database management software.  The result is a solid foundation
for OLTP Light applications.

But Unix OLTP users want more--they want to run bigger OLTP applications on
Unix, and they want software independence.  Currently, most are dependent on
their DBMSs.  They have the freedom to change hardware vendors when they need
to, but if they change DBMS vendors, they sacrifice their applications.  The
best hope for software independence--what David Sherr calls "the Holy Grail
of open systems"--lies with X/Open Limited and the International Standards
Organization (ISO).  The two organizations are working in concert to create
standard protocols and interfaces for Unix OLTP in a distributed environment.
Unix vendors hope the new standards will allow them to compete with
proprietary OLTP systems such as IBM's CICS/MVS/VSAM to build reservation
systems as well as for OLTP Light applications.

The first products based on the X/Open and ISO work will start appearing this
fall, beginning a period of superheated development in Unix OLTP.  This
report examines the state of the art in Unix OLTP and the chances that Unix
will grow to become the predominant basis for transaction processing.

The New OLTP

The OLTP applications based on Unix today are different from reservation
systems or bank networks.  But they're no less vital to their users.  Take,
for example, the OLTP applications running at Tootsie Roll Incorporated
outside of Chicago.  Three or four order entry clerks (more are added to
handle the Halloween rush) complete about 70 transactions per hour using an
Oracle DBMS running on a Sequent processor.  Tootsie Roll's main criterion
was Oracle's reporting and analysis tools.  "We used to have a big cart that
delivered batches of thick IBM reports to us every day," says President Ellen
Gordon.  "But, to compete, we needed more immediate information."  Now, when
decision-makers need information, they either get it themselves or have a
quick 4GL routine written by MIS.

Access to information -- its transaction information -- is what's most important
to Tootsie Roll.  The company doesn't require nonstop availability of the
transaction system.  Tootsie Roll won't lose millions of dollars a minute if
its system goes down.  This is a fundamental difference between applications
like Tootsie Roll's and reservation or banking networks.

Are systems like Tootsie Roll's OLTP?  Veterans of mainframe OLTP say these
are just database--primarily decision support--applications.  Users like
Tootsie Roll, however, say OLTP isn't useful without effective decision
support.  These business users live by former Citicorp chairman Walter
Wriston's insight into competing in an information economy: Information about
money, said Wriston, is more valuable than the money itself.  The same is
true of goods and services.  In embracing this view, users like Tootsie Roll
are changing the way we view OLTP, and changing it in a way that favors Unix.

WHAT IS A TRANSACTION?  OLTP starts with the transaction.  A retailer sells a
red dress to a customer.  A bank customer withdraws $100 from a checking
account using a teller machine.  A furniture maker orders a load of oak from
a supplier.

Each of these interactions is different, yet each has two things in common.
First, each changes the state of the participating entity's information about
itself.  The retailer's revenues go up.  The bank customer's checking-account
balance goes down.  An amount is deducted from the furniture manufacturer's
budget account for lumber supplies.  And so on.

Second, each transaction has two states: It is either in process or complete.
A transaction is complete when all parties can keep their parts of the
bargain.  For example, the retailer won't sell the red dress if the customer
doesn't have enough money to pay for it.  In OLTP, completed transactions are
said to have been "committed."  If a transaction can't be completed, all
parties must be able to cancel it and return to the conditions prevailing
before the transaction began.  The retailer, for example, won't record the
dress sale as revenue.  In OLTP, this process is called "aborting a
transaction."

ONLINE TRANSACTIONS.  OLTP systems don't change these fundamental
characteristics of transactions.  Rather, they seek to reduce the time needed
to complete transactions and the time between the completion of a transaction
and its reflection in an organization's data about its business--its
inventory, revenues, costs, etc.  The teller machine saves the bank customer
from standing in line to cash a check.  The sale of the red dress may be
immediately posted against inventory and revenue files, giving the retailer
immediate information from the field.

An OLTP system collects information about transactions and posts the changes
they dictate to the organization's information by updating a shared database or
file.  The nature of transactions imposes four basic requirements on all OLTP
systems.  The four, called the ACID properties, are: atomicity, consistency,
isolation, and durability.

Atomicity.  Atomicity recognizes that a transaction involves two or more
discrete pieces of information.  All pieces of information in a transaction
must be committed, or none are.  In the red-dress example, the sale of the
dress involves, at minimum, a purchase price and an inventory adjustment.  If
the customer buys the dress, the purchase price is added to a revenue file,
and a unit representing the dress is deducted from an inventory file.  Both
files must be changed to avoid inconsistent information.  If one can't be
changed, neither is.

Consistency.  Consistency requires that a transaction either create a new and
valid state of the organization's shared data or, if the transaction aborts,
return the data to its previous state.  If, at the last minute, the customer
can't pay for the red dress, the retailer must be able to back out of the
sale, returning the revenue and inventory files to their pre-sale conditions.

Isolation.  While a transaction is in process, its details must be isolated
from other transactions.  Only when a transaction is committed can its
effects on shared data be shown to other transactions.  In the red-dress
example, the customer may run down to the teller machine on the corner to
withdraw enough money to pay for the dress.  While she does, other
transactions must proceed on the assumption that the dress hasn't been sold.

Durability.  Durability means that an OLTP system saves a committed
transaction's changes to shared data in the event of a subsequent failure.
If the retail store is hit by a blackout a minute after the customer buys the
red dress, that sale must be reflected in the appropriate files when the
system recovers.
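
A minimal sketch of these commit/abort semantics, using Python's sqlite3
module as a stand-in resource manager (the table names and figures are
hypothetical, following the red-dress example):

    # Sketch: atomicity, consistency, and abort with one local resource
    # manager. sqlite3 stands in; names and values are hypothetical.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE revenue   (total   INTEGER);
        CREATE TABLE inventory (dresses INTEGER);
        INSERT INTO revenue   VALUES (0);
        INSERT INTO inventory VALUES (10);
    """)

    def sell_dress(price, customer_can_pay):
        try:
            with db:  # transaction: commits on success, rolls back on error
                db.execute("UPDATE revenue   SET total   = total + ?", (price,))
                db.execute("UPDATE inventory SET dresses = dresses - 1")
                if not customer_can_pay:       # abort at the last minute
                    raise RuntimeError("customer cannot pay")
        except RuntimeError:
            pass  # both updates are rolled back together -- atomicity

    sell_dress(75, customer_can_pay=False)  # aborted: nothing changes
    sell_dress(75, customer_can_pay=True)   # committed: both files change
    print(db.execute("SELECT total   FROM revenue").fetchone())    # (75,)
    print(db.execute("SELECT dresses FROM inventory").fetchone())  # (9,)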

The ACID requirements say nothing about OLTP system availability.  A system's
availability depends on the cost of lost time to conduct transactions.
Reservation or telephone-switching systems require nonstop availability,
called 24 x 7 (24 hours, 7 days a week) by OLTP specialists.  The reason is
that the cost of business lost during a reservation system outage justifies
the cost of building systems that tolerate faults extremely well.  For other
businesses, however, short outages may be tolerable.  Many users can afford
lower guaranteed system availability.  Users and vendors alike tend to view
OLTP applications as fitting into a spectrum, as depicted in Illustration 1.

Unix: From Laggard to Leader

Five years ago, the phrase "Unix OLTP" was a contradiction in terms.  There
was no Unix OLTP to speak of.  Today, Unix is hot as an OLTP technology.
What has happened to thrust Unix into the limelight as an OLTP technology?
There are several factors, both technical and nontechnical.

USERS WANT IT.  The most important reason for the spread of Unix OLTP is the
trend toward pushing OLTP applications closer and closer to the customer.
For example, in a bank, that means installing teller machines that capture
transactions directly from the customer's fingertips.  In a retail store, it
means capturing transactions at smart cash registers that are linked to
servers running inventory and pricing databases.  In the furniture
manufacturer example, it means using electronic data interchange to swiftly
process orders.  The closer OLTP systems are to the customer, the quicker
transactions can be processed, making a treasure trove of information about
trends in buying, costs, consumer preferences--you name it--available to
business decision-makers.

If a bank processes thousands of transactions a day and each transaction is
worth an average of $100, it's not hard to justify big expenditures on OLTP
systems like teller machines.  But, as daily transaction rates and
per-transaction values fall, traditional OLTP systems become too expensive.
The new in-store systems planned by companies such as K-mart and JC Penney
are small Unix servers, not big minis.  Their system cost is more in line
with smaller transaction volumes and values.

Low costs are part of the story.  A growing number of users are moving to
Unix as a strategy to adopt open systems.  "We moved to Unix OLTP as part of
a larger commitment to Unix for our business," says George Caneda, a systems
development manager at B.E.A. Associates, a Wall Street money management firm
with $9 billion under management.  Telephone companies, which constitute a
large market, require Unix System V compliance in their OLTP systems, as do a
growing number of federal government agencies.

The phone companies, the federal government, and the number of corporate
users committing to Unix add up to a substantial market.  No one knows
exactly how big it is, but, given that OLTP systems as a whole are believed
to generate $50 billion a year in revenues, even a small share of the total
is lucrative.  High-end Unix performance specialists like Sequent and Pyramid
are out to capture this market.  Later this year, they'll be joined by Tandem
and Stratus, the most successful vendors of proprietary fault-tolerant (24 X
7) systems.  Unisys, Hewlett-Packard, IBM, and Digital won't be far behind.

UNIX KERNEL FIXES.  A big part of the reason Unix OLTP has caught on is that
vendors have fixed Unix's inherent weaknesses.  The proof of their success is
how little attention Unix OLTP users now pay to their operating system
software.  Most have few, if any, problems.

Vendors such as Sequent, Pyramid, and, more recently, Stratus and Tandem,
have each had to solve the same basic problems in their Unix versions.
They've done so without compromising the standard Unix interface.  (See
Illustration 2.)  However, the Unix kernels provided by Sequent, Pyramid, and
others in this field are no longer the standard System V kernel.
Interestingly, many of the common fixes undertaken by these vendors will be
incorporated into Unix System V Release 4, which will allow Unix OLTP vendors
to bring their kernels into closer compliance with the standard kernel over
time.

The fixes fall into four major areas: system robustness, data integrity, file
system, and memory management.

System Robustness.  Unix has a reputation for being unreliable that is no
longer deserved.  It used to be that system errors, known as panics, could
hang Unix.  The default remedy: reboot, and often.  Management processes that
clean out system log files and kill off obsolete daemons eliminate the most
troublesome panics.  Unix System V Release 4 incorporates these fixes in a
standard way.

Data Integrity.  Standard Unix processes I/O by writing first to system
buffers and then flushing the buffers to disk.  Applications run using data
in the buffers.  This design raises a risk to data integrity.  A power outage
can blow away data in buffers before it is written to disk.  The simple
solution is to use Unix's synchronous write-through feature to bypass Unix's
buffers and go directly to disk.  This improves reliability, but at the
sacrifice of performance.

The popular approach is to let the DBMS take care of the problem.  The DBMSs
institute their own buffer schemes, and all have begun using parallel
management of multiple queues to preserve data integrity without compromising
performance.
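
Both write paths, plus the buffer-then-flush compromise (an assumption of
this sketch, not a claim from the article), can be shown in a few lines of
Python on a Unix system; the file names are arbitrary:

    # Sketch: buffered writes vs. synchronous write-through on Unix.
    import os

    record = b"TXN 0042 COMMIT\n"

    # Buffered: write() may return while data still sits in kernel
    # buffers, so a power outage here can lose the record.
    fd = os.open("/tmp/log.buffered", os.O_WRONLY | os.O_CREAT, 0o600)
    os.write(fd, record)
    os.close(fd)

    # Write-through: O_SYNC makes write() wait until the data is on
    # disk -- durable, but every call pays the disk latency.
    fd = os.open("/tmp/log.sync", os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
    os.write(fd, record)
    os.close(fd)

    # A common compromise: buffer freely, then flush once at commit time.
    fd = os.open("/tmp/log.commit", os.O_WRONLY | os.O_CREAT, 0o600)
    os.write(fd, record)
    os.fsync(fd)  # force this file's buffers to disk now
    os.close(fd)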

File System.  The standard Unix file system is tuned to access lots of small
files--exactly the opposite of what's needed by OLTP applications.  Most OLTP
applications demand fast access to large files at random.  Indexed file
schemes like ISAM and IBM's VSAM are very successful in meeting this need.

The simple solution is to use a more efficient, tunable file system, such as
the Fast File System (FFS) included in Berkeley Unix, or to provide
access to ISAM.  The RDBMS vendors start from scratch, creating their own
file systems using Unix's "raw I/O" feature.  Raw I/O allows an application
to take direct control of an I/O device, in this case, a disk.  The raw I/O
approach adds a proprietary element to an open system.  Dharma Systems
Incorporated (Hollis, New Hampshire) is the one vendor we've found that
rejects the raw I/O approach.  The reasons: Users have access to standard
Unix system admin tools from within Dharma, and the Dharma system is more
easily ported across Unix versions.

Memory Management.  Unix's standard memory management is inappropriate for
OLTP.  Unix assigns one process to each user.  The operating system must load
new memory page tables, process control areas, and user registers--a
procedure called a context switch--every time one user session ends and
another begins.  Context switching eats up a lot of system overhead and
memory, and can limit the number of users on a Unix OLTP system to a couple
of hundred.  Supporting more users requires memory management or a subsystem
that can field user requests for services, store them in queues, and match
them to server processes as those become available.

Unix OLTP vendors and users deal with this problem in a wide variety of ways,
from writing a daemon to field and handle multiple user requests for
individual server processes to writing their own transaction-control
software--called a transaction monitor--to service multiple user tasks within
single Unix processes.
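
The first of those approaches -- one process fielding many user sessions,
with no context switch per user -- can be sketched with Python's selectors
module (the port number and the trivial echo "service" are invented):

    # Sketch: one process multiplexing many user sessions via select-style
    # I/O, instead of one Unix process per user. Runs until interrupted.
    import selectors, socket

    sel = selectors.DefaultSelector()
    listener = socket.socket()
    listener.bind(("localhost", 7001))
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ)

    while True:
        for key, _ in sel.select():
            sock = key.fileobj
            if sock is listener:                  # a new user session
                conn, _ = listener.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:                                 # a service request
                data = sock.recv(1024)
                if data:
                    sock.sendall(b"ACK " + data)  # stand-in "service"
                else:                             # session ended
                    sel.unregister(sock)
                    sock.close()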

24 x 7 AND SMP.  Solving the fundamental limitations of off-the-tape Unix is
only part of what's necessary to support OLTP applications.  High
availability OLTP applications require disk-mirroring and/or automatic
process recovery.  In disk-mirroring, every disk I/O is written to two disks,
preserving a spare copy of the data.  Automatic process recovery shifts
processing to a hot standby machine in a "cluster" configuration.

The next level up on the OLTP Spectrum (see Illustration 1) is fault-tolerant
systems, which duplex every system component, provide for online component
replacement, and support sophisticated diagnostic and configuration
management capabilities.  Applications in this range are called "24 x 7"
because they, theoretically, will never go down.

Lastly, there's a big push on among Unix OLTP vendors to support symmetrical
multiprocessor (SMP) hardware designs.  The Unix kernel and RDBMS software
both need special hardware extensions to support these configurations.  SMP
promises to allow Unix systems to grow without swapping out processors.  In
the future, vendors will also boost performance by using multiprocessors to
process instructions, queries, service requests, etc. in parallel streams.

Proprietary OLTP systems like Stratus's VOS and Tandem's Guardian don't have
these difficulties.  They were written to support OLTP, and they've been
improved during years of experience with critical applications.  The
proprietary solutions cost more than Unix OLTP solutions.  Vendors say the
years of tuning and optimization that have gone into the proprietary systems
are the reasons for their higher costs.  Prices also reflect what the market
will bear, given limited competition.

FOCUS ON TP MONITORS.  The typical Unix OLTP system today comprises a Unix
hot box, a version of the Unix kernel that's been extended to support
redundant disks, multiprocessing, and other performance and/or availability
features, and an RDBMS.  The RDBMS usually provides support for the
atomicity, consistency, isolation, and durability (the ACID properties)
required by OLTP applications.

The central role of RDBMSs in Unix OLTP makes a lot of people uncomfortable.
Users committed to open systems strategies are keenly aware that their
dependence on an RDBMS limits the openness of their systems.  RDBMS software
locks in users with a variety of proprietary elements, from their custom file
systems to transaction management features like queue management to
performance features like stored procedures.  If I use Sybase, for example, I
write my applications in a proprietary version of SQL, Transact SQL, and use
a private service applications programming interface (API).  Even if other
RDBMS vendors support the same functions Sybase does, they usually implement
these functions in different ways.  One way to increase the openness of Unix
OLTP systems is to separate transaction management services from data
management facilities, making transaction management services available
through a separate API.

OLTP professionals trained in CICS don't like assigning transaction
management to an RDBMS.  They'd rather have a separate piece of software--a
transaction monitor--to coordinate the execution of transactions.  A
transaction monitor is responsible for managing the processes, queues, and
system states involved in processing and committing or aborting online
transactions.  Transaction monitors generally have the following components:

Name Service.  The name service in a transaction monitor stores the names and
addresses of data management and I/O services.  When an application requests a
service, the name service fields the request and delivers it appropriately.

Queue Manager.  If the name service fields a request for a service that is
currently occupied in another transaction, it places the request in a queue.
The effect of queueing is to bypass Unix's requirement that each service
request have a dedicated process.  Transaction managers allow multiple
applications to use a single process, and can dramatically raise the number
of users supported on a system.

Scheduling Service.  The scheduling service in a transaction monitor
schedules processes to handle competing requests for transaction services.

Communications Service.  The communications service allows the transaction
monitor to deliver service requests to appropriate services.

Log Service.  As it coordinates the execution of transactions, the
transaction monitor stores information about the transaction's progress and
the state of the systems involved in executing it in a log.  The information
must be current and detailed enough to roll back the transaction if
necessary.
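
How these components fit together can be sketched as a toy monitor with a
name service, per-service queues, a round-robin scheduler, and a log; every
class and service name below is invented for illustration:

    # Toy transaction monitor: name service, queue manager, scheduler, log.
    from collections import deque

    class ToyMonitor:
        def __init__(self):
            self.names  = {}   # name service: service name -> handler
            self.queues = {}   # queue manager: service name -> pending work
            self.log    = []   # log service: progress records

        def register(self, name, handler):
            self.names[name]  = handler
            self.queues[name] = deque()

        def request(self, name, payload):
            # Queue the request instead of dedicating a process to it.
            self.queues[name].append(payload)
            self.log.append(("queued", name, payload))

        def run(self):
            # Scheduling service: drain each queue in turn (round-robin).
            for name, queue in self.queues.items():
                while queue:
                    result = self.names[name](queue.popleft())
                    self.log.append(("done", name, result))

    tm = ToyMonitor()
    tm.register("debit_inventory", lambda n: f"dresses -{n}")
    tm.request("debit_inventory", 1)
    tm.request("debit_inventory", 1)
    tm.run()
    print(tm.log)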

In our red-dress example, a transaction monitor would identify the steps
needed to complete the transaction as a discrete unit of work.  It would
track the correct completion of data-gathering and the updating of shared
data.  It would store in a log reference information about the steps of the
transaction and the state of the systems involved.  The information in the
transaction manager's log would then be used to roll back the transaction if
aborting it became necessary.

Transaction monitors offer another important benefit to Unix OLTP: support
for more users.  Currently, most Unix OLTP systems top out at about 200
users.  A transaction monitor, by scheduling and managing user requests for
transaction services more efficiently than current RDBMSs do, could raise the
user threshold of Unix to 1,000 or more.  Independence Technologies
Incorporated (Fremont, California), a systems integrator-turned-Unix OLTP
vendor, used a version of AT&T's Tuxedo transaction monitor for Unix to build
a health care claims system that supports 3,000 users.

In this case, the proprietary-system users and Unix users agree: Unix OLTP
will benefit if the transaction management is separated from the RDBMS.

Unix has never had a standard transaction monitor--RDBMSs have filled this
role, or users have written their own monitors.  Users and vendors alike
agree that transaction monitors are necessary to make Unix a serious
contender in OLTP.  But what's the best way to fill the need for better
transaction monitor software in a standard--even better, an open--way?  This
is, after all, Unix OLTP.  It should be open.

THE X/OPEN MODEL.  If Unix OLTP is going to be open, applications must be
able to call on databases or transaction monitors through a set of common
APIs.  What's needed is a set of Posix-like interfaces for OLTP.  X/Open
Limited is defining a series of APIs to allow, for example, an OLTP
application to execute its components on an Oracle RDBMS or an Ingres
RDBMS using the same calls.  A different API would allow an application to
call transaction monitors from different vendors using common commands.

Like the promise of Posix itself, the X/Open OLTP model will take years to
become a working reality.  Nevertheless, it's crucial to the development of
open OLTP systems.

X/Open is defining its APIs by breaking OLTP into three functional
components--resource managers, transaction managers, and applications--and
defining open APIs to each component.  (See Illustration 3.)

Resource Manager.  Resource managers include RDBMSs, file systems, and print
services.  A resource manager is responsible for managing a shared resource
according to the ACID properties.  Transaction managers and applications work
with resource managers by asking them to perform services on their behalf.
Applications and transaction managers don't have to know how the requested
service is performed.  As long as all elements support the same service
interface, users should be able to plug in new components as they need to.

X/Open has circulated for industry comment a formal proposal for an
interface--the XA interface--to resource managers.  (See Illustration 4.)
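
In spirit, an interface like XA gives every resource manager the same set of
transaction verbs.  A Python sketch of the idea (the method names below are
illustrative only; the real XA interface is a C-level specification):

    # Sketch of a resource manager behind an XA-style interface.
    # Method names are illustrative, not the actual XA calls.
    class ResourceManager:
        """Manages one shared resource according to the ACID properties."""
        def __init__(self, name):
            self.name = name
            self.pending = {}          # xid -> uncommitted work

        def start(self, xid):          # join a global transaction
            self.pending[xid] = []

        def work(self, xid, op):       # isolated until commit
            self.pending[xid].append(op)

        def prepare(self, xid):        # phase 1: promise to commit
            return xid in self.pending

        def commit(self, xid):         # phase 2: make the work durable
            return self.pending.pop(xid)

        def rollback(self, xid):       # undo; return to the prior state
            self.pending.pop(xid, None)

    rm = ResourceManager("inventory")
    rm.start("xid-1")
    rm.work("xid-1", "dresses -= 1")
    if rm.prepare("xid-1"):
        rm.commit("xid-1")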

Transaction Manager.  A transaction manager provides the transaction
management services that today's transaction monitors provide.  It also gives
applications access to communications services.  Transaction managers use the
XA interface to obtain the services of resource managers to complete a
transaction. X/Open has reserved a place in its model for the emerging ISO
Distributed Transaction Processing (DTP) protocol as the interface between
two transaction monitors cooperating in a single transaction.

Application.  Applications define transactions as having a beginning, a
sequence of operations, and an end.  The transaction manager coordinates and
tracks the execution by resource managers of the operations defined in an
application.  Applications ask transaction managers to complete transactions
on their behalf through a still-to-be-specified interface.

In our red-dress example, an application defines the operations that must
occur during any sale to a customer, from the collection of data at the cash
register to the updating of the inventory and revenue files on a server.  A
transaction manager directs the steps in the process, making sure that all of
them are performed properly.  A resource manager updates the inventory and
revenue files at the direction of the transaction manager.

Distributed OLTP.  The X/Open model assumes its interfaces must allow
different parts of a transaction to be executed on different, heterogeneous
systems linked by a network.  This is known as distributed transaction
processing.

Distributed transaction processing is the future of OLTP.  By allowing
transactions to be executed across a network, users can provide additional
capacity to existing applications without swapping in new systems, and can
leverage older OLTP systems in new applications.  For instance, the store in
our red-dress example might want to add a dynamic pricing application to its
systems that allows a regional office to monitor how merchandise is moving
and to transmit price changes to stores to help stimulate demand.  If the
retailer's pricing database is on an IBM mainframe and its in-store systems
are based on Unix, a distributed OLTP system might allow it to implement the
new application without rewriting its pricing database.

The X/Open model incorporates two concepts to accomplish distributed
transactions.  First, it distinguishes between local transactions and global
transactions.  A local transaction is a set of operations that a resource
manager executes as a local unit of work under its own control.  A global
transaction is a set of operations that is under the control of transaction
managers and that includes local transactions.

Second, the X/Open model assumes that the transaction manager and resource
managers in an OLTP system use a two-phase commit (2PC) protocol.  A 2PC
protocol builds an extra step into the process of committing a transaction.
It allows the transaction manager to tell all involved systems to prepare to
commit their parts of the transaction.  If all of the involved systems
respond to the "prepare to commit" message by saying they are ready to
commit, the transaction manager then issues a Commit command.  (See
Illustration 5.)
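
The sequence condenses into a short sketch; the resource managers here are
toys invented for illustration:

    # Sketch of the two-phase commit (2PC) protocol described above.
    class ToyRM:
        def __init__(self, name, will_prepare=True):
            self.name, self.will_prepare = name, will_prepare
            self.state = "working"

        def prepare(self):             # phase 1: "prepare to commit"
            self.state = "prepared" if self.will_prepare else "aborted"
            return self.state == "prepared"

        def commit(self):              # phase 2
            self.state = "committed"

        def rollback(self):
            self.state = "rolled back"

    def two_phase_commit(rms):
        if all(rm.prepare() for rm in rms):   # every RM answers "ready"
            for rm in rms:
                rm.commit()
            return True
        for rm in rms:                        # any "no" aborts everywhere
            rm.rollback()
        return False

    rms = [ToyRM("inventory"), ToyRM("revenue")]
    print(two_phase_commit(rms), [rm.state for rm in rms])
    # True ['committed', 'committed']

    rms = [ToyRM("inventory"), ToyRM("revenue", will_prepare=False)]
    print(two_phase_commit(rms), [rm.state for rm in rms])
    # False ['rolled back', 'rolled back']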

The X/Open transaction processing model gives OLTP vendors an important
target.  X/Open has a long road to travel before its work on OLTP is
complete.  The X/Open OLTP group released an interim model a year ago.  We
expect it to introduce another interim version soon that will move
communications management from the transaction manager to a new
communications manager.  (See "X/Open Struggles with RPCs," page 5.)  Beyond
fine-tuning the model, X/Open faces other issues as it seeks to define open
distributed OLTP.  System management, for example, is a gaping hole.  No
standards organization or vendor has crafted a unified way to administer and
manage distributed networks.  And there are other issues X/Open hasn't
addressed at all.  For example, what is the standard way to identify a set of
distributed operations as a single transaction?  And how can sophisticated
transaction structures, such as nested transactions, be performed in a
standard way across distributed systems?

In the meantime, the Unix OLTP vendors are pushing to expand beyond their
base in OLTP Light applications by offering transaction monitors and a
variety of performance enhancements.  IBM, Unisys, and systems vendors are
building integrated OLTP environments for Unix.  (See "The OLTP Heavies Weigh
In," page 14.)  All but two of the major vendors are proceeding with
commitments to abide by the X/Open model and its interfaces.  However, it
remains to be seen how quickly X/Open's work will be implemented across a
variety of products.  The X/Open dissenters--VISystems Incorporated (Dallas),
which provides a CICS-like environment under Unix, and Dharma
Systems--believe the X/Open model is impractical because it doesn't address
the need to optimize the components of an OLTP system.

What Unix Offers Now

Users building Unix OLTP solutions today have three basic choices: They can
rely on the facilities of a general purpose RDBMS, they can implement one of
several independent transaction monitors, or they can build atop an optimized
OLTP operating environment that includes a database built for OLTP and a
transaction monitor.

There has been much activity in each of these three areas as Unix OLTP
vendors seek better performance and support for bigger systems than are
feasible today.

FROM DSS TO OLTP.  The RDBMS cut its teeth in information processing as a
decision support engine.  As we've seen, users like Tootsie Roll have pushed
the RDBMS vendors to support OLTP in addition to easy data access.  To keep
pace with demands for bigger, faster Unix OLTP systems, Oracle, Ingres,
Informix, and Sybase have each announced "OLTP releases" during the last
year.  Among these vendors, only Sybase was designed originally to support
OLTP.

The OLTP releases of the major RDBMSs are either retuned or wholly redesigned
to better support OLTP's requirements and to allow their RDBMS engines
to function as resource managers in heterogeneous OLTP environments under the
control of transaction monitors.  This report covers the highlights of these
efforts.

Tuning the Engines.  Each of the major RDBMS vendors has tuned its engine in
essentially the same ways.  They all allow databases to be locked by the row
or page during a transaction, rather than requiring entire tables to be
locked.  And they all employ the same techniques to reduce the amount of I/O
required to service transactions.
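
The payoff of the finer granularity is concurrency.  The toy lock manager
below (a conceptual sketch in C, not any vendor's actual implementation)
shows two transactions colliding under table-level locking but proceeding in
parallel when each locks only its own row.

    #include <stdio.h>
    #include <string.h>

    /* Toy lock table: a real RDBMS would hash lock blocks and queue
       waiters; here a conflicting request is simply refused. */
    #define MAX_LOCKS 16
    static char held[MAX_LOCKS][32];
    static int  nheld = 0;

    /* Returns 1 if the named resource was locked, 0 on conflict. */
    static int acquire(const char *resource)
    {
        int i;
        for (i = 0; i < nheld; i++)
            if (strcmp(held[i], resource) == 0)
                return 0;               /* another transaction holds it */
        strcpy(held[nheld++], resource);
        return 1;
    }

    int main(void)
    {
        /* Table-level locks: both transactions want all of "orders". */
        int t1 = acquire("orders");
        int t2 = acquire("orders");
        printf("table locking: txn1=%d txn2=%d\n", t1, t2);

        nheld = 0;                      /* reset for the second case */

        /* Row-level locks: different rows, so both requests succeed. */
        t1 = acquire("orders row 17");
        t2 = acquire("orders row 99");
        printf("row locking:   txn1=%d txn2=%d\n", t1, t2);
        return 0;
    }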

Architectural Adjustments.  Each of the major RDBMS vendors has also changed
the architecture of its software to support more uses and higher throughput
in OLTP applications.  There are two possible approaches.  The first is to
build a multithreaded OLTP operating system on top of Unix.  The second
approach is to layer a transaction monitor between a conventional RDBMS and
applications to give the appearance of multithreading.

To speed the processing of transactions, RDBMSs can simultaneously process
two or more transactions by using multiple server processes.  This approach,
called the multiclient-multiserver, or multithreaded, architecture, processes
transactions in parallel, usually on multiprocessor hardware.  An alternative
to the multiclient-multiserver architecture is integrating a transaction
monitor and a multiclient-single server RDBMS.  In that configuration, the
transaction monitor can achieve the same benefits as a multithreaded RDBMS
engine.

Sybase and Ingres both have multiclient-multiserver architectures.  Sybase
added this feature with a virtual server architecture that allows an SQL
Server process to run on two or more of the processors in a multiprocessing
system.
Ingres provides multiserver support with a multithreaded RDBMS kernel.  Both
of these database vendors provide their own transaction management software.

The Multiclient Server option of Oracle Release 6, due out this fall, and
Informix Online, an OLTP version of Informix, both use the transaction
monitor approach.

Oracle and Informix both plan to rely on their own transaction managers as
well as on AT&T's Tuxedo transaction monitor technology.  Both vendors are
working with systems vendors such as Unisys and Hewlett-Packard to provide
support for Tuxedo within their DBMS software.

Optimization vs. Openness.  It is too early to tell which pair of vendors has
the architectural advantage.  Both the multiserver architecture and the
transaction monitor approaches will yield systems that can support larger
numbers of users.  And so users of all these major RDBMSs will be able to do
more with their Unix OLTP systems.  We believe the winner in the competition
for ever-better performance will be the vendor that can best optimize its
transaction management components and RDBMS kernel, while leveraging
standards.

We believe Sybase, Ingres, and Dharma Systems may have a short-term advantage
in optimizing their systems.  Each of these vendors provides what we call
OLTP operating environments--data management and transaction management
software created, optimized, and controlled by one vendor.  Dharma's TPE*
combines a multithreaded SQL RDBMS kernel and a Network Transaction Kernel
based on Hewlett-Packard/Apollo's Network Computing System (NCS) RPC that
manages transactions.

In contrast, Oracle and Informix appear more interested in moving to
available standards by embracing Tuxedo.  In the short term, both vendors may
be at a disadvantage in optimizing their systems because Tuxedo is a
source-level product over which neither has total control.  Systems vendors
will actually be handling the integration of the DBMS and transaction
management software.  We expect these vendors to need time to learn what it
takes to make the combination of Tuxedo and their RDBMSs really hum.

The wild card in the database market will be user demand for openness.  By
integrating AT&T's Tuxedo, Oracle and Informix will be early adopters of what
looks like an important standard in Unix OLTP.  Tuxedo is the first product
to support X/Open's XA interface.  As RDBMS vendors implement support for the
XA interface in their products, applications written to Tuxedo should be able
to access heterogeneous DBMSs without requiring modification.
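
At bottom, XA is a C calling convention: each resource manager publishes a
switch of entry points that the transaction manager calls to open the RM and
to start, end, prepare, commit, and roll back its branch of a global
transaction.  The declaration below is a simplified sketch modeled on
X/Open's interim specification; because the spec is still moving, treat the
exact names and signatures as illustrative rather than definitive.

    /* Simplified sketch of an XA-style resource-manager switch, modeled
       on X/Open's interim specification.  Illustrative only. */

    typedef struct {
        long formatID;              /* identifies the TM's XID format */
        long gtrid_length;          /* global transaction ID length   */
        long bqual_length;          /* branch qualifier length        */
        char data[128];
    } XID;

    struct xa_switch_t {
        char name[32];              /* resource manager name */
        long flags;
        long version;
        int (*xa_open_entry)(char *info, int rmid, long flags);
        int (*xa_close_entry)(char *info, int rmid, long flags);
        int (*xa_start_entry)(XID *xid, int rmid, long flags);
        int (*xa_end_entry)(XID *xid, int rmid, long flags);
        int (*xa_prepare_entry)(XID *xid, int rmid, long flags);  /* phase 1 */
        int (*xa_commit_entry)(XID *xid, int rmid, long flags);   /* phase 2 */
        int (*xa_rollback_entry)(XID *xid, int rmid, long flags);
        int (*xa_recover_entry)(XID *xids, long count, int rmid, long flags);
    };

A transaction manager such as Tuxedo drives two-phase commit by walking the
switch of every resource manager a transaction touched: xa_prepare_entry for
each, then xa_commit_entry or xa_rollback_entry depending on the votes.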

Users talk about openness--witness David Sherr's earlier comment about the
Holy Grail of software independence--but we doubt many commercial users will
move to XA-based open OLTP architectures soon.  There are very practical
reasons for this conclusion.  First, none of the big RDBMS vendors are likely
to offer XA interfaces until early 1991.  Second, users say they are
reluctant to support more than one RDBMS until they see a greater payoff for
the additional effort required.

Still, both Sybase and Ingres are hedging their bets by moving to open their
own APIs and committing to support X/Open's XA interface at a future date.

TP MONITOR OPTIONS.  And what of the transaction monitors the industry agrees
are so critical to the future of Unix OLTP?  At this point, users have two
major options: a version of AT&T's Tuxedo Release 4.0 or VIS/TP, a CICS-like
transaction monitor for Unix from VISystems.

Tuxedo and VIS/TP provide basic transaction-monitoring capabilities.  But
they are targeted at different groups of users.  VIS/TP offers IBM CICS shops
a practical approach to Unix OLTP.  VISystems provides tools to port CICS
applications to run under its own transaction system, and consulting services
to optimize ported applications.

The one question about VIS/TP is its future position in IBM's Unix OLTP
strategy.  VISystems has a cooperative marketing agreement with IBM.  But IBM
will port a future version of CICS under Unix, and VISystems says it is not
working with IBM on the project.

AT&T's Tuxedo, in contrast, doesn't provide tools to migrate proprietary OLTP
applications to Unix.  AT&T's strategy is to provide interoperability
facilities, not migration facilities.  Tuxedo is a new API aimed at users who
are willing to start with a clean slate to develop applications.  There are
very few third-party applications for Tuxedo 4.0 at this point.
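
For a flavor of that API, here is a minimal client in the style of Tuxedo
4.0's ATMI calls.  This is a hedged sketch: the DEBIT service name is our own
invention, and exact calling sequences should be checked against AT&T's
documentation.

    /* Minimal Tuxedo ATMI client (hedged sketch of the Release 4.0
       API; the DEBIT service is hypothetical). */

    #include <stdio.h>
    #include <string.h>
    #include <atmi.h>       /* tpinit, tpbegin, tpcall, tpcommit, ... */

    int main(void)
    {
        char *buf;
        long len;

        if (tpinit(NULL) == -1)            /* join the application */
            return 1;

        if ((buf = tpalloc("STRING", NULL, 64)) == NULL) {
            tpterm();
            return 1;
        }
        strcpy(buf, "account 1234 amount 50");

        if (tpbegin(30, 0) == -1) {        /* start a global transaction */
            tpfree(buf);
            tpterm();
            return 1;
        }

        if (tpcall("DEBIT", buf, 0, &buf, &len, 0) == -1)
            tpabort(0);                    /* service failed: roll back   */
        else
            tpcommit(0);                   /* 2PC across every RM touched */

        tpfree(buf);
        tpterm();                          /* leave the application */
        return 0;
    }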

AT&T's "Wanna Be" Standard.  AT&T announced Tuxedo 4.0 in February 1989 and
has been pushing it hard as the standard Unix transaction monitor.  Some
vendors are taking the bait.  At press time, Unisys, Amdahl, AT&T Computer
Systems (no surprise there), and Sequent had licensed Tuxedo technology.
Oracle and Informix have announced they will support Tuxedo.  Hewlett-Packard
appeared ready to license an enhanced version of Tuxedo from Independence
Technologies Incorporated (ITI).  ITI has added software layers to Tuxedo
that make it easier to support additional communications and
industry-specific protocols in the product.  The first licensee versions of
Tuxedo should start appearing this fall.

Tuxedo appears to be catching on because it works, because the market is
ready to consider a Unix-based transaction manager, and because it implements
emerging OLTP standards.  Licensees like Release 4.0's client-server
architecture and its support for distributed transaction processing.  (See
Illustration 6.)  There are even second sources for Tuxedo software (ITI).
Licensees also like the opportunities AT&T has given them to add value to the
base product.

Predictions that Tuxedo will dominate the market, however, are premature.
Even big licensees hedge their bets on the technology.  "Tuxedo is there and
it works," said one licensee privately.  "It gives us a foundation for added
value, but we're not making a strategic commitment to it yet."

Vendors are reluctant to make strategic commitments to Tuxedo for two
reasons.  The first is key gaps in Tuxedo's functionality.  Tuxedo does not
support PCs or Macintoshes as clients.  It only supports character-based Unix
clients.  In early 1991, AT&T will begin fixing this problem by releasing a
early 1991, AT&T will begin fixing this problem by releasing a generic DOS
security.  Even AT&T concedes that it needs a validation service to check the
AT&T concedes that it needs a validation service to check the passwords users

The second reason for vendors' hesitation about Tuxedo is their doubts about
its openness to new technologies.  Vendors who believe remote procedure calls
are the simplest method for building network applications, for example, point
out that Tuxedo doesn't support an RPC yet.  Others see Tuxedo as a
monolithic product that won't be easy for AT&T to open up.  AT&T has not, for
example, provided an open interface to its logging service.  If a licensee
wants to substitute a new logging service, it is very difficult to do so.
These vendors would rather see a modular architecture akin to the Open
Software Foundation's Distributed Computing Environment (DCE).  Others worry
that the industry doesn't know enough yet about transaction monitors in open,
distributed OLTP to intelligently choose a standard.

Users, on the other hand, appear much less interested in debating Tuxedo's
merits than in evaluating it.  The big users we talked to were all either
using Tuxedo, evaluating it, or planning to evaluate it in the very near
future.  Tuxedo is the most immediately available alternative among Unix
transaction monitors today, they say.

The Distributed Future

Vendors and users alike believe the future of Unix OLTP lies in distributed
transaction processing.  This is why distributed OLTP is at the heart of both
X/Open's project to define open OLTP interfaces and ISO's effort to define
transaction processing protocols.

Distributing the execution of transactions across a network is a difficult
technical feat, particularly in an environment that is not under the control
of one vendor.  Users are skeptical that reliable distributed OLTP systems
will be available anytime soon.  "We haven't built enough distributed systems
to know what the proper constructs for distributed OLTP are," says Shearson's
David Sherr.  Still, Sherr is one of many who are closely watching the
efforts of the OSF and others to deliver on the promised flexibility and
scalability of distributed systems.

Earlier this year, assumptions about distributed transaction processing were
challenged by the OSF's DCE technology initiative.  DCE goes further than
most previous technologies supporting distributed OLTP by insisting that any
system must be totally modular in structure and be portable to a variety of
operating system kernels, not just Unix.  (See Network Monitor, July 1990.)
Modularity promises to ensure that systems, resource management, and
communications software will evolve to higher function and greater efficiency
without blowing away the applications built on them.  Portability to a
variety of kernels promises application portability.  Ultimately, the user's
investment in applications is protected and extended.  None of the
environments, toolkits, or RDBMSs discussed above meets both conditions.

Transarc Corporation (Pittsburgh), vendor of the Andrew File System selected
as part of the DCE, has become a leading proponent of the DCE approach for
OLTP.  Transarc expects to begin announcing the first products that conform
to its DCE-aligned architecture for OLTP in late 1990, with products becoming
available in 1991.  Transarc also has a strong IBM connection; IBM has an
equity investment in the company.  We expect IBM to work with Transarc to
deliver a future CICS version for AIX.

TRANSARC SOFTWARE ARCHITECTURE.  Transarc's distributed OLTP environment can
be layered atop a variety of operating system kernels.  It incorporates the
services provided with OSF's DCE: threading services, RPC, distributed time
service, naming services, authentication services, and user and group
management.  Transarc calls these Core Services, and they support a variety
of distributed applications.  (See Illustration 7.)

Extended Services.  Atop this distributed services platform, Transarc adds
the following Extended Distributed Systems Services that support OLTP
application development and management:

* Distributed Transaction Service, which coordinates and manages transactions
involving multiple processors and shared resources using a 2PC protocol

* Transactional RPC, which is Hewlett-Packard/Apollo's NCS with additional
semantics that describe transactions

* Logging Services, which record the information needed to recover after a
transaction aborts for any reason

* Recovery and Locking Services, which are available for use by a variety of
resource managers

* Protocol and Interface Translators, which allow interoperability of OLTP
systems using different protocols (LU6.2, etc.) and interfaces (XA, etc.)

* Programming Veneer, which uses C procedures and macros to shield developers
from the details of transaction management as they build their applications
(see the hypothetical sketch below)

Taken together, these services define the components of a transaction
manager.  Transarc plans to call them, collectively, the Transarc Toolkit.
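
Transarc has not published the veneer itself, so the fragment below is purely
hypothetical: an illustration of the style of C macros such a veneer might
offer, with none of the names coming from Transarc.  The macros bracket
application code, and an abort unwinds through setjmp/longjmp where a real
veneer would unwind through the Distributed Transaction Service.

    /* Hypothetical transaction veneer -- illustrative only; none of
       these names comes from Transarc. */

    #include <stdio.h>
    #include <setjmp.h>

    static jmp_buf txn_escape;

    #define BEGIN_TRANSACTION  if (setjmp(txn_escape) == 0) {         \
                                   printf("txn: begin\n");
    #define ABORT_TRANSACTION  longjmp(txn_escape, 1);
    #define END_TRANSACTION        printf("txn: commit\n");           \
                               } else {                               \
                                   printf("txn: aborted\n");          \
                               }

    int main(void)
    {
        int balance = 100, debit = 140;

        BEGIN_TRANSACTION
            if (debit > balance)
                ABORT_TRANSACTION          /* unwinds to the abort arm */
            balance -= debit;
        END_TRANSACTION

        printf("balance = %d\n", balance);
        return 0;
    }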

Resource Managers, Management, Development Tools.  Resource Managers in the
Transarc architecture conform to the X/Open model by accommodating RDBMSs,
file systems like C-ISAM and VSAM (as well as Transarc's own Structured File
System), and distributed Unix file systems.  Systems management is a big
question mark in the architecture because so much work is still to be done to
define utilities in distributed environments.  Transarc hopes the OSF will
provide help in this area with an upcoming technology initiative.

At the highest level of the architecture are high-level tools like screen
builders and CASE environments.  Transarc is developing an application
development and run-time environment called the Transarc Application
Programmers Environment.  The environment will provide a screen generator, a
transaction monitor library, a security library, and basic system
administration facilities.  In addition, Transarc hopes to attract
third-party tool vendors to its environment.

Conclusions

There are a lot of reasons to be excited about Unix OLTP.  It clearly
comprises a set of technologies that can help businesses solve problems at
attractive costs today.  At the same time, however, Unix OLTP technology is
very much in flux.  Vendors and users are struggling to understand how they
can build bigger systems, more powerful systems, and distributed Unix OLTP
systems.  As they do, they're redefining OLTP.  IBM mainframe veterans are
horrified by the thought of exposing OLTP systems to the inquiries of users.
Unix OLTP users reply: "Sorry, we need it that way.  Change the rules."

We believe the number of Unix OLTP applications will continue to grow.  The
rate of that growth, however, will be determined by the success of vendors
and users in dealing with three issues: system management, integration and
optimization, and standards.

System Management.  Transarc is about as bullish as a company can be about
open and distributed OLTP.  Yet even Transarc concedes that it will be years
before users have the kind of system management tools they'll need to manage
large distributed OLTP systems.  OSF's next big technology initiative will be
in distributed systems management.  Everyone in Unix OLTP hopes OSF finds a
robust basket of system management technologies to back--and fast.

Integration and Optimization.  Tight integration and optimized interactions
between the elements in a system are the keys to reliable performance.
Integration and optimization are real challenges in Unix OLTP systems because
of the number of vendors involved in the typical system.  Today, most systems
are the products of a hardware (and operating system kernel) supplier and a
database vendor.  Add to this mix a transaction monitor, and the number of
potential problems goes up.

Standards vs. Innovation.  There's a tug-of-war in Unix OLTP between those
who believe Unix OLTP needs standards now and those who believe Unix OLTP
needs innovation now and standards later.  The standards agreements needed to
open up Unix OLTP are complex and intertwined.  The fact that the industry
has been able to agree on the XA interface for RDBMS services is a positive
sign.  However, we don't expect a full suite of standards to be complete for
two years.

We expect the industry will need three years to sort out these issues.  In
the meantime, Unix OLTP will appear to be a fertile field to some, an
unstable landscape to others.  We don't believe users should preoccupy
themselves with the standards fights in Unix OLTP.  Unix OLTP presents real
opportunities to deliver quality applications at low costs today, and
functionality is advancing rapidly.  If you don't take advantage of the
technology where you can, your competitors will.
921.23I did a benchmark and won !ZURFCC::OLLODARTExpatriateThu May 30 1991 19:5735
    Hello,
    
    I did a benchmark against ORACLE/SQLforms/VAX, INGRES/ABF/VAX, NCR/ORACLE,
    and IBM/AS400, and won with RALLY/Rdb/VAX.
    
    The test consisted of a 2.8 GB database, about 10 forms, and menus, plus
    2 batch jobs: one a huge report and one to update the database at a steady
    rate.  The batch jobs were each started 16 times.
    
    The results were average response times for a set of 10 different
    transactions:
    
    The VAX was a 6410 with 64 MB of memory and 4 RA90s on an HSC50.
    
    
    ORACLE/VAX 1.7 seconds (3 data disks, plus 1 system disk)
    INGRES/VAX 2.1 seconds (3 data disks, plus 1 system disk)
    RALLY/Rdb/VAX 1.2 seconds (3 data disks, plus 1 system disk)
    NCR/ORACLE never got it running
    IBM/AS400 1.3 seconds (with, excuse me, 40 disks)
    
    
    By the way, we won the contract.  It is a 40 man-year project.
    
    
    Regards,
    
    Peter.
    
    P.S. I still have the CPD databases from all three VAX benchmarks (VPA),
    so if you want to show your customer what was running on the machine
    at the time from a VMS perspective, let me know by mail.
    
    			
    It is also free!!
921.24congratsTRCA03::MCMULLENKen McMullenThu May 30 1991 20:474
    Congratulations, nice win.
    
    Can you describe the benchmark a little more, please?  Was it multi-user,
    and what type of load?  By the way, did we use HASH keys on all the tables?
921.25Sell the Digital advantage - speed!KOBAL::KIRKI've lost my hidden agenda!Thu May 30 1991 21:1510
    Peter,
    
    What version of RALLY were you using? We have had reports from many
    customers that with RALLY V2.2 (latest version), RALLY is now clearly 
    the fastest 4GL running against Rdb.
    
    Sell the Digital advantage! Rdb and RALLY for fast application
    development and runtime.
    
    Richard
921.26An Article on the winTRCA03::MCMULLENKen McMullenFri May 31 1991 18:1810
    re .23
    
    Peter,
    
    It would be nice if someone from your office could write an article
    about this win. Many Digital offices shy away from Rally due to
    historical V1.x experiences. Having it published in Digital Today or
    Insight would give the win good exposure.
    
    Ken
921.27more infoZURFCC::OLLODARTExpatriateFri May 31 1991 22:3061
    The benchmark was with V2.0.  Richard, you know who I am talking about:
    APG.  They are of course unknown to most people who live outside of
    Switzerland.  They sell outdoor advertising space.  They have 80% of the
    market in Switzerland and are in France and Germany as well.
    
    I would like to write a story on it if you're interested.
    
    Basically we only had 10 interactive users.  The batch jobs were Rally
    reports with control breaks.  They joined about 6 of the tables in the
    database, the biggest one being about 5 million records.  Each took
    about three hours to run, including the sorting.  We started 16 of
    those.  In addition, 2 update programs, written by my coworker in
    Fortran, ran in batch and updated the 5-million-record table.
    
    Each base table was stored via a hash key.  Secondary keys were B-tree
    indexes in separate storage areas.  This is fast for joins and ranges
    using just key lookup.
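    
    For anyone who hasn't used hashed placement, the gain is roughly this
    (a conceptual sketch in C, not Rdb internals): the hash computes the
    record's data page directly from the key, so a keyed fetch costs about
    one I/O, where a B-tree pays an extra I/O for each index level it walks.
    
        #include <stdio.h>
    
        #define PAGES 1000
    
        /* Hashed placement: the key alone determines the data page. */
        static long page_of(long key) { return key % PAGES; }
    
        int main(void)
        {
            long key = 4711;
            printf("hashed: key %ld -> page %ld (about one I/O)\n",
                   key, page_of(key));
            printf("B-tree: root -> index -> leaf -> data (I/O per level)\n");
            return 0;
        }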
    
    I still have the database on tapes. Rally forms were fairly simple,
    except for a few where there was a lot of parent-child stuff.
    
    (At least it was easy to build in Rally.  NCR couldn't do it; in fact,
    NCR never got the database running.  What a bunch of ...)
    
    The 6410 was running at 85% CPU and the disks were at 25 I/Os, and we
    were still getting great response times due to the hashing.  I think the
    max was about 2 seconds, and they queried lots of different things
    (2 seconds for a parent, 5 children, and 15 grandchildren records).
    
    If Rdb weren't single-threaded, Rally would perform great with lots of
    users, since it doesn't really use a lot of overhead for forms.  The menu
    system is one of the best I've seen; it is far superior to Ingres's,
    for example.
    
    With ODI you can build server-style reusable processes.  ACMS is only one
    way; you can use RTR as well, since its system-service message routing is
    easy to integrate with Rally's routines.  (This is my opinion; I like all
    Digital products.)
    
    I really wish people would stop bad-mouthing our products.  Rally never
    seems to get away from its V1.1 reputation.  I always run into it with
    new customers: "Oh, Rally ....."
    
    The nicest thing about this benchmark was that Oracle thought they would
    win the whole time.  They were so angry at losing that they tried to make
    trouble in our other accounts that use Oracle.  Some Oracle VP from
    Europe even met with our AC to see whether Digital would be friend or foe
    in the future.
    
    For myself, after so MANY bitter, frustrating losses to Oracle, I was
    really happy to whip 'em, as they say.
    
    I'll see what I can do with our sales people to get a story on it.  I
    know the customer would be interested in something like this.  In the
    meantime, I know most of them well, as I spend 1 - 2 paid days a week
    helping them with the project.
    
    Regards,
    
    Peter