
Conference ulysse::rdb_vms_competition

Title:DEC Rdb against the World
Moderator:HERON::GODFRIND
Created:Fri Jun 12 1987
Last Modified:Thu Feb 23 1995
Last Successful Update:Fri Jun 06 1997
Number of topics:1348
Total number of notes:5438

395.0. "Comments on "SECURE ORACLE"???" by SNOC02::ANDERSONK (The Unbearable Lightness of Being) Mon Aug 07 1989 15:58

         <<< CHEFS::DISK$APPLICATION:[NOTES$LIBRARY]VIA_FORUM.NOTE;2 >>>
                                 -< VIA FORUM >-
================================================================================
Note 238.0                        Secure ORACLE                       No replies
LARVAE::SPIRES "Richard Spires"                     187 lines  28-JUL-1989 11:14
--------------------------------------------------------------------------------
The following notes are based on a presentation of Secure Oracle attended
by my colleague Andy Beale.



Notes from ORACLE:  "Trusted Database Interpretation" Seminar

Summary

T.D.I.	     -	   The Trusted Database Interpretation of the "Orange 
                   Book".
	     	   This is currently in second draft and expected to 
                   be released in 1990.

ORACLE V6.0  -	   Nearly C2 compliant.
	     	   needs Group Access Controls
	     	         Audit of Individuals
	     	         Audit of Audit Trail Deletions
	     	         Audit of "Connect Internal" privilege.

	     -	   ORACLE's MLS system will utilise the MACs of the 
                   operating system upon which it resides. 
	     	   Each Security Classification will have an 
                   "Instance" of the Oracle database.  This will be 
                   able to Read/Write to its own database but only 
                   Read databases of a lower classification.
	     	   Development work is under way on SEVMS and other B1 
                   systems.


The Seminar was given by Linda Vetter, ORACLE SECURE SYSTEMS.  She 
recently moved from NCSC having been involved in evaluation of 
operating systems.

Discretionary Controls

After outlining aspects of Discretionary Security she indicated that 
V6.0 of ORACLE was very close to C2 functional compliance.  (The 
outstanding features will be included in the next release, V6.1? 
V7.0??)

Also in the next release are:

Roles	     -	   the definition of privileges by user function.

	     -	   refinement of the use of DBA, SYS privilege.

Application level security support.

Enhanced compatibility with external products (RACF, ACF2...)

Referential  -  data consistency through integrity constraints.
Integrity

The following are requirements placed on the host operating system to 
achieve C2 level security:

	     -	   Protection of Storage Objects (i.e. files)
	     -	   Separation of Individual address space/domains
	     -	   Support for "Connect Internal" access
	     	   i.e. Control over who can start/stop databases.


This version has been submitted to NCSC for evaluation which will 
begin just as soon as they have something to evaluate against i.e. 
when the T.D.I. is released.  It will initially be evaluated on a VMS 
platform with other implementations following close behind.  
N.B.  These later ones only examine 'porting' aspects.

Mandatory Controls

Careful distinction between Named Objects (Tables, Views, Sequences, 
Roles) and Storage Objects (files).  When referring to operating 
systems these tend to be one and the same (i.e. files) but with 
databases it's different. (You can say that again!)

Three models for Multi level Databases.

1)  Integrity Locks


                    +-------+---------+-----------+
                    |       |         |           |
    +-----+         | User  | Trusted | Untrusted |          +-----+
    | User+---------+ appl. |  Filter |   DBMS    |          | Data|
    |     |         |       |         |           +----------+     |
    +-----+         |       |         |           |          +-----+
                    +-------+---------+-----------+
                    |      Trusted    O.S.        |
                    +-----------------------------+
	     -	   duplication of functionality in filter and DBMS

	     -	   need for encryption in Filter (for integrity)

2)  Subsets

                        +------------------+
                        |                  |
   +-----+              |   User Appl.     |               +-----+
   | User|              +------------------+               | Data|
   |     +--------------+  Trusted  DBMS   +---------------+     |
   +-----+              +------------------+               +-----+
                        |   Trusted  OS    |
                        |                  |
                        +------------------+

	     -	   Each Subset (i.e. DBMS, OS, appl.) has its own 
                   security policy

	     -	   devices controlled by Most Privileged Subset


3)  Monolithic

         +--------+          +------------------+
         | User   |          |    Trusted       |
+----+   | Appli. |\    /\   |                  |     +----+
|User|   +--------+ \  /  \  |    DBMS   &      +-----+Data|
|    +---+ Trusted|  \/    \ |                  |     |    |
+----+   |  O. S. |         \|      O. S.       |     +----+
         |        |          |                  |
         +--------+ Trusted  +------------------+
                    Network

                                -  Runs on native hardware
                                -  Dedicated machine
                                -  not general purpose
                                -  separate evaluation

ORACLE using subset approach:

	     -	   Quickest way for evaluation (just evaluate DBMS)
	     -	   Consistent with Portability guidelines
	     -	   Use MACs of Operating System
	     -	   No additional MAC evaluation within DBMS
	     -	   Multiple instances of ORACLE, one per security 
                   classification, each with own Global Data Areas 
                   etc.

	     	   Classification done at any granularity and 
                   logically combined as multi-level relations.  
                   Physically decomposed and stored in single-level 
                   Storage Objects, e.g. files.

e.g.        User 3             User 2            User 1

           +------+           +------+          +------+
           | T.S. |           |   S  |          |  U/C |
           |      |           |      |          |      |
           |ORACLE|           |ORACLE|          |ORACLE|
           |      |           |      |          |      |
           +------+           +------+          +------+
            RW| \RO\             | RW \            | RW
              |  \  \            |     \ RO        |
              |   \  \           |      \          |
              |    \  \_______________   \         |
              |     \            |    \   \        |
           +------+  \        +------+ \   \    +------+
           | T.S. |   \______ |   S  |  \   \___|  U/C |
           |      |           |      |   \______|      |
           | Data |           | Data |          | Data |
           +------+           +------+          +------+


Thus each user can Read/Write to data of same level but only Read data 
of lower level.

e.g.  Employee data table with 1 record at each level.  User Three 
sees 3 records, User Two sees 2 and User One only 1.   

Little overhead since protection only checked at file open (on DB 
startup).
                               
Can pass "Advisory Labels" to application i.e. which records came from 
which classification. 

Modification of Lower level data automatically upgrades it (can't 
write down).
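The read-down policy above (each subject reads its own and lower levels, 
writes at its own level, and modifications of lower-level data are 
upgraded) can be sketched roughly as follows.  The level names follow 
the diagram, but the class and its methods are purely illustrative, not 
Oracle's design:

```python
# Illustrative sketch of the read-down / write-same policy described
# above, including automatic upgrade on modification (no write-down).
# Levels and class names are assumptions for the example only.

LEVELS = {"U/C": 0, "S": 1, "T.S.": 2}  # unclassified < secret < top secret

class MLSTable:
    def __init__(self):
        self.rows = []  # list of (level, record) pairs

    def select(self, subject_level):
        """Read down: rows at or below the subject's level are visible."""
        return [rec for lvl, rec in self.rows
                if LEVELS[lvl] <= LEVELS[subject_level]]

    def insert(self, subject_level, record):
        """Writes are labelled with the subject's own level."""
        self.rows.append((subject_level, record))

    def update(self, subject_level, match, new_record):
        """Modifying a visible lower-level row upgrades it to the
        subject's level, so data can never be written down."""
        for i, (lvl, rec) in enumerate(self.rows):
            if rec == match and LEVELS[lvl] <= LEVELS[subject_level]:
                self.rows[i] = (subject_level, new_record)
                return True
        return False

# The employee-table example: one record at each level.
emp = MLSTable()
emp.insert("U/C", {"name": "Smith"})
emp.insert("S", {"name": "Jones"})
emp.insert("T.S.", {"name": "Brown"})

print(len(emp.select("T.S.")))  # 3 -- User Three sees all records
print(len(emp.select("S")))     # 2 -- User Two sees two
print(len(emp.select("U/C")))   # 1 -- User One sees only one
```

If the S-level user then updates Smith's record, it is relabelled S and 
vanishes from the U/C user's view, which is exactly the "automatic 
upgrade" behaviour described above.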

Development  

Versions for most B1-A1 secure operating systems (SEVMS, GEMSOS, 
etc.) are to be available.  B1 functionality in 1990?  Evaluation later. 

No encryption!!

Export restrictions as per base operating system.
================================================================================
Note 395.1              B1 approach has many pros/cons                    1 of 3
SQLRUS::DAVISON "Jay Davison - DTN 264-1168"         50 lines  24-AUG-1989 16:13
--------------------------------------------------------------------------------
    Well...
    Nobody jumped forward with any comments, so I'll just say a few quick
    words about the B1 Oracle approach.

    The "subsetting" approach being used by Oracle ("subsetting" is the term
    used in the draft TDI) in order to achieve B1 makes sense for them
    for several reasons:
    
    - It will allow Oracle to keep the same design across multiple secure
      OS platforms, since the OS is performing the mandatory access control
      policy.
    - It is a conceptually simple architecture that has been more or less
      blessed by the NCSC (the agency within NSA that evaluates secure 
      systems and subsystems).  The first draft TDI was heavily biased
      toward this architecture.  As the base note says, it should give them
      an advantage at evaluation time, since the DBMS is really just
      enforcing the discretionary access control policy.  This means that
      the DBMS can be evaluated independently of the OS.
    - Oracle has been heavily involved with the "SeaView" A1 research
      effort, and is currently a subcontractor for the prototype
      implementation effort.  The SeaView approach advocated the use of
      this type of security architecture for the DBMS.
    - Oracle has a contract with the NCSC to develop a B1 DBMS using
      the subsetting approach (on SEVMS, I believe). (Teradata also has a
      contract to produce a B1 DBMS using the "monolithic" approach).
      These contracts are "helping" the NCSC "validate" the TDI criteria, 
      in the same way that Multics helped them with the Orange Book 
      criteria.  Whether you believe/accept this type of arrangement between
      a vendor and a "neutral" agency is a separate matter not to be
      discussed here (it's a sore point).

    The above list gives some of the advantages.
    However, there are some interesting disadvantages too.  Very briefly:
    
    - Multilevel data (and consequently metadata too) must be separated
      into single level OS files.  This can amount to quite a lot of files for
      applications using many security classifications and compartments.
      Indexes must also be split up into many separate files, since they
      hold data too.  Reconstructing the multilevel data from single level
      files will quickly become a performance problem.
    - The DBMS will have to maintain a lot of info about what OS files are
      used to reconstruct the schema/relations.  The separation of the
      data into single level files is not transparent to the DBMS.
    - Integrity of data will be impossible to enforce.  Since the DBMS can
      not "see" all the data (because it can't open the OS files to
      determine if a piece of data exists), it can't enforce entity
      integrity and referential integrity of the data.  This is a subtle,
      but key byproduct of this architecture.  It means that the integrity
      of the data can be sacrificed in order to achieve secrecy.  That will
      be unacceptable to many users.
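The integrity point can be made concrete with a rough sketch 
(hypothetical structures, not Oracle's actual design): each level's 
rows live in a separate single-level file that only instances at or 
above that level may open, so an S-level DBMS instance cannot even tell 
whether a T.S. parent row exists:

```python
# Sketch of the single-level-file decomposition and the referential
# integrity problem it creates.  One "file" (here: a dict) per level;
# an instance may only open files at or below its own level.
# Table and column names are made up for the example.

LEVELS = {"U/C": 0, "S": 1, "T.S.": 2}

files = {
    "U/C":  {"dept": [{"deptno": 10}]},
    "S":    {"emp":  [{"name": "Jones", "deptno": 20}]},
    "T.S.": {"dept": [{"deptno": 20}]},   # the referenced parent row
}

def visible_rows(table, instance_level):
    """Rows an instance can see: only files it is allowed to open."""
    return [row
            for lvl, tables in files.items()
            if LEVELS[lvl] <= LEVELS[instance_level]
            for row in tables.get(table, [])]

def fk_holds(instance_level, child_deptno):
    """A foreign-key check as performed by an instance at one level."""
    return any(d["deptno"] == child_deptno
               for d in visible_rows("dept", instance_level))

# The S-level instance cannot open the T.S. file, so a perfectly
# valid foreign key (deptno = 20) looks violated to it:
print(fk_holds("S", 20))     # False -- integrity cannot be enforced
print(fk_holds("T.S.", 20))  # True  -- the parent row does exist
```

The DBMS at level S has no way to distinguish "the parent row is 
classified above me" from "the parent row does not exist", which is 
precisely why integrity gets sacrificed for secrecy in this 
architecture.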
================================================================================
Note 395.2      Aren't some of these inherent in secure systems?          2 of 3
COOKIE::BERENSON "VAX Rdb/VMS Veteran"               18 lines  24-AUG-1989 23:00
--------------------------------------------------------------------------------
>    - Integrity of data will be impossible to enforce.  Since the DBMS can
>      not "see" all the data (because it can't open the OS files to
>      determine if a piece of data exists), it can't enforce entity
>      integrity and referential integrity of the data.  This is a subtle,
>      but key byproduct of this architecture.  It means that the integrity
>      of the data can be sacrificed in order to achieve secrecy.  That will
>      be unacceptable to many users.

Is there an alternative here for any implementation?  After all, if you
permit integrity constraints between levels, then it is easy to
determine information just by performing existence checks.  In other
words, integrity constraints become the equivalent of determining that
an underground nuclear test just occurred due to the fact that point A
sent point B a message, even though the message was encrypted and you
couldn't understand it.
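That inference channel is easy to demonstrate with a small sketch: if a 
uniqueness constraint were enforced across levels, a low-level user 
could map out classified rows simply by watching which inserts get 
rejected.  All the names here are invented for illustration:

```python
# Sketch of the existence-check inference channel: a cross-level
# unique constraint lets a low-level user detect classified rows
# by probing with inserts.  Data and names are purely illustrative.

secret_tests = {"site_a", "site_c"}   # high-level rows, invisible to low users

def low_user_insert(test_site, low_table):
    """Cross-level uniqueness: reject if the key exists at ANY level."""
    if test_site in secret_tests or test_site in low_table:
        raise ValueError("constraint violation")
    low_table.add(test_site)

low_table = set()
leaked = []
for site in ("site_a", "site_b", "site_c"):
    try:
        low_user_insert(site, low_table)
    except ValueError:
        leaked.append(site)   # a rejection reveals a classified row exists

print(leaked)     # ['site_a', 'site_c'] -- secrecy lost through integrity
print(low_table)  # {'site_b'}
```

The low user never reads a classified row, yet learns exactly which 
classified keys exist, which is the database equivalent of inferring 
the nuclear test from the encrypted message traffic.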

A subject for a discussion outside this conference would be how we
enforce integrity constraints at B levels of security.
================================================================================
Note 395.3           Some alternatives, but none for Oracle               3 of 3
BANZAI::DAVISON "Jay Davison - DTN 264-1168"         40 lines  25-AUG-1989 05:26
--------------------------------------------------------------------------------
>>    - Integrity of data will be impossible to enforce.  Since the DBMS can
>>      not "see" all the data (because it can't open the OS files to
>>      determine if a piece of data exists), it can't enforce entity
>>      integrity and referential integrity of the data.  This is a subtle,
>>      but key byproduct of this architecture.  It means that the integrity
>>      of the data can be sacrificed in order to achieve secrecy.  That will
>>      be unacceptable to many users.

>Is there an alternative here for any implementation?  After all, if you
>permit integrity constraints between levels, then it is easy to
>determine information just by performing existence checks.  In other
>words, integrity constraints become the equivalent of determining that
>an underground nuclear test just occurred due to the fact that point A
>sent point B a message, even though the message was encrypted and you
>couldn't understand it.

Yes, there is an alternative, and it basically boils down to allowing
the type of "inference" channel that is described above.  Some users will
want the secrecy of the data, but will not want the integrity of the data
sacrificed in order to achieve the secrecy.  For these users, the "inference
channel" is OK, as long as they can properly audit it, or control it through
proper database design (taking data classification into consideration).  The
point is that it may be best to provide options in the situations where
the secrecy characteristics of the data are contrary to its integrity.
The problem goes way beyond the problem mentioned above of integrity
constraints being performed between levels (too detailed for this forum).
The point I was making is that the Oracle B1 architecture can never make these
decisions because the DBMS can't even get to the data to see if it is in fact
there.  If some "low level" portion of the DBMS could at least "understand"
that a security/integrity problem was about to occur, then it could do
some smarter things (even if they may result in inference problems), like
automatic upgrade, for instance.

>A subject for a discussion outside this conference would be how we
>enforce integrity constraints at B levels of security.

I agree.  This is an interesting problem.  Obviously they can be enforced if
they are not enforced across multiple levels of data.  The key to the problem
lies in getting the data classified correctly during input (classification
constraints?).  I'd be happy to discuss it offline, since it's highly relevant.