
Conference ulysse::rdb_vms_competition

Title:DEC Rdb against the World
Moderator:HERON::GODFRIND
Created:Fri Jun 12 1987
Last Modified:Thu Feb 23 1995
Last Successful Update:Fri Jun 06 1997
Number of topics:1348
Total number of notes:5438

1132.0. "Oracle client/server performance?" by BIGRED::THOMPSON (Texan 'til I die) Wed Apr 08 1992 00:30

    I need some help and wondered if someone might point me in the right
    direction.
    
    We have a customer who has a Penta application (written using Oracle). 
    They had all their users on a microVAX 3600 and then upgraded to a VAX
    4000 model 300.  They got 12-15 users on the system and then the
    performance got critical.  Their corporate direction is Un*x but they
    like VMS.  We had a meeting with the Oracle techies and thought we had a
    good solution - move the users to DECsystem 5500's and leave
    the database on the VAX and use SQL*NET to access the data.  All went
    well and the customer implemented the solution.  They increased their
    users to 25-30.  Now the VAX 4000 is 100% CPU bound with Oracle server
    processes and one of the DECsystems is hardly used.  It seems that the
    VAX is still doing a lot of the work.   Is this true?  Most of the CPU
    time is spent in Supervisor mode - where Oracle runs.  I thought they
    would see a lot of I/O's not CPU time - what is it doing?  They are going
    to get Oracle to come out and take a look, but they are wondering why
    this client-server stuff didn't buy them anything.  There are no
    interactive users on the VAX 4000.
    
    Any ideas or pointers would be helpful!  Or is this typical Oracle
    client/server performance?  Is ours any better?
    
    Thanks!
    
    Patti
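
    The usual explanation for this pattern: with the SQL*Net split of that
    era, each connection still gets a dedicated Oracle server (shadow) process
    on the database host, so SQL parsing, execution and sorting stay on the
    VAX; only the SQL*Forms front-end work moves to the DECsystem clients.
    A minimal back-of-envelope sketch of that effect follows, using entirely
    hypothetical per-user CPU figures (nothing in it is measured data from
    this note):

# Crude model: only the forms front end moves to the client; each connection
# still runs a dedicated Oracle server process on the database host.
# The per-user CPU figures below are HYPOTHETICAL, purely for illustration.

FORMS_CPU_PER_USER = 0.8   # CPU-sec/min for screen handling (client side in C/S)
SQL_CPU_PER_USER   = 2.2   # CPU-sec/min for parse/execute/sort (always on DB host)

def db_host_utilisation(users, client_server, cpu_sec_per_min=60.0):
    """Fraction of the database host's CPU consumed per wall-clock minute."""
    per_user = SQL_CPU_PER_USER + (0.0 if client_server else FORMS_CPU_PER_USER)
    return users * per_user / cpu_sec_per_min

for users in (15, 25, 30):
    ts = db_host_utilisation(users, client_server=False)
    cs = db_host_utilisation(users, client_server=True)
    print(f"{users:2d} users: timeshare {ts:4.0%}   client/server {cs:4.0%}")

    With ratios like these, off-loading the forms layer removes only a
    fraction of the per-user CPU demand, which is consistent with the VAX 4000
    saturating again once the user count grew to 25-30.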
      
    
    
1132.1. "Common Problem" by TRCOA::MCMULLEN (Ken McMullen) Wed Apr 08 1992 23:36 (6 lines)
    A customer in Toronto tried to run a SQueaL*FORMS/ORACLE application in
    Client/Server mode across their network (uVAX client, VAX server) and
    found that SET HOST with screens running across the network was faster
    and more efficient! The Big O seems to have some difficulties with
    their client/server implementation.
    
1132.2. "Oracle in a client/server environment" by USWRSL::WITTENBER_ST Thu Apr 09 1992 02:11 (2 lines)
    Anil Kumar at dtn 223-5190 heads up a team of DEC folks doing extensive
    testing of Oracle in a client/server environment. 
1132.3. "Same under Ultrix" by HGOVC::DEANGELIS (Momuntai) Thu Apr 09 1992 08:02 (6 lines)
re .1

Same thing was found in a large Ultrix environment in SE Asia - rlogin was 
faster than client/server... (using SQL*forms).

John.
1132.4. "Client/Server program info" by MRKTNG::SILVERBERG (Mark Silverberg DTN 264-2269 TTB1-5/B3) Thu Apr 16 1992 16:08 (384 lines)
-------------------------------------------------------------------------------
LEDDEV::KUMAR "Anil Kumar, USS/SDES, MLO4/5-T37"                    15-APR-1992
--------------------------------------------------------------------------------
		HETEROGENEOUS CLIENT/SERVER DATABASE PROGRAM
				BY 
		SYSTEMS DESIGN & ENGINEERING SERVICES


	We are glad to announce some of the results obtained so far under the
	Heterogeneous C/S Database (HCSD) program. The intent of this program
	is to characterize 3rd Party Database environments on DEC platforms,
	including RISC/MIPS & ALPHA on Ultrix and OSF/1.

	As an initial effort under the HCSD program, we have completed 3
	projects.  The results are posted in the following notes.

	The HCSD program and results were presented at RISC/ULTRIX Sales Support
	Symposium, on March 30th in Boston, MA.

	A public directory has been set up to serve as a repository for ALL	
	sorts of information coming out of this program. 
	It is located at:
			LEDDEV::DISK$USER16:[PUBLIC.HCSD]

	The types of information available in this area are:
		- HCSD Program Plan (HCSD_PROGRAM_PLAN_REVA.PS)
		- RISC/Ultrix Presentation (HCSD_OVERVIEW.PS)
		- Platform Requirements (*REQ.PS)
		- Engineering reports of tested configurations (*ENG_RPT.PS)
		- White Papers (*WHITEPAPER.PS)
		- Platform Templates (*TEMPLATES.PS)


	Any questions, suggestions, please feel free to contact us:

	NAME			DTN				NODE
	George Kasper		223-4453 (508)493-4453		LEDDEV::KASPER
	Paul Kotschenreuther	223-4215 (508)493-4215		OLDTMR::COUCH
	Anil Kumar		223-5190 (508)493-5190		LEDDEV::KUMAR
	LeRoy Smith		223-4886 (508)493-4886		OLDTMR::SMITH






            		GENERAL C/S MARKETING MESSAGES
	        	==============================

	o Open Configurations do really work!!!

	o Off-loading application CPU load could be the best solution from a
                price/performance perspective !!!
	
	o Ideal solution for Preserving Current Investment


	o Effective Growth Strategy for Time-share Installations


	o System Optimization through Specialization (distribution of work load)


	o Systems configured by intermixing various H/W & OS could be the most
          efficient deployment of IS


	o Benefits of adding Application Servers to existing Standalone System


		- Increases number of Users

		- Improves overall performance of the system (in most cases)

		- Allows for distribution of processing power

		- Allows deployment of different OS as needed

		- Allows for multiple development environments

		- Extends longevity of standalone system

		- Preserves investments already made

		- Enables cost-effective scalability to meet growth demands

		- Allows flexibility in system configuration to meet 
		  organizational requirements



        	CHARACTERIZATION OF DEPARTMENTAL CLIENT/SERVER SYSTEM
	        =====================================================

		                PROJECT OBJECTIVE
                		-----------------

	The objective of this project was to evaluate a work load that is mainly
	backend (database engine) intensive and to establish the MAXIMUM number
	of actual, not theoretical, connects in an Ultrix environment.

	Note that the work load is just a single function, and as such these
	results are not EASILY SCALABLE, even for the backend configurations!

	Also, the Reserve CPU and Memory details are provided just to
	highlight the usage, and can be used to configure other work loads.
	

		        MARKETING MESSAGES FOR ULTRIX C/S SYSTEM
	        	----------------------------------------


	o Maximum C/S system user connects of 440 !!!
               (exceeds the Ultrix limit of 256 and the Ultrix/Telnet limit of 176)


	o Scalability of C/S system for User Connects !!!


	o Scalability of C/S system for Active Users !!!


	o Reserve/unused CPU and Memory Increase !!!


	o Caveat: C/S may not be suitable for all types of user environments!




          		RESULTS from DDBS 5100/5500
	         	---------------------------
	
	Successful C/S implementation of an Oracle database on the Ultrix OS in
	a (3) DS 5100 / (1) DS 5500 configuration, driven by an RTE with a
	Debit/Credit work load in a Telnet environment over the TCP/IP protocol,
	with the following results:

	o Max Number of connects, DS(3)5100/5500 C/S	- 440 @60sec Think Time 

	o Max Number of Users, DS5500 Standalone	- 135 @10sec TT

	o Max Number of Users, DS(3)5100/5500 C/S	- 150 @10sec TT

	Note that the criterion was 90% of responses to be within 2 secs!

BACKUP DATA:

Refer to the Engineering Report at
LEDDEV::DISK$USER16:[PUBLIC.HCSD]*ENG_RPT.PS

	Anticipated Maximum Connections
--------------------------------------------------------------------------------
Systems				Configuration		Max. Connections
--------------------------------------------------------------------------------
DECsystem 5500			Standalone		176
DECsystem 5500 & 1 DS 5100	Server/Client		176
DECsystem 5500 & 2 DS 5100s	Server/Client		352
DECsystem 5500 & 3 DS 5100s	Server/Client		440
--------------------------------------------------------------------------------

             Active Users Supported
--------------------------------------------------------------------------------
Systems				Configuration		Active Users 
--------------------------------------------------------------------------------
DECsystem 5500			Standalone		135
DECsystem 5500 & 1 DS 5100	Server/Client		120
DECsystem 5500 & 2 DS 5100s	Server/Client		150
--------------------------------------------------------------------------------

             Reserve/Unused CPU 
--------------------------------------------------------------------------------
Systems/Configuration		Available  MIPS			Unused MIPS
			Client	Server	Total (Used)	Client	Server	Total
--------------------------------------------------------------------------------
DS 5500 Standalone	NA	32.5	32.5  (17.5)	NA	15	15
DS 5500 & 1 DS 5100	21.7	32.5	54.2  (32.2)	11	11	22
DS 5500 & 2 DS 5100s	43.4	32.5	75.9  (29.9)	33	13	46
--------------------------------------------------------------------------------

             Reserve/Unused Memory (Mbytes)
--------------------------------------------------------------------------------
Systems/Configuration		Available Memory	Reserve Memory
			Client	Server	Total (Used)	Client	Server	Total
--------------------------------------------------------------------------------
DS 5500 Standalone	NA	256	256   (174)	NA	82	82
DS 5500 & 1 DS 5100	256	256	512   (347)	31	134	165
DS 5500 & 2 DS 5100s	512	256	768   (534)	106	128	234
--------------------------------------------------------------------------------
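
	The Reserve/Unused columns above are simply the available totals minus
	the used totals; a small sketch that reproduces them from the table
	figures (nothing here is independently measured):

# Reserve = available total - used total, using the figures from the tables
# above (MIPS and Mbytes exactly as published there).
cpu_mips = {                      # name: (available total, used)
    "DS 5500 standalone":   (32.5, 17.5),
    "DS 5500 + 1 DS 5100":  (54.2, 32.2),
    "DS 5500 + 2 DS 5100s": (75.9, 29.9),
}
memory_mb = {                     # name: (available total, used)
    "DS 5500 standalone":   (256, 174),
    "DS 5500 + 1 DS 5100":  (512, 347),
    "DS 5500 + 2 DS 5100s": (768, 534),
}

for name, (avail, used) in cpu_mips.items():
    m_avail, m_used = memory_mb[name]
    print(f"{name}: {avail - used:.0f} MIPS and "
          f"{m_avail - m_used} Mbytes in reserve")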



         		CHARACTERIZATION OF PC-SERVER SYSTEM
	         	====================================


		                 PROJECT OBJECTIVE
		                 -----------------

	The objective of this project was to demonstrate operability and to get
	a rough feel for DS5100 system capacity as a 20-user PC server.



		  MARKETING MESSAGES FOR PC-ULTRIX C/S SYSTEM
                  -------------------------------------------

	o Proved Operability of Oracle C/S functionality across Pathworks PC
                Server


	o Allows a DB solution at $4881 cost/seat for 40 users (see the sketch
	  below)


	o System has performance/capacity beyond 40 Users
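
	The note does not break out the configuration price, but the per-seat
	figure implies a total of roughly 40 x $4881 = $195,240 for the 40-user
	setup; the arithmetic (the total is derived here, not quoted):

# Implied total configuration cost from the quoted cost/seat (derived figure).
cost_per_seat = 4881          # USD, as quoted above
seats = 40
print(f"Implied configuration cost: ${cost_per_seat * seats:,}")   # $195,240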




		RESULTS from PCDBS PC/5100
		--------------------------

	Successful C/S implementation of an Oracle database on the Ultrix OS and
        Pathworks in a (20) 316sx/325c-based PC / DS5100 configuration, running
	a Debit/Credit work load in a PC environment over multiple protocols,
	with the following results:


	o 20 PCs (20 physical connects)         10 sec Think Time

	o 20 PCs (approximating to 40)          5 sec TT  (see the sketch below)

	o 17 PCs                                Minimum TT


Note: the 316sx and 325c PCs gave the same response times, a reflection on the
	work load and not on the PCs.
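
	The '(approximating to 40)' line follows from the usual closed-system
	relation, throughput = N / (think time + response time): with response
	time small relative to think time, 20 PCs at a 5 sec think time offer
	about the same load as 40 PCs at 10 sec.  A minimal sketch of that
	reasoning, assuming a response time of roughly 1 second (an assumption,
	not a figure from this note):

# Closed-system offered load: throughput ~ N / (think_time + response_time).
# The 1-second response time below is assumed purely for illustration.
def offered_tps(n_clients, think_s, resp_s=1.0):
    return n_clients / (think_s + resp_s)

print(f"20 PCs @  5 s think: {offered_tps(20, 5):.2f} tps")
print(f"40 PCs @ 10 s think: {offered_tps(40, 10):.2f} tps")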

BACKUP DATA:

Refer to the Engineering Report at
LEDDEV::DISK$USER16:[PUBLIC.HCSD]*ENG_RPT.PS




		CHARACTERIZATION OF APPLICATION SERVER SYSTEM
		=============================================


                 		PROJECT OBJECTIVE
		                -----------------

	The objective of this project was to evaluate the DS 5000-240 as an
	Application server (frontend) for a VAX/VMS database server (backend).

	Note that the results mention the DS5900 because we used a DS5900 for
	testing, as we didn't have access to a DS5000-240.  However, we believe
	these numbers are, for all practical purposes, valid for the DS 5000-240.

	Also note that we collaborated with the FABs/ISA Performance Group
	to develop and use an Oracle Financial work load, which was used to
	characterize the Application server.

	For details on financial characterization, please contact:

	NAME			DTN				NODE
	John Finocchiaro	264-4324 (603)884-4324	vaxmkt::finocchiaro
	Mark Slater		264-6309 (603)884-6309	vaxmkt::slater
	Bernie Velivis		264-3376 (603)884-3376	vaxmkt::velivis


		MARKETING MESSAGES FOR APPLICATION SERVER
		-----------------------------------------

	o DS 5000-240 is an EXCELLENT Application Server for high-end VAX/VMS
	  machines (clusters).


	o Proved Operability of Oracle C/S functionality across VAX/VMS to 
	  RISC/ULTRIX Systems


	o Preserve Current Investments


	o Increase # Users


	o Improve the System Performance






			RESULTS from DBAS DS5900
			------------------------

	Successful C/S implementation of an Oracle database on a heterogeneous
        RISC/ULTRIX DS5900 / VAX 6000-510 configuration, running the Oracle
        Financial work load in a LAT environment over the DECnet protocol,
        with the following results:


	o Response Time Decrease (VAX 6000-410)          - 9sec->3sec


	o Number of Users Increase, (VAX6000-410)        - 20->45


	o Response Time Decrease (VAX 6000-510)          - 13Sec->4sec


	o Number of Users Increase, (VAX6000-510)        - 36->71



	VAX6000-410 Standalone and C/S Comparison
============================================================
		    VAX 6000-410 TIME SHARE
============================================================

				95th percentile	Average
		6410	5900	TRANSACTION	Transaction
USERS	TPUT	%BUSY	%BUSY	SERVICE TIME	Service Time
-----	----	-----	-----	------------	------------
 4	508	18	N/A	           4	           3
 8	994	36	N/A	           6	           4
12	1461	53	N/A	           7	           5
14	1673	61	N/A	          10	           6
16	1881	69	N/A	          11	           7
20	2235	80	N/A	          17	           9

------------------------------------------------------------
	Max. tput is 2235, limited by 6410 CPU @80%
============================================================


============================================================
	    D5900 CLIENT + VAX 6000-410 SERVER
============================================================

				95th percentile	Average
		6410	5900	TRANSACTION	Transaction
USERS	TPUT	%BUSY	%BUSY	SERVICE TIME	Service Time
-----	----	-----	-----	------------	------------
10	1305	 9	13	           4	           3
20	2581	19	24	           5	           3
30	3817	37	36	          10	           5
45	5545	59	53	          19	           8

------------------------------------------------------------
	Max. tput is 5545, limited by response time
============================================================



	VAX6000-510 Standalone and C/S Comparison
============================================================
	VAX 6000-510 TIME SHARE
============================================================
                6510    5900    AVERAGE
                %CPU    %CPU    SERVICE
USERS   TPUT    BUSY    BUSY    TIME
-----   ----    -----   -----   ------------
14	1787	33	N/A	 2.5
22	2763	52	N/A	 3.2
30	3629	70	N/A	 5.1
36	4175	80	N/A	 7.0
42	4675	93	N/A	13.2	<-- Today
--------------------------------------------


============================================================
	VAX 6000-510 SERVER + DS5900 CLIENT
============================================================
                6510    5900    AVERAGE
                %CPU    %CPU    SERVICE
USERS   TPUT    BUSY    BUSY    TIME
-----   ----    -----   -----   ------------
20	2605	 8	24	 2.2
45	5644	27	53	 3.8   <-- Today
50	6197	30	58	 4.0  
60	7290	37	69	 5.1  
71	8497	43	80	 6.5  
============================================================
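
	One way to read the two VAX 6000-510 tables is as server CPU cost per
	unit of throughput (6510 %CPU busy divided by TPUT): roughly 0.019 in
	time-share mode versus roughly 0.005 in client/server mode, i.e. the
	DS5900 client has absorbed most of the per-transaction work.  A short
	sketch of that arithmetic, using the rows from the tables above:

# Server CPU cost per unit of throughput, taken from the VAX 6000-510 tables.
# Each row is (users, TPUT, 6510 %CPU busy).
timeshare     = [(14, 1787, 33), (22, 2763, 52), (30, 3629, 70),
                 (36, 4175, 80), (42, 4675, 93)]
client_server = [(20, 2605,  8), (45, 5644, 27), (50, 6197, 30),
                 (60, 7290, 37), (71, 8497, 43)]

def cpu_per_tput(rows):
    """%CPU on the 6510 consumed per unit of throughput."""
    return [busy / tput for _, tput, busy in rows]

print("time-share   :", [round(x, 4) for x in cpu_per_tput(timeshare)])
print("client/server:", [round(x, 4) for x in cpu_per_tput(client_server)])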