T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
1518.1 | same thing here | ALFAM7::GOSEJACOB | | Thu Feb 20 1997 05:46 | 9 |
| re .0
Hmmm, I just tried DB_BLOCK_SIZE=8k, DB_FILE_MULTIBLOCK_READ_COUNT=32 and
restarted the database (7.3.2.2). Similar result here: when I do a show
parameter db_file_multiblock_read_count I get 16 (16 * 8k == 128k).
On the other hand: don't worry too much about this limit. The Unix
kernel itself will not do anything larger than 64k I/Os.
Martin
|
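The cap .1 reports can be described as a simple clamp: the effective DB_FILE_MULTIBLOCK_READ_COUNT appears to be limited so that one multiblock read never exceeds a fixed maximum (128k in that test, i.e. a requested 32 becomes 16 with 8k blocks). A minimal C sketch of that arithmetic, with illustrative constant and function names that are not Oracle's own:

```c
#include <stdio.h>

/* Illustrative ceiling; 128k matches the behaviour reported in .1
 * (requested 32 x 8k blocks, got 16 x 8k == 128k). */
#define MAX_MULTIBLOCK_IO  (128 * 1024)

/* Clamp the requested multiblock read count so that a single
 * multiblock read never exceeds MAX_MULTIBLOCK_IO bytes. */
static int effective_read_count(int requested, int db_block_size)
{
    int cap = MAX_MULTIBLOCK_IO / db_block_size;
    return requested < cap ? requested : cap;
}

int main(void)
{
    /* The case from .0/.1: 8k blocks, 32 blocks requested per read. */
    printf("effective count = %d\n",
           effective_read_count(32, 8 * 1024));   /* prints 16 */
    return 0;
}
```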
1518.2 | What if ? | BRADAN::ELECTRON | | Thu Feb 20 1997 08:53 | 6 |
| Hi Martin,
What if I am using raw devices and Oracle is reading directly?
Regards
Electron
|
1518.3 | nope, doesn't make a difference | ALFAM7::GOSEJACOB | | Thu Feb 20 1997 09:11 | 7 |
| re .2
There are various layers in the kernel (LSM for example, and I don't have
a clue which others) that limit I/Os to 64k. Which means that in this
respect it doesn't really make any difference whether you put your Oracle
data files on raw devices or on a file system.
Martin
|
1518.4 | higher limit would help | WONDER::REILLY | Sean Reilly, Alpha Servers, DTN 223-4375 | Fri Feb 21 1997 07:21 | 14 |
|
Raw device reads can go over 64k, so if you are not using LSM, etc.,
there can be benefits.
In fact the TPC-D audit used, if my information is correct, 128K
multiblock reads only because it was an Oracle limit. They'd have
gone higher if possible.
On most I/O subsystems I'll agree that bandwidth is approaching
saturation at around 64K transfer sizes, but there are boosts from
going higher. This may not be important for a typical customer
application, but it helps things like benchmarks.
- Sean
|
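One rough way to check the saturation point .4 mentions is to time sequential reads at increasing transfer sizes and watch where the throughput curve flattens. A minimal C sketch under those assumptions; the device path, transfer sizes and read count are illustrative, and raw devices typically want a page-aligned buffer (hence valloc):

```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

/* Time 'count' sequential reads of 'xfer' bytes from 'fd', return MB/s. */
static double read_rate(int fd, size_t xfer, int count)
{
    char *buf;
    struct timeval t0, t1;
    double secs;
    int i;

    /* valloc() gives a page-aligned buffer, which raw devices may require. */
    buf = valloc(xfer);
    if (buf == NULL) {
        perror("valloc");
        exit(1);
    }

    lseek(fd, 0, SEEK_SET);
    gettimeofday(&t0, NULL);
    for (i = 0; i < count; i++) {
        if (read(fd, buf, xfer) < 0) {
            perror("read");
            exit(1);
        }
    }
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    free(buf);
    return (double)xfer * count / (1024.0 * 1024.0) / secs;
}

int main(int argc, char **argv)
{
    /* argv[1]: a raw device or large file, e.g. /dev/rrz8c (illustrative). */
    size_t sizes[] = { 32 * 1024, 64 * 1024, 128 * 1024, 256 * 1024 };
    int fd, i;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <device-or-file>\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    for (i = 0; i < 4; i++)
        printf("%6luk: %.1f MB/s\n",
               (unsigned long)sizes[i] / 1024,
               read_rate(fd, sizes[i], 256));
    close(fd);
    return 0;
}
```

If the 128k and 256k rows come out no faster than the 64k row, the larger transfers are not buying anything on that path.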
1518.5 | We also used LSM | BRADAN::ELECTRON | | Tue Feb 25 1997 04:50 | 6 |
| Hi,
We are using raw devices and LSM. We patched VOLINFO.MAX_IO to get LSM
to write greater than 64k.
Thanks
|
1518.6 | | EPS::VANDENHEUVEL | Hein | Tue Mar 25 1997 10:42 | 27 |
|
[catching up with note file. sorry for replying so late]
> We are using raw devices and LSM. We patched VOLINFO.MAX_IO to get LSM
> to write greater than 64k.
Did you get a chance to quantify the results from this?
I always thought it was sorta nice to have LSM break up a single
large IO from the application (Oracle) into multiple concurrently
executing parallel IOs. This would only be useful, of course, where
the hardware is set up to allow for parallel activity as well.
Specifically, one would want the stripe set members to be spread
over multiple IO adapters (e.g. KZPSA), and for really high
throughputs over multiple busses (PCI).
(I know of a SORT benchmark reading at 200Mb/sec using what admittedly
seems a slightly excessive IO subsystem: 360 disks on 168 SCSI busses
behind 56 HSZs connected to 28 KZPSAs in 8 PCI busses. I can't find my
notes just now to verify whether they used 32Kb or 16Kb stripe sizes,
but for sure it was less than or equal to 32Kb.)
thanks,
Hein.
|
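To illustrate the split-up .6 describes: on a striped LSM volume a single large application read spans several stripe members, so the pieces can be issued to different disks (and ideally different adapters and busses) concurrently. A minimal C sketch of the mapping, assuming a plain round-robin stripe layout and illustrative numbers (32k stripe unit, 4 members, one 128k read):

```c
#include <stdio.h>

#define STRIPE_UNIT  (32 * 1024)   /* illustrative stripe unit size  */
#define N_MEMBERS    4             /* illustrative number of members */

/* Map one large logical read onto per-member (offset, length) pieces,
 * assuming a plain round-robin stripe layout. Each piece could be
 * issued as a separate, concurrent I/O to its member disk. */
static void split_read(long offset, long length)
{
    while (length > 0) {
        long unit       = offset / STRIPE_UNIT;     /* stripe unit index  */
        int  member     = (int)(unit % N_MEMBERS);  /* which member disk  */
        long in_unit    = offset % STRIPE_UNIT;     /* offset inside unit */
        long chunk      = STRIPE_UNIT - in_unit;
        long member_off = (unit / N_MEMBERS) * STRIPE_UNIT + in_unit;

        if (chunk > length)
            chunk = length;
        printf("member %d: offset %7ld, length %6ld\n",
               member, member_off, chunk);
        offset += chunk;
        length -= chunk;
    }
}

int main(void)
{
    /* A single 128k read at volume offset 0 becomes four 32k reads,
     * one per member, which can proceed in parallel. */
    split_read(0, 128 * 1024);
    return 0;
}
```

With the members behind different adapters, those four 32k pieces overlap in time, which is exactly the benefit of letting the volume manager do the splitting.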