T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
248.1 | Rather Vague Question... | XDELTA::HOFFMAN | Steve, OpenVMS Engineering | Wed Feb 26 1997 08:56 | 23 |
|
Your customer's question is rather vague.
One can overlap most I/O activities under OpenVMS without encountering
corruptions -- though you do not indicate the target device for the
sys$qio IO$_WRITExBLK call. (Are the outstanding read and write
requests on the same file?) Provided the requests do not share IOSBs,
buffers, or other storage associated with a particular I/O operation,
almost any I/O activities can be safely overlapped.
Be aware that there are specific steps for the synchronization of
asynchronous activity (the sys$synch call is one of the recommended
ways to synchronize completion), and there are specific requirements
around the allocation and specification of an event flag (with a call
to lib$get_ef) and around the storage allocation and specification of
an IOSB structure (IOSB storage must be non-volatile and non-shared
over the life of the asynchronous call). This information is covered
in the Programming Concepts manual, while additional synchronization
needed in shared memory is covered in the VAX and Alpha Architecture
Reference Manuals.
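In sketch form, the pattern looks something like the following -- not
production code; the file name is invented, error handling is minimal,
and I'm using the RMS `user file open' trick to acquire a bare channel
for the virtual-block $QIO:

/* Minimal sketch of an overlapped virtual-block write with explicit
 * completion synchronization.  The file name is hypothetical, and
 * error handling is reduced to bare status checks. */
#include <iodef.h>
#include <lib$routines.h>
#include <rms.h>
#include <ssdef.h>
#include <starlet.h>
#include <string.h>

typedef struct { unsigned short status, count; unsigned int dev; } IOSB;

int main(void)
{
    static char fname[] = "WORK:[DATA]EXAMPLE.DAT";  /* hypothetical */
    static char buf[512];          /* one virtual block */
    struct FAB fab = cc$rms_fab;
    unsigned short chan;
    unsigned int efn;
    IOSB iosb;                     /* must stay valid until completion */
    int status;

    /* "User file open" hands back a bare channel in FAB$L_STV,
     * suitable for sys$qio virtual-block functions. */
    fab.fab$l_fna = fname;
    fab.fab$b_fns = strlen(fname);
    fab.fab$l_fop = FAB$M_UFO;
    status = sys$open(&fab);
    if (!(status & 1)) return status;
    chan = fab.fab$l_stv;

    /* Allocate an event flag rather than hard-coding one. */
    status = lib$get_ef(&efn);
    if (!(status & 1)) return status;

    /* Queue the write; sys$qio returns as soon as the request is
     * queued, not when it completes. */
    status = sys$qio(efn, chan, IO$_WRITEVBLK, &iosb, 0, 0,
                     buf, sizeof buf, 1 /* starting VBN */, 0, 0, 0);
    if (!(status & 1)) return status;

    /* ...other work can overlap here, provided it does not touch
     * buf or iosb... */

    /* sys$synch waits on the event flag and also checks for a
     * non-zero IOSB status, guarding against shared-flag wakeups. */
    status = sys$synch(efn, &iosb);
    if (!(status & 1)) return status;
    if (!(iosb.status & 1)) return iosb.status;  /* device-level status */

    lib$free_ef(&efn);
    return sys$dassgn(chan);
}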
The RMS notes conference is located at VMSZOO::RMS_OPENVMS.
|
248.2 | need more detail on problem being solved | AUSS::GARSON | DECcharity Program Office | Thu Feb 27 1997 01:10 | 9 |
| re .0
Are we talking disk or terminal?
If a terminal, there are some restrictions on concurrent I/O operations
to the same terminal.
I would not recommend doing $QIO to a file also being accessed via RMS.
I doubt that buffering is properly synchronised in that case.
|
248.3 | To The Same Disk & Data | MSAM00::FOOSZEMUN | | Thu Feb 27 1997 06:24 | 8 |
| Hi,
Thanks for the information. Anyway, to make the question clearer,
the operations are accessing the same disk, and the same block of data as well.
Thanks,
SMF
|
248.4 | I/O Reordering -- IOSB Indicates Completion... | XDELTA::HOFFMAN | Steve, OpenVMS Engineering | Thu Feb 27 1997 10:09 | 31 |
|
: Thanks for the information. Anyway, to make the question clearer,
: the operations are accessing the same disk, and the same block of data as well.
In other words, the customer wants to know if the I/O subsystem will
reorder outstanding sys$qio requests to the XQP and to a particular
block on a disk device...
My *guess* is that the customer isn't looking at the (real) problem.
(i.e., "why does the write take so long to complete?")
I would encourage the customer to determine why the write operation
is slow, and whether this slowness can be resolved. (Large numbers
of small write operations can quite conceivably `flood' a disk drive
spindle or various other components of the I/O path, such as the I/O
controller; they can also exhaust available process quotas, or can
potentially deplete available pool.) Find the contention...
The I/O subsystem does potentially reorder I/O requests, and I'd
definitely prefer to have the completion status from the write back
before I issued a read -- well, before I issued a read that expected
to see the data updated by the write...
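In sketch form -- assuming a channel and event flag set up as in the
example back in .1 -- enforcing that ordering can be as simple as using
the wait-form service and checking the write's IOSB before queueing the
dependent read:

#include <iodef.h>
#include <starlet.h>

typedef struct { unsigned short status, count; unsigned int dev; } IOSB;

/* Write a block, then read it back only after the write's IOSB
 * confirms completion.  chan and efn are set up as in the .1 sketch. */
static int write_then_read(unsigned short chan, unsigned int efn,
                           char *buf, unsigned int len, unsigned int vbn)
{
    IOSB iosb;
    int status;

    /* sys$qiow does not return until the request completes, so the
     * final device status is already in iosb on return. */
    status = sys$qiow(efn, chan, IO$_WRITEVBLK, &iosb, 0, 0,
                      buf, len, vbn, 0, 0, 0);
    if (!(status & 1)) return status;
    if (!(iosb.status & 1)) return iosb.status;

    /* Only now is the read guaranteed to see the updated block;
     * it cannot be reordered ahead of a write that has completed. */
    status = sys$qiow(efn, chan, IO$_READVBLK, &iosb, 0, 0,
                      buf, len, vbn, 0, 0, 0);
    if (!(status & 1)) return status;
    return iosb.status;
}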
If your customer is doing block-level access to data and needs more
speed -- your customer will want to seriously consider mapping the
file as a section file. (The section file `caches' the disk file data
in memory -- faster, but caching has its own tradeoffs, and has its
own requirements around periodic cache flushes to disk...) I've posted
C code that demonstrates mapping and accessing a section over in the
DEC C notes conference.
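For reference, the core of the mapping looks something like this -- a
minimal sketch with a made-up file name and no real error handling; see
the DEC C conference posting for a fuller example:

#include <rms.h>
#include <secdef.h>
#include <ssdef.h>
#include <starlet.h>
#include <string.h>

int main(void)
{
    static char fname[] = "WORK:[DATA]EXAMPLE.DAT";  /* hypothetical */
    struct FAB fab = cc$rms_fab;
    unsigned int inadr[2] = { 0, 0 };  /* with SEC$M_EXPREG, selects P0 */
    unsigned int retadr[2];
    char *data;
    int status;

    /* RMS "user file open" returns a bare channel in FAB$L_STV. */
    fab.fab$l_fna = fname;
    fab.fab$b_fns = strlen(fname);
    fab.fab$l_fop = FAB$M_UFO;
    status = sys$open(&fab);
    if (!(status & 1)) return status;

    /* Map the file, writable, at the end of P0 space; a pagcnt of
     * zero asks for the entire file. */
    status = sys$crmpsc(inadr, retadr, 0,
                        SEC$M_EXPREG | SEC$M_WRT,
                        0, 0, 0,             /* gsdnam, ident, relpag */
                        fab.fab$l_stv,       /* channel from UFO open */
                        0, 0, 0, 0);         /* pagcnt, vbn, prot, pfc */
    if (!(status & 1)) return status;

    /* retadr[0]..retadr[1] now address the file data directly;
     * ordinary loads and stores replace $QIO reads and writes. */
    data = (char *) retadr[0];
    data[0] ^= 0;   /* touch the first byte of the file */

    /* Modified pages reach disk when the section is unmapped or on
     * an explicit sys$updsec; flush explicitly if durability matters. */
    return SS$_NORMAL;
}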
|