| >- A separate update process has been having intermittent locking problems
> with this approach, and we are suspicious of this implementation, since
> killing the PID associated with this process appears to allow the update to
> happen. This is curious, since the cursor is defined as "SQLSRV_TABLE_CURSOR/
> SQLSRV_MODE_READ_ONLY".
On what sort of locks does the locking occur? (The "stall messages" screen of
RMU/SHOW STAT will show that.) Probably a table or record lock.
The "read only" cursor mode only indicates that the program is not allowed to
do any updates via that cursor. It has no impact on the sort of locking done by
the database. To get true read only behaviour, you should start a read-only
transaction and commit it once you are done with the query. Otherwise, the SQL
default is to start a read-write transaction. In this mode, all the rows (and
index nodes) your query will touch will be read-locked until the end of the
transaction, preventing any other program from updating those records or index
nodes.
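For example, the transaction control statements are just SQL text that you can
run through dynamic SQL around the query; in C the two strings might simply be
(a sketch only):

    static const char start_txn[] = "SET TRANSACTION READ ONLY";
    static const char end_txn[]   = "COMMIT";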
Also, you can go one step further in improving the performance of your program.
If the same query is executed each time, then you need to prepare it only once.
Make sure all variable information is passed via host variables instead of
literals in the SQL text.
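For example (the table and column names here are invented for illustration),
the statement text you prepare would carry a parameter marker rather than a
hard-coded value:

    /* the actual PART_NO value is supplied at run time via the SQLDA */
    static const char query_sql[] =
        "SELECT PART_DESC FROM PARTS WHERE PART_NO = ?";

rather than embedding the literal itself, as in "... WHERE PART_NO = 1234".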
sqlsrv_associate
sqlsrv_prepare
sqlsrv_allocate_sqlda_data
sqlsrv_declare_cursor        -- SQLSRV_TABLE_CURSOR/SQLSRV_MODE_READ_ONLY
repeat as necessary:
    sqlsrv_execute_immediate     (to start a read-only transaction)
    sqlsrv_open_cursor
    repeat as necessary:
        sqlsrv_fetch_many
        sqlsrv_fetch
    sqlsrv_close_cursor
    sqlsrv_execute_immediate     (to commit the read-only transaction)
sqlsrv_release_statement
sqlsrv_release
If you have multiple statements, you can keep them all prepared the same way.
Just make sure to remember the statement ids, SQLDAs and cursor ids for each
statement.
Finally, since you are (cleverly) using the fetch_many mechanism, make sure to
use as large a buffer as possible. I am not sure what the maximum buffer size
is for 5.1, though.
/albert
|
| Hi,
Thanks so much for the response.
I am in the process of modifying the code to use "set transaction read only"
to guarantee the transaction state. Thanks for making that suggestion.
Although the code deals only with "select" statements (with the exception of
the "set transaction"), the selects are different (e.g. they search for
different values, and even different tables). I'm assuming that I must continue
to prepare these statements as well.
Our most important concern is the connection space. Is there a problem in
establishing a single connection that persists as long as the VAX/VMS node
continues to operate? Will the integrity of the data be compromised, given the
update process mentioned in my base note (e.g. if a row is modified by the
update process while my program is constantly being accessed via UCX, will that
changed row be available to my program)?
Thanks again VERY much Albert for your help,
Patti
|
| >Although the code deals only with "select" statements (with the exception of
>the "set transaction"), the selects are different (e.g. they search for
>different values, and even different tables). I'm assuming that I must
>continue to prepare these statements as well.
As I mentioned, you will get very significant performance improvements if you
can reuse previously prepared statements. I suspect that, even though the
application issues a large number of different SELECT statements, some of them
are used more frequently than others. You may want to add a simple "statement"
cache to your code. The cache would remember a set of SQL statements, along
with their statement id, SQLDA and cursor name. When you are asked by the upper
layers of your code to do a certain query, you first check the cache to see
whether you have already prepared the same statement before, and if so, you can
skip the prepare steps. You would need to define some policy for managing that
cache, so that frequently used statements stay in it longer. I believe I have
some example code that implements a simple cache of that kind. I can mail it to
you if you want.
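In the meantime, the bare bones of such a cache might look something like the C
sketch below. This is only an illustration, not the example code mentioned
above: the cache size, the field sizes, the PARTS query and the simple
least-recently-used policy are all arbitrary, and the stmt_id/cursor fields
merely stand in for whatever SQL/Services context you keep per statement.

    #include <stdio.h>
    #include <string.h>

    #define CACHE_SIZE 8                /* arbitrary size for this sketch   */

    typedef struct {
        char sql[256];                  /* SQL text, used as the cache key  */
        int  stmt_id;                   /* stands in for the statement id   */
        char cursor[32];                /* stands in for the cursor name    */
        long last_used;                 /* for the least-recently-used rule */
        int  in_use;
    } cached_stmt;

    static cached_stmt cache[CACHE_SIZE];
    static long        tick;

    /* Return the cached entry for this SQL text, or NULL if it has never
     * been prepared.  On a miss the caller prepares the statement and then
     * records it with cache_store(). */
    static cached_stmt *cache_lookup(const char *sql)
    {
        int i;
        for (i = 0; i < CACHE_SIZE; i++)
            if (cache[i].in_use && strcmp(cache[i].sql, sql) == 0) {
                cache[i].last_used = ++tick;
                return &cache[i];
            }
        return NULL;
    }

    /* Record a newly prepared statement.  If the cache is full, the least
     * recently used entry is reused (the real code would first release that
     * statement with sqlsrv_release_statement). */
    static cached_stmt *cache_store(const char *sql, int id, const char *cur)
    {
        int i, victim = 0;
        for (i = 0; i < CACHE_SIZE; i++) {
            if (!cache[i].in_use) { victim = i; break; }
            if (cache[i].last_used < cache[victim].last_used)
                victim = i;
        }
        strncpy(cache[victim].sql, sql, sizeof cache[victim].sql - 1);
        cache[victim].sql[sizeof cache[victim].sql - 1] = '\0';
        strncpy(cache[victim].cursor, cur, sizeof cache[victim].cursor - 1);
        cache[victim].cursor[sizeof cache[victim].cursor - 1] = '\0';
        cache[victim].stmt_id   = id;
        cache[victim].last_used = ++tick;
        cache[victim].in_use    = 1;
        return &cache[victim];
    }

    int main(void)
    {
        const char *sql = "SELECT PART_DESC FROM PARTS WHERE PART_NO = ?";

        if (cache_lookup(sql) == NULL)  /* miss: prepare/declare, then store */
            cache_store(sql, 1, "CUR1");

        /* the second lookup is a hit, so the prepare steps can be skipped */
        printf("cached: %s\n", cache_lookup(sql) ? "yes" : "no");
        return 0;
    }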
>Our most important concern is the connection space. Is there a problem in
>establishing a single connection that persists as long as the VAX/VMS node
>continues to operate? Will the integrity of the data be compromised, given
>the update process mentioned in my base note (e.g. if a row is modified by
>the update process while my program is constantly being accessed via UCX,
>will that changed row be available to my program)?
No, that should not be a problem in your case (all you do is read-only
queries). If you were doing read-write transactions, or storing/inserting
records, then I would recommend that you disconnect and reconnect periodically.
Note, however, that the DBA on the server may need to shut down the database
(and/or the SQL/Services server). This would then cause errors later on in your
program as it tries to use the database. You may want to catch those errors
and have your program reconnect when they happen.
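The shape of that recovery might be something like the sketch below; the helper
names here are placeholders (they are not SQL/Services routines), standing for
the real calls that detect the failure, re-associate and re-prepare your cached
statements.

    #include <stdio.h>

    /* Placeholders for the real SQL/Services calls and error test. */
    static int run_query(void)        { return 0; }   /* 0 means success   */
    static int connection_lost(int s) { return s != 0; }
    static int reconnect(void)        { return 0; }   /* re-associate etc. */

    int main(void)
    {
        int status = run_query();
        if (connection_lost(status)) {
            /* the database or server was shut down: reconnect, retry once */
            if (reconnect() == 0)
                status = run_query();
        }
        printf("query %s\n", status == 0 ? "succeeded" : "failed");
        return status;
    }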
One point, though: the read-only transaction lets you read information even
while it is being updated at that same moment. However, in that case your
program will see the data as it was prior to the update, so it may be slightly
stale, like this:
              S   U1     C       S   U2     C       S   U3     C
Updater:      !----------!       !----------!       !----------!
                                     S     R1     C
Reader:                              !------------!
Time     ----------------------------------------------------------------->
"S" and "C" indicate the start and end of a transaction. "U" indicates an
update. "R" denotes the reading of the same info.
In this example, the "reader" only sees the update U1. It does not see the
update U2 (it will on the next cycle). This is because the transaction that
did this update was still active when the reader started its read-only
transaction.
If this is not acceptable, then you can have the reader use READ WRITE
transactions instead of READ ONLY. However, the reader will then have to wait
for the updater to commit, and it may also block other updaters. A bit like
this (where "." denotes a waiting process):
              S   U1     C       S   U2     C       S           U3      C
Updater:      !----------!       !----------!       !.......------------!
                                     S             R2      C
Reader:                              !.......--------------!
Time     ----------------------------------------------------------------->
In this case, however, the reader will always see the most current status of
the shared data.
/albert
|
| Albert,
Can't thank you enough for getting back to me on this one, and I'm very
happy to take you up on the 'prepared/select_stmt_cache' code that you
offered below.
E-mail is: [email protected]
Phone: (508) 493-2828
Thanks once again for all your help!
Patti
--------------------------------------------------------------------------------
>As I mentioned, you will get very significant performance improvements if you
>can reuse previously prepared statements. I suspect that, even though the
>application issues a large number of different SELECT statements, some of them
>are used more frequently than others. You may want to add a simple "statement"
>cache to your code. The cache would remember a set of SQL statements, along
>with their statement id, SQLDA and cursor name. When you are asked by the
>upper layers of your code to do a certain query, you first check the cache to
>see whether you have already prepared the same statement before, and if so,
>you can skip the prepare steps. You would need to define some policy for
>managing that cache, so that frequently used statements stay in it longer. I
>believe I have some example code that implements a simple cache of that kind.
>I can mail it to you if you want.
|