| > What operating system(s) is being used, what version of ACMSxp and
> what database(s) are being used?
Digital Unix V3.2D-1, ACMSxp V2.0, Oracle V7.1.6.2.0, DEC COBOL V2.3
> When you collect the data from the three different databases is it a
> read only query for each database?
Yes, read-only
> When the data is merged is the result of the merge stored in a database
> and displayed back to the user? Or is the merge just
> stored in a database and a success message sent back to the user.
The result is stored in a database and when the user is ready, he will start
a task to read the data.
> How much data is retrieved from each database?
Varying from 10 to 2000 records.
Problem description (Dutch PTT, HKP Project):
Most of the time the user will perform the same type of actions: (1) enter a customer
number, (2) some other actions, and (3) send a request for customer data. The time
between (1) and (3) is about 5 minutes. The time to collect the data (from (3))
is about 2 minutes. (Not acceptable to let the user wait that long!)
To collect the data, we need to access three different databases (applications).
Database 1 is local (Oracle); databases 2 and 3 are on other systems
(Tandem and Unisys), connected via DCE/RPC calls.
The idea is:
step 1: user calls a task and passes a customer number
step 2: called task submits another task, control returns to user
step 3: submitted task collects and merges the data (in parallel???)
step 4: when the user needs the data (+/- 5 min after step 1), start a task,
check whether step 3 is finished, get the merged data, and send it to the user
Robert
|
>>>To collect the data, we need to access three different databases
>>>(applications). Database 1 is local (Oracle); databases 2 and 3 are
>>>on other systems (Tandem and Unisys), connected via DCE/RPC calls.
>>>The idea is:
>>>step 1: user calls a task and passes a customer number
>>>step 2: called task submits another task, control returns to user
>>>step 3: submitted task collects and merges the data (in parallel???)
>>>step 4: when the user needs the data (+/- 5 min after step 1), start a task,
>>>check whether step 3 is finished, get the merged data, and send it to the user
Here is what I would do; other members of the team may have suggestions
as well.
Steps 1 and 2 I would not modify.
In step 3 the submitted task would call a processing procedure that
does the following:
1. Create a thread to call a routine that collects the data out
of the Tandem database.
2. Create a thread to call a routine that collects the data out
of the Unisys database.
3. Call the local Oracle database and get the required data from it.
4. Use pthread_join to wait for the two threads created in steps 1
and 2 to complete.
5. Merge the data and store it keyed on the customer id.
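The five steps above could be sketched in C with pthreads roughly as follows. Everything database-specific here is a hypothetical placeholder: the collect_* routines stand in for the real DCE/RPC calls, and the record counts stand in for the actual fetched data.

```c
#include <pthread.h>

/* Hypothetical per-source result -- the real code would carry records. */
struct collect_result {
    int count;              /* number of records fetched */
};

/* Placeholder collectors; the real routines would make the DCE/RPC
 * calls to the Tandem and Unisys applications. */
static void *collect_tandem(void *arg)
{
    ((struct collect_result *)arg)->count = 120;  /* pretend: 120 records */
    return NULL;
}

static void *collect_unisys(void *arg)
{
    ((struct collect_result *)arg)->count = 75;   /* pretend: 75 records */
    return NULL;
}

/* Steps 1-5: start two threads, query Oracle locally in this thread,
 * join both threads, then merge (here: just sum the record counts). */
int collect_and_merge(int *total)
{
    pthread_t t1, t2;
    struct collect_result tandem, unisys;

    if (pthread_create(&t1, NULL, collect_tandem, &tandem) != 0)
        return -1;
    if (pthread_create(&t2, NULL, collect_unisys, &unisys) != 0)
        return -1;

    int oracle_count = 42;      /* placeholder for the local Oracle query */

    pthread_join(t1, NULL);     /* step 4: wait for the two threads */
    pthread_join(t2, NULL);

    *total = tandem.count + unisys.count + oracle_count;  /* step 5 */
    return 0;
}
```

Because the local Oracle query runs in the calling thread while the two remote fetches proceed, the elapsed time is roughly that of the slowest of the three sources rather than their sum.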
Notification that the job is done could be handled in several
different ways; notification would be keyed off the customer id.
To retrieve the data, the user would call a task
that calls a processing procedure. The customer id that is passed in
would be used to find the merged data. If it is not found, the
processing procedure could wait a while and try again. Or you could
fail immediately, return to the task, and let the retry be
handled by the task. Assuming you raised a non-transaction exception in the
processing procedure and the task was restartable, the call to the processing
procedure would be retried up to the task retry limit.
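The wait-and-retry variant inside the processing procedure could look something like the sketch below; find_merged_data, the attempt count, and the pause length are all hypothetical stand-ins for the real database lookup and tuning values.

```c
#include <unistd.h>

/* Hypothetical lookup: returns 1 when the merged data for this
 * customer id is present in the database, 0 when it is not there yet.
 * Here it pretends the data appears on the third attempt. */
static int find_merged_data(int customer_id, int attempt)
{
    (void)customer_id;
    return attempt >= 2;
}

/* Poll a few times with a short pause between attempts, then give up
 * and let the task-level retry (or the raised exception) take over. */
int wait_for_merged_data(int customer_id, int max_attempts)
{
    for (int attempt = 0; attempt < max_attempts; attempt++) {
        if (find_merged_data(customer_id, attempt))
            return 0;       /* found */
        sleep(1);           /* wait a while and try again */
    }
    return -1;              /* not found: fail back to the task */
}
```

The trade-off between the two approaches is where the polling cost lives: in-procedure retries hold the processing procedure, while failing immediately and relying on a restartable task moves the retry loop into the task layer.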
Another method that could be used is as follows:
If the data to be displayed back to the user is not very large, then
perhaps the data itself could be stored in a durable record queue
instead of a database.
You could have the processing procedure call a task that enqueues
a record onto a data queue. The key associated with the record would
be the customer id. The user would then call a task that
dequeues the data from the record queue.
Another method that could be used is as follows:
If the data to be displayed back to the user is not very large, then
the data could first be merged and stored in the database.
Then a task called from the processing procedure
would also store the data in a NON-DURABLE record queue. The user's
task would try to dequeue the data from the record queue based on the
customer id and display it back to the user; this would be very
fast. Since a non-durable queue is being used, if the machine should
be recycled before the user data is obtained, it would be lost from
the non-durable record queue. If the task cannot find the data
on the non-durable record queue, then the database could be queried
for the data.
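The fast-path/fallback logic of that last method could be sketched as below. Both queue_dequeue and db_lookup are hypothetical stand-ins for the real ACMSxp record-queue dequeue and the Oracle query; here the queue lookup pretends the entry was lost (e.g. after a machine recycle) so the fallback path is exercised.

```c
#include <string.h>

/* Hypothetical non-durable-queue dequeue: copies the merged data into
 * buf and returns 0, or returns -1 when the entry is gone. Here it
 * pretends the machine was recycled and the entry was lost. */
static int queue_dequeue(int customer_id, char *buf, size_t len)
{
    (void)customer_id; (void)buf; (void)len;
    return -1;
}

/* Hypothetical database query for the same merged data. */
static int db_lookup(int customer_id, char *buf, size_t len)
{
    (void)customer_id;
    strncpy(buf, "merged customer data", len);
    return 0;
}

/* Try the fast non-durable queue first; fall back to the database
 * if the queue entry was lost. */
int get_merged_data(int customer_id, char *buf, size_t len)
{
    if (queue_dequeue(customer_id, buf, len) == 0)
        return 0;                             /* fast path */
    return db_lookup(customer_id, buf, len);  /* fallback */
}
```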
It might be worth keeping the merged data in the database for the
rest of the day or so in case the customer calls back. You could have a task
that runs in the request queue whose job is to clean up merged customer
data records that are more than x hours or days old. When you submit
the task, you could set it up so that it is repeated at the interval
that you desire.
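One pass of that cleanup task could be sketched as follows; the merged_record struct and the in-memory delete are hypothetical simplifications of the real database records and DELETE statements, and the resubmission at a fixed interval would be handled by the task queuing, not by this routine.

```c
#include <time.h>

/* Hypothetical merged-data record: only the fields the cleanup needs. */
struct merged_record {
    int    customer_id;
    time_t created;     /* when the merge was stored */
    int    deleted;     /* set by the cleanup pass */
};

/* One pass of the periodic cleanup: mark records older than
 * max_age_secs as deleted and return how many were removed.
 * The real task would issue database deletes instead. */
int cleanup_old_records(struct merged_record *recs, int n,
                        time_t now, long max_age_secs)
{
    int removed = 0;
    for (int i = 0; i < n; i++) {
        if (!recs[i].deleted && now - recs[i].created > max_age_secs) {
            recs[i].deleted = 1;    /* stand-in for a DB delete */
            removed++;
        }
    }
    return removed;
}
```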
Hope this helps
Rich.
PS: If you decide to go with the multithreaded programming approach
in the processing server, we can help you with that if help is needed.
|