T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
5043.1 | | M5::LWILCOX | Chocolate in January!! | Tue Feb 18 1997 14:48 | 6 |
| Jerry, this sounds ever so vaguely familiar from many years ago.
Does it seem to make a difference if you define the buffers logical to
something larger?
I might be thinking of an entirely different problem, but it rang a bell.
|
5043.2 | Disoptimal DB design for backup speed | NOVA::DICKSON | | Tue Feb 18 1997 15:59 | 7 |
| How come there are so many small storage areas? There is processing
overhead in BACKUP involved with having large numbers of storage areas
being backed up at once. It takes a lot of memory too. Each storage
area gets its own set of buffers and its own thread with stack space.
How many tapes are being written in parallel? What type of tape drive
is being used?
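Just to put a rough number on that memory cost, here is a minimal Python
sketch of how the footprint could scale with the number of areas. The
backup_memory_estimate helper is hypothetical, and the 32 KB buffer size
and 64 KB per-thread stack are illustrative assumptions, not measured
figures:

    # Rough estimate of BACKUP's address-space demand when every storage
    # area gets its own buffer set plus a thread with stack space.
    # Buffer and stack sizes below are assumptions for illustration only.
    def backup_memory_estimate(areas, buffers_per_area=1,
                               buffer_bytes=32 * 1024,
                               stack_bytes=64 * 1024):
        per_area = buffers_per_area * buffer_bytes + stack_bytes
        return areas * per_area

    mb = backup_memory_estimate(2000) / (1024.0 * 1024.0)
    print("%.1f MB" % mb)   # 187.5 MB for 2000 areas with one buffer each

Even at one buffer per area the numbers get large quickly, which is why
the question about the number of areas matters.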
|
5043.3 | how much overhead | M5::JHAYTER | | Tue Feb 18 1997 16:46 | 23 |
|
Liz, upping buffers made the problem worse.
> How come there are so many small storage areas?
I think the customer was just "testing". I didn't feel like creating
a 10 gig db.
>There is processing
> overhead in BACKUP involved with having large numbers of storage areas
> being backed up at once. It takes a lot of memory too. Each storage
> area gets its own set of buffers and its own thread with stack space.
That explains some of the additional pgfile needed and some of the
time. But the overall increase in cpu and page faulting still seems
a bit out of line. 6x for cpu, 300x for pgflts, at least in my test.
> How many tapes are being written in parallel? What type of tape drive
> is being used?
On the customer side I don't know; I'll check. In my testing I was going
to disk.
|
5043.4 | | DUCATI::LASTOVICA | Is it possible to be totally partial? | Tue Feb 18 1997 16:53 | 6 |
| I think that you can assume that the increase in CPU time is
probably related to the increase in page faults. I suspect that
RMU probably scales the number of 'somethings' based on the
number of storage areas and/or the size of each. You might want
to try doubling the process's working set limits and see if that
helps.
|
5043.5 | | NOVA::DICKSON | | Wed Feb 19 1997 09:24 | 23 |
| Performance degradation stops being linear and climbs steeply
once key resources approach saturation. So it is entirely reasonable
for a 2x increase in load to result in a 5x degradation in performance
if you have pushed it over the knee in the curve.
A formula useful for approximating performance is

    Tresponse = Tbase * (1 + u/(1-u))

where  Tresponse  is the time to complete an operation under load,
       Tbase      is the time to complete the operation in the
                  absence of heavy load, and
       u          is the fraction of the resource (CPU, network, or
                  disk bandwidth, for example) being used.
You can see that as utilization approaches 1.0, Tresponse goes to
infinity.
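To make that concrete, here is a minimal Python sketch of the formula
above; the function name t_response, the Tbase of 10 seconds, and the
utilization values are just my examples:

    # Approximate response time under load: Tresponse = Tbase * (1 + u/(1-u))
    def t_response(t_base, u):
        if not 0.0 <= u < 1.0:
            raise ValueError("utilization must be in [0, 1)")
        return t_base * (1.0 + u / (1.0 - u))

    # With Tbase = 10 seconds, watch what happens as utilization climbs.
    for u in (0.1, 0.5, 0.8, 0.9, 0.99):
        print("u=%.2f  Tresponse=%6.1f s" % (u, t_response(10.0, u)))

At 50% utilization the operation already takes twice as long as it does
unloaded; at 99% it takes a hundred times as long.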
2000 storage areas with even one disk buffer (32kB) each need 64
Megabytes of address space. It is not surprising that you are
getting a lot of page faults. BACKUP attempts to read from all 2000 areas
at once. (This is planned to change...)
|
5043.6 | Queueing Theory 101 | NOVA::DICKSON | | Wed Feb 19 1997 14:50 | 11 |
| For anyone trying to figure out how that formula could be correct, it
works on probabilities. If the resource you want to use is idle, then
your request will be processed right away, in time Tbase. If the
resource you want to use is busy half the time on average, then when
your request comes along you have a 50-50 chance of having to wait for
a previous request to complete. The busier the resource, the greater
the probability you will have to wait in a queue somewhere, and
therefore the longer the queue as everybody else waits too.
This formula takes all that into account, and is very useful for
back-of-the-envelope calculations about performance scalability.
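Plugging a few utilizations into the formula shows why. Since
1 + u/(1-u) simplifies to 1/(1-u), the slowdown factor relative to Tbase
depends only on u; this little Python loop (the utilization values are
only examples) prints it:

    # Slowdown factor (Tresponse / Tbase) for a single saturating resource.
    # 1 + u/(1-u) == 1/(1-u), so the factor depends only on utilization u.
    for u in (0.25, 0.50, 0.75, 0.90, 0.95):
        print("u=%.2f  slowdown=%4.1fx" % (u, 1.0 / (1.0 - u)))

That is the knee in the curve mentioned in .5: below about 50% busy the
penalty is modest, and past 90% it climbs very steeply.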
|