| RAID-5, in all but the most ideal conditions, has generally
poor write performance. When you hobble it by turning off
the write cache, that performance is likely to get even
worse. And, in the case of your RAIDs, the large chunk
sizes make large-write optimizations even less likely.
The first thing I would do is turn on the write cache for
the logical unit having the restore done. That should have
some positive effect. After that it gets harder. If this
kind of write load is going to be common, the customer needs
to think again about using RAID-5.
Typically, RAID-5 writes require reading the old copy of
the data and the corresponding XOR. From this, the new
XOR is calculated and then the new data and XOR can be
written. This is called read-modify-write and requires
four I/Os for every write. Implementations that do work
to close the write-hole may do even more I/O to keep track
of the state, though those extra I/Os will tend to be small.
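For the curious, the XOR arithmetic is simple enough to show
in a few lines of Python. This is only a sketch with made-up
block contents; the point is the I/O count, not the code:

    # Read-modify-write parity update.  The controller must read
    # two blocks and write two blocks for every host write.
    def rmw_parity_update(old_data, old_parity, new_data):
        # Reads: old data block and old parity block      (2 I/Os)
        # New parity = old parity XOR old data XOR new data;
        # the new data and new parity then go back to disk (2 I/Os).
        return bytes(p ^ od ^ nd for p, od, nd
                     in zip(old_parity, old_data, new_data))

    old_data   = bytes([0x0F] * 512)   # made-up 512-byte blocks
    old_parity = bytes([0xF0] * 512)
    new_data   = bytes([0xAA] * 512)
    new_parity = rmw_parity_update(old_data, old_parity, new_data)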
As a write widens to cover more members of the array,
approaching the stripe width, the write algorithm changes:
it becomes more efficient to read the unaffected data and
calculate the XOR from that. It has been too long for me to
remember what this particular write algorithm is called.
At the point where the write size equals the stripe width,
no data has to be read at all, since the XOR can be calculated
entirely from the data being written. Here a RAID-5 looks
remarkably like RAID-3, and the only extra I/O needed
(aside from the state-management ones) is the one to write
the XOR data.
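(If memory serves, the middle strategy is usually called a
reconstruct write.) A back-of-the-envelope Python sketch of
how the member-I/O count changes with write width; the
function and the 6-member example are mine, purely
illustrative, and state-management I/Os are ignored:

    # Member I/Os for a write touching w of the n-1 data chunks
    # in one stripe of an n-member RAID-5.
    def raid5_write_ios(w, n):
        if w == n - 1:                     # full-stripe write:
            return w + 1                   # just write data + XOR
        rmw = 2 * w + 2                    # read/write data + parity
        reconstruct = (n - 1 - w) + w + 1  # read the rest, write new
        return min(rmw, reconstruct)

    for w in range(1, 6):                  # 6 members, 5 data chunks
        print(w, raid5_write_ios(w, 6))    # 4, 6, 6, 6, 6

The crossover from read-modify-write to reconstruct write is
easier to see in a wider array, which is why the example uses
six members rather than your four.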
Recalling that your chunk size is 40-something sectors across
four members, your stripe width is around 800 KB. Since the
HSZ has a 64 KB I/O size limit, you'll always be doing a
read-modify-write. With the write cache enabled, the HSZ would
have the chance to collect enough data to do a full-width
write.
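The arithmetic, assuming 512-byte sectors (the chunk value
below is a placeholder, so plug in the real HSZ setting):

    SECTOR_BYTES  = 512
    chunk_sectors = 409                  # placeholder chunk size
    members       = 4                    # one chunk per stripe is XOR
    stripe_bytes  = chunk_sectors * SECTOR_BYTES * (members - 1)
    io_limit      = 64 * 1024            # HSZ per-I/O size limit
    if io_limit < stripe_bytes:          # a single host I/O can never
        print("read-modify-write")       # cover a full stripe
    else:
        print("full-stripe write possible")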
|
| Hi, thanks for your quick reply.
I have two more questions: 1) what happens if vrestore
continuously fills the write-back cache, and 2) what happens
when vrestore writes to the RAID sets, given that it
continuously adds new data and does not modify any data?
Thanks for any help.
[Posted by WWW Notes gateway]
|
| 1. The cache will become full, at which time the controller
will have to flush it to allow more data into the cache.
2. One hopes that the data gets written to disk...
The typical write load served by a cache such as the one in the
HSZ is one where many of the same blocks are being written over
and over and the cache absorbs all the writes so that it only
needs to write infrequently to disk. A vrestore isn't going to
present this kind of load to a device.
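To make that concrete, here is a toy write-back cache in
Python (entirely hypothetical) showing why rewrites of the
same blocks are absorbed while a restore's stream of new
blocks is not:

    cache = {}                      # lba -> data, the dirty blocks
    disk_writes = 0

    def write(lba, data):
        cache[lba] = data           # absorbed; no disk I/O yet

    def flush():
        global disk_writes
        disk_writes += len(cache)   # one disk write per dirty block
        cache.clear()

    for _ in range(100):            # hot blocks rewritten 100 times
        write(0, b"x"); write(1, b"y")
    flush()
    print(disk_writes)              # 2, not 200

    for lba in range(100):          # restore: every block is new
        write(lba, b"z")
    flush()
    print(disk_writes)              # 102; nothing was absorbed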
But, the other use of the cache in RAID-5 is to accept a lot
of hopefully contiguous data, so that the controller firmware
can collect a lot of little writes into a single large one.
Or, to take advantage of the RAID-3 optimizations, a smaller
set of large writes that span an entire stripe.
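A sketch of that coalescing, again in Python with made-up
sizes:

    # Accumulate sequential 64 KB host writes until a full stripe
    # is present, then issue one full-width (RAID-3 style) write.
    HOST_IO = 64 * 1024
    STRIPE  = 600 * 1024            # placeholder stripe width
    pending = 0
    stripe_writes = 0

    def host_write(nbytes):
        global pending, stripe_writes
        pending += nbytes
        while pending >= STRIPE:    # enough for a full-width write
            stripe_writes += 1
            pending -= STRIPE

    for _ in range(30):             # 30 sequential 64 KB writes
        host_write(HOST_IO)
    print(stripe_writes, pending)   # 3 full stripes, 120 KB pending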
The third use of the cache, also particular to RAID-5, is to
handle all the metadata I/O that is needed to maintain the
array in a safe state. This is I/O that the host never sees,
but it can significantly affect performance if you have to
wait on a relatively slow disk to complete a write.
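I can't say how the HSZ actually does this, but the general
shape of that state tracking looks something like this sketch:

    # Record the intent before touching data or parity, so a crash
    # mid-update can be reconciled from the log.  Each call costs
    # two metadata I/Os the host never sees.
    intent_log = []

    def safe_write(stripe, do_write):
        intent_log.append(("dirty", stripe))  # metadata write 1
        do_write()                            # data + parity writes
        intent_log.append(("clean", stripe))  # metadata write 2

    safe_write(7, lambda: None)               # dummy stripe update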
I don't know that enabling the write-back cache will help
this particular I/O load, but I think it is poor use of
the feature not to use it. I doubt it will hurt, and anything
could help.
|