Title: Ask the Storage Architecture Group
Notice: Check out our web page at http://www-starch.shr.dec.com
Moderator: SSAG::TERZA N
Created: Wed Oct 15 1986
Last Modified: Fri Jun 06 1997
Last Successful Update: Fri Jun 06 1997
Number of topics: 6756
Total number of notes: 25276
Hello,

We have a customer who has migrated from a KZPSC (with 32 MB cache) to a KZPSA-HSZ50 (32 MB cache) on an SW500 box. He finds that application performance has suddenly deteriorated. The disk layouts and other settings are the same. What might be the problem?

One thing I found was that he was using wide disks in a BA356 box with the KZPSC. I suppose the HSZ50 does not support wide disks, which might be the problem and why he is getting very bad IO throughput. Secondly, the disks were directly connected to the KZPSC, while in the HSZ50's case the traffic goes through a KZPSA, which is additional overhead.

Are the above two points the only ones, or have I missed something? Please comment on the above; any further suggestions are welcome.

The config is an AS8200 5/300, memory 1 GB, Digital UNIX 3.2d-2.

Warm regards,
Balaji

[Posted by WWW Notes gateway]
T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
6656.1 | GVPROD::MSTEINER | Tue May 06 1997 01:50 | 9 | ||
>> What might be the problem?

Could it be that on the KZPSC the disks were set to write-back, but the customer forgot to apply the same setting on the HSZ units?

PS: You can connect wide drives to the HSZ, but you'll need the 8-bit module in the BA356.

Michel.
6656.2 | SMURF::KNIGHT | Fred Knight | Tue May 06 1997 15:18 | 11 | |
Don't forget that the KZPSC is directly connected to the system's PCI bus. The HSZ must go over a SCSI bus into a SCSI adapter and then onto the PCI bus. The PCI bus is something like 100+ MB/sec, but the wide SCSI bus is at most 20 MB/sec. It has nothing to do with the overhead of the KZPSA; you simply created a bottleneck when you replaced the PCI direct connect with the SCSI interconnect.

Fred
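Fred's bottleneck argument can be sketched numerically. A small Python sketch, using nominal peak figures (roughly 132 MB/s for 32-bit/33 MHz PCI, 20 MB/s for a fast-wide SCSI bus; sustained rates in practice are lower):

```python
# Nominal peak bandwidths in MB/s (illustrative figures, not measured values).
PCI_32_33 = 132.0       # 32-bit, 33 MHz PCI bus (KZPSC attaches here directly)
FAST_WIDE_SCSI = 20.0   # fast-wide SCSI bus between the KZPSA and the HSZ50

def path_bandwidth(*links):
    """An end-to-end IO path is limited by its slowest link."""
    return min(links)

# KZPSC: the RAID controller sits directly on the PCI bus.
kzpsc_path = path_bandwidth(PCI_32_33)

# HSZ50: every transfer must also cross the SCSI bus via the KZPSA.
hsz50_path = path_bandwidth(PCI_32_33, FAST_WIDE_SCSI)

print(kzpsc_path)  # 132.0
print(hsz50_path)  # 20.0
```

On these numbers the HSZ50 path tops out at roughly a sixth of the direct PCI attach, regardless of how fast the controller itself is.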
6656.3 | EPS::VANDENHEUVEL | Hein | Thu May 08 1997 10:46 | 20 | |
Well, the HSZ will of course introduce additional latency when the data has to come from, or go to, the disk. Verifying the write-back setting is of course a good suggestion.

Also, the SWXCR was likely using stripe sets with small (default = 8 KB) chunk sizes, delivering data in parallel from several drives, whereas the HSZ is likely set up with large (128 KB?) chunk sizes, hitting just one disk at a time.

You really should define 'performance is bad' better. Notably, was the application looking for throughput (bps, MB/sec) or IO rate (tps, IO/sec)? Mostly read or mostly write? To fully work out the problem, we'll need average IO rates, average IO sizes, queue depths, read-ahead settings, striping/RAID-5/JBOD layout, and so on.

hth,
Hein.
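Hein's chunk-size point can be illustrated with a small sketch: the number of stripe members a single IO touches depends on the chunk size. The member count and IO size below are illustrative assumptions, not the customer's actual configuration:

```python
def disks_touched(io_offset_kb, io_size_kb, chunk_kb, ndisks):
    """Number of distinct stripe-set members a single IO spans.

    Chunks are laid out round-robin across ndisks members, so an IO
    covering k consecutive chunks hits min(k, ndisks) distinct disks.
    """
    first_chunk = io_offset_kb // chunk_kb
    last_chunk = (io_offset_kb + io_size_kb - 1) // chunk_kb
    return min(last_chunk - first_chunk + 1, ndisks)

# A 64 KB read against a hypothetical 4-member stripe set:
print(disks_touched(0, 64, 8, 4))    # 4: all members work in parallel (small SWXCR-style chunks)
print(disks_touched(0, 64, 128, 4))  # 1: a single disk services the whole IO (large chunks)
```

With small chunks a moderately sized IO is spread across every member and the drives transfer in parallel; with 128 KB chunks the same IO lands on one drive, so per-IO bandwidth drops even though large chunks can be the better choice for high IO-per-second workloads.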