Title: AdvFS Support/Info/Questions Notefile
Notice: note 187 is Freq Asked Questions; note 7 is support policy
Moderator: DECWET::DADDAMIO
Created: Wed Jun 02 1993
Last Modified: Fri Jun 06 1997
Last Successful Update: Fri Jun 06 1997
Number of topics: 1077
Total number of notes: 4417
Hi, I'm working through a series of issues with a customer who is switching their system from HP to Digital UNIX. We've pretty much got their ASE and other things straightened out, but now they're running into performance problems. They're running a Progress database, writing to a RAID 3/5 unit on an HSZ50. The filesystem is AdvFS (data1_domain). They have 512MB of memory and 32MB of cache on the HSZ50, and they're running DU 4.0b. I've been doing some monitoring, and I'm not sure whether the problem is with the way AdvFS is configured or not. I had them increase AdvfsMaxCachePercent to 20, but that didn't seem to help much. They've run other benchmarks on this same configuration in the past and claim to have seen about 10-12MB/sec throughput on the device, but they don't have any documentation of what the configuration was at the time.

Here's some of the monitoring output. First, the advfsstat output for the domain in question. There's only one device in the domain (rz130c), and they're using it for Progress data and temp space:

Tue Apr 22 15:06:08 EDT 1997
vol1
rd wr rg arg wg awg blk wlz rlz con dev
3 587 3 128 12 65 0 3 3 0 1
3 581 3 128 12 61 0 3 3 0 1
2 453 2 128 10 64 0 3 3 0 1
15 453 3 128 12 68 0 3 3 0 1
5 465 3 128 16 65 0 3 3 0 1
31 414 0 0 12 68 0 3 3 0 0
34 428 2 128 12 70 0 4 3 0 1
3 673 3 128 14 65 0 3 1 0 0
2 640 2 128 12 68 0 3 4 0 0
3 597 3 128 12 70 0 3 4 0 1

Tue Apr 22 15:06:38 EDT 1997
vol1
rd wr rg arg wg awg blk wlz rlz con dev
4 579 4 128 15 72 0 3 2 0 1
3 611 3 128 13 70 0 3 4 0 0
3 587 3 128 12 73 0 0 7 0 0
3 582 3 128 12 73 0 0 4 0 0
4 584 4 128 11 74 0 3 4 0 1
3 564 3 128 12 73 0 3 4 0 1
3 570 3 128 13 67 0 3 4 0 0
4 634 4 128 24 66 0 3 4 0 0
3 644 3 128 14 68 0 3 4 0 1
3 600 3 128 22 64 0 0 7 0 0

Tue Apr 22 15:07:08 EDT 1997
vol1
rd wr rg arg wg awg blk wlz rlz con dev
4 612 4 128 15 73 0 0 7 0 0
4 642 4 128 12 73 0 3 4 0 0
4 664 4 128 28 61 0 0 2 0 1
3 662 3 128 29 61 0 3 4 0 0
3 636 3 128 12 73 0 3 4 0 0
3 622 3 128 22 63 0 3 4 0 0
4 663 4 128 35 59 0 0 10 0 0
3 645 3 128 13 68 0 3 4 0 1
4 588 4 128 14 68 0 3 4 0 1
3 574 3 128 11 75 0 3 4 0 0

Tue Apr 22 15:07:38 EDT 1997
vol1
rd wr rg arg wg awg blk wlz rlz con dev
4 581 4 128 14 75 0 3 2 0 0
3 615 3 128 12 76 0 3 4 0 0
3 613 3 128 11 75 0 3 4 0 0
4 606 4 128 15 73 0 3 0 0 0
3 613 3 128 14 70 0 3 4 0 1
2 222 2 128 4 64 0 3 16 0 3
2 462 2 128 12 72 0 3 4 0 1
3 593 3 128 12 72 0 3 4 0 1
4 638 4 128 14 68 0 3 4 0 1
4 605 4 128 15 73 0 3 4 0 0

Tue Apr 22 15:08:08 EDT 1997
vol1
rd wr rg arg wg awg blk wlz rlz con dev
3 598 3 128 12 74 0 3 4 0 1
4 635 4 128 15 73 0 3 1 0 0
3 625 3 128 13 68 0 3 4 0 0
3 603 3 128 13 70 0 3 4 0 0
4 621 4 128 17 67 0 3 4 0 1
3 627 3 128 14 65 0 3 4 0 1
4 614 4 128 12 73 0 3 14 0 8
3 592 3 128 16 71 0 3 4 0 0
4 641 4 128 13 71 0 3 4 0 1
3 565 3 128 12 73 0 0 7 0 0

Tue Apr 22 15:08:38 EDT 1997
vol1
rd wr rg arg wg awg blk wlz rlz con dev
3 540 3 128 11 75 0 3 4 0 0
3 579 3 128 11 77 0 3 4 0 0
3 580 3 128 12 73 0 3 4 0 0
3 583 3 128 12 72 0 3 4 0 1
4 583 4 128 15 71 0 3 0 0 19
3 588 3 128 12 74 0 0 7 0 0
4 596 4 128 12 70 0 3 4 0 1
4 643 4 128 15 72 0 3 2 0 1
3 620 3 128 14 67 0 3 4 0 1
3 634 3 128 12 73 0 3 4 0 1

Tue Apr 22 15:09:08 EDT 1997
vol1
rd wr rg arg wg awg blk wlz rlz con dev
4 570 4 128 15 72 0 3 0 0 0
3 593 3 128 14 64 0 3 4 0 1
3 463 3 128 12 66 0 3 4 0 0
3 549 3 128 12 74 0 3 3 0 0
11 489 1 128 10 68 0 3 3 0 1
35 486 0 0 14 66 0 3 3 0 1
27 524 0 0 12 62 0 3 3 0 1
22 489 4 128 15 65 0 3 3 0 1
4 534 4 128 16 63 0 3 2 0 0
3 502 3 128 11 64 0 3 3 0 0

Tue Apr 22 15:09:39 EDT 1997
vol1
rd wr rg arg wg awg blk wlz rlz con dev
26 478 0 0 11 65 0 3 10 0 1
28 513 0 0 16 61 0 3 3 0 0
36 481 0 0 10 68 0 3 14 0 0
29 455 2 128 16 64 0 3 3 0 0
3 516 3 128 10 68 0 3 3 0 1
4 488 4 128 13 67 0 3 3 0 1
12 501 1 128 12 65 0 0 6 0 0
28 417 0 0 12 62 0 3 4 0 0
24 596 4 96 19 63 0 3 3 0 0
34 433 0 0 11 72 0 3 2 0 1

Tue Apr 22 15:10:09 EDT 1997
vol1
rd wr rg arg wg awg blk wlz rlz con dev
25 425 2 128 9 67 0 3 2 0 1
4 496 4 128 12 65 0 3 2 0 1
3 443 3 128 9 69 0 4 5 0 1
5 675 5 128 18 89 0 3 4 0 0
5 713 5 128 32 75 0 3 3 0 0
23 779 2 128 53 64 0 3 3 0 1
50 782 0 0 87 59 0 3 0 0 29
36 751 0 0 81 58 0 3 7 0 1
33 742 0 0 70 58 0 3 7 0 0
14 800 6 128 100 57 0 3 3 0 1

This is just a sample, but the rest is pretty much the same. Next is the iostat output for the device in the domain (rz130c). This was run at the same time:

Tue Apr 22 15:05:07 EDT 1997
tty fd0 rz8 rz130 rz140 cpu
tin tout bps tps bps tps bps tps bps tps us ni sy id
1 86 0 0 184 16 124 4 198 9 1 5 2 92
0 421 0 0 84 11 1628 188 101 5 1 26 8 65
0 219 0 0 26 4 1934 223 32 2 3 28 6 63
0 211 0 0 52 7 1968 224 56 2 0 27 6 66
0 211 0 0 95 10 1915 221 53 2 3 28 6 63
0 196 0 0 87 12 1899 214 35 2 2 25 7 66
0 211 0 0 45 6 1976 228 43 2 3 27 6 64
0 196 0 0 34 5 1913 217 13 2 0 26 5 68
0 400 0 0 47 7 2022 206 93 4 1 24 8 67
0 318 0 0 34 6 1939 206 53 3 3 27 6 63

Tue Apr 22 15:05:34 EDT 1997
tty fd0 rz8 rz130 rz140 cpu
tin tout bps tps bps tps bps tps bps tps us ni sy id
1 87 0 0 184 16 127 4 198 9 1 5 2 92
0 355 0 0 317 31 1817 189 51 2 1 24 6 68
0 303 0 0 194 19 1835 193 11 1 0 25 6 68
0 383 0 0 45 7 2043 204 117 9 3 24 6 67
0 1580 0 0 51 7 1771 181 112 7 0 26 7 67
0 145 0 0 80 13 2467 256 32 2 1 19 8 73
0 145 0 0 102 10 2560 266 24 2 0 20 6 74
0 948 0 0 52 7 2297 234 88 5 18 22 6 54
0 1910 0 0 36 5 1745 178 107 6 40 28 7 25
0 187 0 0 36 5 1809 194 75 2 29 26 6 39

Tue Apr 22 15:06:01 EDT 1997
tty fd0 rz8 rz130 rz140 cpu
tin tout bps tps bps tps bps tps bps tps us ni sy id
1 88 0 0 184 16 131 5 197 9 1 5 2 92
0 271 0 0 52 7 1926 207 56 2 3 27 6 64
0 203 0 0 105 10 1984 210 43 2 0 28 6 66
0 890 0 0 73 10 1518 175 91 5 0 28 9 63
4 268 0 0 26 4 1779 203 24 2 3 29 6 62
2 476 0 0 103 15 1427 160 95 4 0 28 5 66
0 183 0 0 39 6 1390 156 115 3 0 33 5 62
0 167 0 0 39 5 1489 158 117 3 0 32 5 62
0 296 0 0 52 7 1319 152 85 4 3 32 5 61
0 302 0 0 63 9 1379 159 125 5 0 30 5 65

Tue Apr 22 15:06:28 EDT 1997
tty fd0 rz8 rz130 rz140 cpu
tin tout bps tps bps tps bps tps bps tps us ni sy id
1 88 0 0 183 16 134 5 197 9 1 5 2 92
0 697 0 0 71 9 1713 192 85 5 1 26 8 66
0 198 0 0 39 5 1902 220 21 2 3 25 6 66
0 423 0 0 133 16 1713 191 101 5 1 25 6 69
0 185 0 0 58 8 1854 210 75 2 0 25 6 68
0 206 0 0 62 9 1771 198 75 2 0 26 5 69
0 183 0 0 56 8 1769 196 56 2 3 27 6 64
0 183 0 0 69 10 1734 193 75 2 1 27 8 65
0 192 0 0 48 7 1745 197 75 2 0 28 6 67
3 170 0 0 46 6 1745 197 75 2 3 28 5 64

Tue Apr 22 15:06:55 EDT 1997
tty fd0 rz8 rz130 rz140 cpu
tin tout bps tps bps tps bps tps bps tps us ni sy id
1 89 0 0 183 16 137 5 197 9 1 5 2 92
3 326 0 0 72 10 1731 195 75 2 0 26 5 69
0 175 0 0 41 6 1907 209 53 2 0 26 5 69
0 175 0 0 59 8 1853 203 80 3 3 27 5 65
0 175 0 0 102 10 1923 209 75 2 0 26 6 68
0 175 0 0 90 13 1761 199 53 2 0 26 8 65
0 177 0 0 33 5 1915 212 77 3 0 28 6 66
0 168 0 0 51 8 1989 220 75 2 3 27 6 65
0 168 0 0 28 4 2016 223 75 2 0 27 5 67
0 168 0 0 52 7 2022 218 77 3 0 27 6 67

Tue Apr 22 15:07:22 EDT 1997
tty fd0 rz8 rz130 rz140 cpu
tin tout bps tps bps tps bps tps bps tps us ni sy id
1 89 0 0 183 16 141 6 197 9 1 5 2 92
0 152 0 0 70 10 1873 205 53 2 3 25 5 67
0 168 0 0 31 4 2019 222 75 2 0 27 6 67
0 168 0 0 49 8 2005 216 77 3 2 27 8 63
0 176 0 0 36 6 1897 215 77 3 2 27 6 65
0 144 0 0 147 19 1657 186 53 2 1 25 7 67
0 160 0 0 61 9 1739 197 75 2 3 26 5 66
0 160 0 0 66 9 1833 202 77 3 0 27 5 68
0 168 0 0 48 7 1811 205 75 2 0 27 5 68
0 160 0 0 45 6 1803 202 53 2 3 26 8 63

Tue Apr 22 15:07:49 EDT 1997
tty fd0 rz8 rz130 rz140 cpu
tin tout bps tps bps tps bps tps bps tps us ni sy id
1 89 0 0 183 16 144 6 197 9 1 5 2 92
0 160 0 0 143 12 1785 201 75 2 0 26 6 68
0 112 0 0 46 6 1204 134 51 2 0 17 4 78
0 72 0 0 26 4 860 95 27 1 0 12 3 85
0 168 0 0 61 9 1851 204 77 3 3 27 6 64
0 168 0 0 43 6 1825 206 56 2 0 27 5 68
0 168 0 0 31 4 1809 205 75 2 0 27 5 67
0 160 0 0 68 9 1809 199 77 3 3 27 8 62
0 176 0 0 95 10 1880 214 53 2 0 28 6 66
0 168 0 0 52 8 1915 212 77 3 0 28 6 66

Tue Apr 22 15:08:17 EDT 1997
tty fd0 rz8 rz130 rz140 cpu
tin tout bps tps bps tps bps tps bps tps us ni sy id
1 89 0 0 182 16 147 6 196 9 1 5 2 92
0 168 0 0 58 8 1835 208 75 2 0 27 5 67
0 168 0 0 28 4 1795 203 75 2 3 28 6 63
0 160 0 0 62 9 1875 206 77 3 2 29 6 64
0 176 0 0 28 4 1907 215 53 2 3 29 6 63
0 152 0 0 78 12 1691 189 75 2 3 25 8 64
0 168 0 0 36 5 1947 215 80 3 0 28 6 66
0 175 0 0 118 14 1726 195 74 2 35 28 7 30
0 152 0 0 60 8 1649 185 53 2 12 27 5 56
0 176 0 0 28 4 1761 199 75 2 0 27 5 67

Tue Apr 22 15:08:44 EDT 1997
tty fd0 rz8 rz130 rz140 cpu
tin tout bps tps bps tps bps tps bps tps us ni sy id
1 89 0 0 182 16 150 7 196 9 1 5 2 92
0 160 0 0 87 11 1683 189 75 2 0 26 6 68
5 173 0 0 47 7 1753 195 75 2 1 27 8 64
0 324 0 0 66 9 1683 189 53 2 3 26 5 66
0 175 0 0 87 12 1795 197 77 3 0 27 6 67
0 175 0 0 45 6 1790 203 75 2 0 26 6 68
0 175 0 0 37 5 1854 210 53 2 3 26 6 66
0 183 0 0 42 6 1857 204 80 3 0 27 6 67
0 183 0 0 34 5 1877 213 75 2 0 28 5 66
0 185 0 0 72 10 1745 197 53 2 3 26 8 63

Tue Apr 22 15:09:11 EDT 1997
tty fd0 rz8 rz130 rz140 cpu
tin tout bps tps bps tps bps tps bps tps us ni sy id
1 89 0 0 182 16 153 7 196 9 1 5 2 92
0 214 0 0 123 13 1860 205 56 2 0 27 6 66
0 2148 0 0 48 6 1249 137 146 10 0 31 6 62
0 342 0 0 64 10 1769 199 99 5 0 27 6 67
0 223 0 0 46 6 1553 179 93 2 3 30 6 62
0 199 0 0 86 13 1406 160 75 2 0 26 5 69
0 231 0 0 25 5 1574 185 115 3 0 30 6 65
0 207 0 0 44 6 1542 171 75 2 3 28 8 62
0 231 0 0 36 5 1641 180 99 3 0 31 5 64
0 231 0 0 79 11 1516 171 93 2 1 30 7 63

It looks like they're only seeing 1.5-2.5MB/sec on the device, but I can't figure out whether AdvFS isn't getting the writes out any faster or the HSZ50 isn't accepting them any faster.

I also monitored the memory usage during the test. The free page count was consistently low, so I'm thinking it might be a memory bottleneck:

Tue Apr 22 15:05:20 EDT 1997
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
3 137 32 44K 248 18K 584K 84K 348K 2650 92K 588 80 1K 674 6 2 92
4 137 31 44K 127 18K 297 21 247 0 24 0 257 3K 1K 31 7 63
3 137 32 44K 127 18K 170 21 123 0 23 0 255 3K 1K 28 6 67
4 137 31 44K 110 18K 108 0 108 0 0 0 236 3K 1K 26 5 69
4 137 31 44K 104 18K 319 0 319 0 0 0 245 5K 1K 25 9 67
4 137 31 44K 104 18K 179 24 127 0 28 0 297 4K 1K 29 6 65
4 137 31 44K 376 18K 376 40 151 0 91 0 291 4K 1K 27 7 66
4 137 31 44K 292 18K 172 22 123 0 23 0 216 3K 1K 26 6 69
4 137 31 44K 174 18K 112 0 112 0 0 0 242 3K 1K 27 6 67
3 137 32 44K 104 18K 246 37 167 0 42 0 244 3K 1K 24 7 70

Tue Apr 22 15:05:50 EDT 1997
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
4 137 31 44K 217 18K 586K 84K 349K 2650 92K 588 80 1K 676 6 2 92
4 137 31 44K 149 18K 296 23 245 0 24 0 271 3K 1K 26 6 68
4 137 31 45K 107 18K 192 22 143 0 23 0 404 2K 1K 46 7 47
4 136 32 44K 120 18K 147 0 147 0 0 0 403 4K 1K 74 7 19
3 137 32 45K 104 18K 179 24 127 0 28 0 300 3K 1K 37 6 57
3 137 32 44K 128 18K 107 0 107 0 0 0 232 3K 1K 31 5 64
3 137 32 44K 128 18K 184 21 137 0 23 0 219 3K 1K 30 6 64
4 136 31 44K 128 18K 336 0 318 0 7 0 227 5K 1K 28 8 64
3 137 32 44K 128 18K 188 25 130 0 35 0 240 3K 1K 30 6 64
4 137 31 45K 120 18K 178 22 129 0 23 0 177 2K 1K 32 5 63

Tue Apr 22 15:06:21 EDT 1997
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
4 137 31 44K 248 18K 588K 84K 351K 2650 92K 588 80 1K 677 6 2 92
4 137 31 44K 129 18K 294 21 245 0 24 0 175 2K 1K 36 5 59
4 136 32 44K 114 18K 178 21 130 0 23 0 184 2K 1K 31 5 65
3 137 32 44K 128 18K 419 24 350 0 38 0 201 5K 1K 29 8 63
4 137 31 44K 128 18K 107 0 107 0 0 0 250 3K 1K 26 6 68
4 137 31 44K 120 18K 113 0 113 0 0 0 232 3K 1K 29 5 66
3 137 32 44K 112 18K 439 61 167 0 114 0 251 3K 1K 26 6 68
3 137 32 44K 128 18K 155 15 124 0 19 0 223 2K 1K 25 5 70
3 137 32 44K 121 18K 109 0 109 0 0 0 224 2K 1K 29 5 66
4 137 31 44K 111 18K 172 23 123 0 23 0 223 2K 1K 27 5 68

Tue Apr 22 15:06:51 EDT 1997
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
4 137 31 44K 237 18K 590K 84K 353K 2650 92K 588 81 1K 678 6 2 92
5 136 30 44K 127 18K 313 21 245 0 31 0 223 2K 1K 30 6 64
4 137 31 45K 88 18K 242 45 142 0 51 0 244 2K 1K 27 5 68
3 137 32 45K 96 18K 143 10 116 0 16 0 233 2K 1K 25 5 70
3 137 32 44K 112 18K 107 0 107 0 0 0 232 2K 1K 27 5 68
3 137 32 45K 104 18K 107 0 107 0 0 0 238 2K 1K 31 6 64
3 137 32 44K 137 18K 218 37 139 0 42 0 239 4K 1K 25 9 66
4 135 32 45K 112 18K 320 0 320 0 0 0 235 2K 1K 28 5 67
3 136 32 45K 103 18K 106 0 106 0 0 0 244 2K 1K 30 6 64
3 136 32 45K 102 18K 169 21 122 0 23 0 244 2K 1K 27 5 68

Tue Apr 22 15:07:21 EDT 1997
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
4 136 31 44K 232 18K 592K 84K 354K 2650 93K 588 81 1K 680 6 2 92
3 136 32 45K 128 18K 401 55 275 0 64 0 239 3K 1K 28 6 65
3 136 32 45K 128 18K 107 0 107 0 0 0 240 2K 1K 27 5 68
3 136 32 45K 128 18K 314 0 314 0 0 0 246 4K 1K 29 8 63
3 136 32 45K 128 18K 106 0 106 0 0 0 238 2K 1K 28 6 67
4 135 32 45K 101 18K 179 15 122 0 27 0 412 3K 1K 28 7 65
4 136 31 44K 128 18K 415 61 168 0 106 0 249 2K 1K 28 6 66
3 136 32 44K 133 18K 107 0 107 0 0 0 232 2K 1K 27 5 68
3 136 32 45K 120 18K 169 21 122 0 23 0 229 2K 1K 27 6 68
3 136 32 45K 120 18K 107 0 107 0 0 0 228 2K 1K 27 5 68

Tue Apr 22 15:07:52 EDT 1997
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
3 136 32 44K 253 18K 595K 85K 356K 2650 93K 588 81 1K 681 6 2 92
3 136 32 45K 128 18K 375 42 260 0 64 0 186 2K 1K 21 5 74
4 136 31 45K 112 18K 107 0 107 0 0 0 76 969 646 8 2 90
4 136 31 45K 120 18K 106 0 106 0 0 0 237 3K 1K 31 6 64
3 136 32 45K 112 18K 153 15 122 0 19 0 226 2K 1K 26 5 69
3 136 32 45K 111 18K 108 0 108 0 0 0 234 2K 1K 28 6 66
3 136 32 45K 120 18K 378 21 330 0 23 0 220 4K 1K 29 8 63
3 136 32 45K 136 18K 107 0 107 0 0 0 248 2K 1K 28 6 66
3 136 32 45K 120 18K 169 21 122 0 23 0 238 2K 1K 28 5 66
4 136 31 45K 112 18K 189 25 127 0 29 0 236 3K 1K 28 6 67

Tue Apr 22 15:08:22 EDT 1997
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
4 136 31 44K 240 18K 596K 85K 357K 2650 93K 588 82 1K 682 6 2 92
3 136 32 45K 120 18K 357 43 260 0 47 0 398 3K 1K 30 6 64
4 135 32 45K 114 18K 107 0 107 0 0 0 284 3K 1K 31 6 63
4 136 31 45K 128 18K 365 15 334 0 19 0 265 4K 1K 28 8 64
3 136 32 45K 128 18K 106 0 106 0 0 0 267 2K 1K 28 6 66
5 135 31 45K 120 18K 375 40 150 0 91 0 409 3K 1K 62 7 31
3 136 32 45K 128 18K 170 22 122 0 23 0 294 2K 1K 39 6 55
4 136 31 45K 128 18K 107 0 107 0 0 0 218 2K 1K 28 6 67
3 136 32 45K 118 18K 243 45 142 0 51 0 224 2K 1K 26 6 68
3 136 32 45K 128 18K 327 0 315 0 6 0 228 4K 1K 27 8 65

Tue Apr 22 15:08:53 EDT 1997
Virtual Memory Statistics: (pagesize = 8192)
procs memory pages intr cpu
r w u act free wire fault cow zero react pin pout in sy cs us sy id
4 137 31 44K 254 18K 599K 85K 359K 2650 93K 588 82 1K 683 6 2 92
4 137 31 45K 120 18K 407 60 277 0 66 0 240 2K 1K 27 6 67
4 137 31 45K 120 18K 110 0 110 0 0 0 229 2K 1K 26 6 68
4 137 31 45K 120 18K 107 0 107 0 0 0 231 2K 1K 29 6 66
3 137 32 45K 120 18K 107 0 107 0 0 0 227 2K 1K 27 6 67
4 137 31 45K 120 18K 107 0 107 0 0 0 234 2K 1K 28 5 67
3 137 32 45K 112 18K 380 21 333 0 23 0 227 4K 1K 29 8 63
4 137 31 45K 128 18K 182 26 128 0 28 0 242 2K 1K 28 6 66
4 137 31 45K 120 18K 213 21 166 0 23 0 177 4K 1K 31 7 62
4 137 31 45K 120 18K 118 0 110 0 1 0 221 3K 1K 28 6 66

Anyone have any other ideas? thanx.
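A quick unit check on the numbers above helps frame the question. This is a sketch: it assumes iostat's bps column is reported in KB/sec, uses the 8192-byte page size from the vmstat header, and plugs in representative values copied from the samples.

```python
# Representative values read from the iostat and vmstat samples above.
PAGE_SIZE = 8192          # bytes, from the vmstat header

bps_kb = 1900             # typical rz130 'bps' sample (assumed KB/sec)
tps = 210                 # typical rz130 'tps' sample

mb_per_sec = bps_kb / 1024.0
avg_io_kb = bps_kb / tps  # average transfer size implied by bps:tps

free_pages = 128          # typical vmstat 'free' column value
free_mb = free_pages * PAGE_SIZE / (1024.0 * 1024.0)

print(f"throughput ~{mb_per_sec:.1f} MB/sec")  # ~1.9 MB/sec
print(f"avg transfer ~{avg_io_kb:.1f} KB")     # ~9 KB, about one 8K page
print(f"free memory ~{free_mb:.1f} MB")        # ~1 MB on the free list
```

At roughly 9 KB per transfer the workload looks like page-sized (likely random) writes, and with around 1 MB on the free list the machine is indeed running close to its memory.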
John McDonald Atlanta CSC [Posted by WWW Notes gateway]
T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
1047.1 | | SMURF::SCOTT | | Tue Apr 22 1997 17:59 | 19 |
John,

The advfsstat output seems to indicate that the I/O system isn't being stressed heavily. The queue statistics can be misleading, however, since advfsstat shows a snapshot value rather than an interval value.

The iostat output shows a low bps:tps ratio. This could be an artifact of the application's I/O characteristics. Is the I/O randomly located rather than sequential? Does the application make any fsync calls?

With a light, steady I/O load, AdvFS on V4.0 is too aggressive about issuing its I/O requests. You might improve performance by using chvol (-t) to increase the I/O threshold (try 32768). This allows more opportunity to get a cache hit on a buffer still sitting on the I/O queue, and it also improves I/O consolidation.

Hope this helps.

larry
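The chvol change suggested here would look something like the following. This is a sketch: the -t flag and the 32768 value come from the note above, the device and domain names come from the original question, and the exact argument order should be verified against chvol(8) on the system before running it.

```shell
# Raise the I/O queue threshold on the domain's single volume,
# as suggested above (verify syntax against chvol(8) locally).
chvol -t 32768 /dev/rz130c data1_domain

# chvol with no attribute options should display the volume's current
# settings, which can be used to confirm the change took effect
# (behavior assumed; check the reference page).
chvol /dev/rz130c data1_domain
```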
1047.2 | | KITCHE::schott | Eric R. Schott USG Product Management | Tue Apr 22 1997 18:06 | 10 |
Do you have any sys_check output? There are a lot of parameters that could be affecting this. Also, be sure they have the dskcoll program so we can get some better I/O stats.

regards,

Eric
1047.3 | Thanx so far | NNTPD::"[email protected]" | John McDonald | Wed Apr 23 1997 10:29 | 15 |
Thanx for the replies, Larry and Eric.

I'm still gathering data, but after I posted the original note I went ahead and ran diskx -p on another device on the HSZ50 to see what kind of throughput we were getting at the hardware level. I ran it both with a fixed transfer size matching the application's and with variable sizes, and both runs showed throughput of about 2MB/sec. I'm also checking their HSZ50 to make sure they haven't done something like turn the cache off.

Once again, thanx for the pointers.

John McDonald
Atlanta CSC
[Posted by WWW Notes gateway]
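For scale, the raw-device result can be set against the customer's earlier benchmark claim (both numbers are from this thread; the earlier benchmark's configuration is unknown, so this is only a rough comparison):

```python
# diskx -p raw-device result vs. the earlier (undocumented) benchmark claim.
raw_mb_s = 2.0               # measured through the raw device path
claimed_mb_s = (10.0, 12.0)  # customer's earlier claim, config unknown

factors = [c / raw_mb_s for c in claimed_mb_s]
print(f"earlier claim is {factors[0]:.0f}-{factors[1]:.0f}x the raw-device rate")
```

Since the raw path itself tops out around 2 MB/sec, the bottleneck appears to be below AdvFS, which is consistent with going back to check the HSZ50 cache settings.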