
Conference turris::digital_unix

Title:DIGITAL UNIX (FORMERLY KNOWN AS DEC OSF/1)
Notice:Welcome to the Digital UNIX Conference
Moderator:SMURF::DENHAM
Created:Thu Mar 16 1995
Last Modified:Fri Jun 06 1997
Last Successful Update:Fri Jun 06 1997
Number of topics:10068
Total number of notes:35879

10056.0. "LSM striping/HSZ50/DU3.2G problem?" by NESBIT::BGIRVAN () Thu Jun 05 1997 16:34

    [Cross-posted in LSM conference]
    
    
    Has anyone come across the following situation?
    
        System:      8400 with 6 x 440 MHz CPUs, 6 GB memory, KFTHA,
                     2 DWLPBs, 6 KZPSA-BB, 4 dual HSZ50s with 128 MB
                     cache, and 24 RZ29B-VAs configured as 4 RAID 0+1
                     disks, plus a TZ87 loader.

        OS:          DU 3.2G with the full DUV32GAS00001-19970501 patch
                     kit applied, plus the latest simport.o patch.

        Application: Sybase 10 and 11
    
        We have created striped volumes across the PCI buses. Most are
        used as raw devices for Sybase, but a few are used as a dump
        area to bring in the Sybase dumps from tape before importing
        them into the database. The dump directories have had both
        AdvFS and UFS applied, with the same result: performing a
        checksum on the dump file gives a different answer from the one
        originally obtained, and running the checksum again a few
        seconds later gives yet another different answer every time.
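
        To illustrate the kind of check being run, here is a minimal
        sketch (the file name and number of passes are made up for the
        example, not taken from the benchmark):

        # Checksum the restored dump file several times; on a healthy
        # volume every pass should print identical output.
        FILE=/dumparea/sybase.dmp        # hypothetical path
        for pass in 1 2 3 4 5
        do
            sync                         # flush any pending writes
            sum $FILE                    # cksum(1) works equally well
            sleep 5
        done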
    
        We then tested this by restoring the dump file to a UFS created
        directly on an HSZ50 disk and to an un-striped LSM volume; in
        both cases the checksum was as it should be.

        We are also getting a lot of Sybase 605 errors when running the
        benchmark batch tests, suggesting that the database has been
        corrupted.
    
        Is the LSM striping the cause of the above problems?
    
        I have used LSM striping in many previous benchmarks, but I
        have never seen this behaviour.
    
        The LSM layout is as follows (I've left out all the subdisks):
    
    
        volboot:
    
        volboot 3.1 0.5
        hostid bench4
        disk rz3 sliced
        disk rz26 sliced
        disk rz27 sliced
        disk rz2 sliced
        end
        ###############################################################
    
    
    
        volprint -S rootdg
    
        VOLUMES   PLEXES    SUBDISKS  PLEXFREE  SDFREE    DISKS
        68        68        133       0         0         6
    
        voldg list Details rootdg
    
        Group:     rootdg
        dgid:      864375462.1025.bench4
        import-id: 0.1
        flags:
        config:    seqno=0.2122 permlen=173 free=31 templen=88 loglen=26
        config disk rz3 copy 1 len=173 state=clean online
        config disk rz3 copy 2 len=173 state=clean online
        config disk rz26 copy 1 len=173 state=clean online
        config disk rz26 copy 2 len=173 state=clean online
        config disk rz27 copy 1 len=347 state=clean online
        config disk rz2 copy 1 len=347 state=clean online
        config disk rz25 copy 1 len=347 state=clean online
        config disk rz10 copy 1 len=347 state=clean online
        log disk rz3 copy 1 len=26
        log disk rz3 copy 2 len=26
        log disk rz26 copy 1 len=26
        log disk rz26 copy 2 len=26
        log disk rz27 copy 1 len=52
        log disk rz2 copy 1 len=52
        log disk rz25 copy 1 len=52
        log disk rz10 copy 1 len=52
    
    
        voldisk list Details
    
        DEVICE       TYPE      DISK         GROUP        STATUS
        rz10         sliced    rz10         rootdg       online
        rz2          sliced    rz2          rootdg       online
        rz25         sliced    rz25         rootdg       online
        rz26         sliced    rz26         rootdg       online
        rz27         sliced    rz27         rootdg       online
        rz3          sliced    rz3          rootdg       online
    
    
        voldisk list rz10
        (rz10 disklabel)
    
        Device:    rz10
        devicetag: rz10
        type:      sliced
        hostid:    bench4
        disk:      name=rz10 id=864375581.1073.bench4
        group:     name=rootdg id=864375462.1025.bench4
        flags:     online ready private imported
        pubpaths:  block=/dev/rz10h char=/dev/rrz10h
        privpaths: block=/dev/rz10g char=/dev/rrz10g
        version:   1.1
        iosize:    512
        public:    slice=7 offset=0 len=1855472
        private:   slice=6 offset=0 len=512
        update:    time=865500611 seqno=0.27
        headers:   0 248
        configs:   count=1 len=347
        logs:      count=1 len=52
        Defined regions:
         config   priv     17-   247[   231]: copy=01 offset=000000
         config   priv    249-   364[   116]: copy=01 offset=000231
         log      priv    365-   416[    52]: copy=01 offset=000000
    
    
        voldisk list rz2
        (rz2 disklabel)
    
        Device:    rz2
        devicetag: rz2
        type:      sliced
        hostid:    bench4
        disk:      name=rz2 id=864375514.1057.bench4
        group:     name=rootdg id=864375462.1025.bench4
        flags:     online ready private imported
        pubpaths:  block=/dev/rz2h char=/dev/rrz2h
        privpaths: block=/dev/rz2g char=/dev/rrz2g
        version:   1.1
        iosize:    512
        public:    slice=7 offset=0 len=25133556
        private:   slice=6 offset=0 len=512
        update:    time=865500611 seqno=0.39
        headers:   0 248
        configs:   count=1 len=347
        logs:      count=1 len=52
        Defined regions:
         config   priv     17-   247[   231]: copy=01 offset=000000
         config   priv    249-   364[   116]: copy=01 offset=000231
         log      priv    365-   416[    52]: copy=01 offset=000000
    
    
        voldisk list rz25
        (rz25 disklabel)
    
        Device:    rz25
        devicetag: rz25
        type:      sliced
        hostid:    bench4
        disk:      name=rz25 id=864375563.1066.bench4
        group:     name=rootdg id=864375462.1025.bench4
        flags:     online ready private imported
        pubpaths:  block=/dev/rz25h char=/dev/rrz25h
        privpaths: block=/dev/rz25g char=/dev/rrz25g
        version:   1.1
        iosize:    512
        public:    slice=7 offset=0 len=12327882
        private:   slice=6 offset=0 len=512
        update:    time=865500611 seqno=0.27
        headers:   0 248
        configs:   count=1 len=347
        logs:      count=1 len=52
        Defined regions:
         config   priv     17-   247[   231]: copy=01 offset=000000
         config   priv    249-   364[   116]: copy=01 offset=000231
         log      priv    365-   416[    52]: copy=01 offset=000000
    
    
        voldisk list rz26
        (rz26 disklabel)
    
        Device:    rz26
        devicetag: rz26
        type:      sliced
        hostid:    bench4
        disk:      name=rz26 id=864375463.1035.bench4
        group:     name=rootdg id=864375462.1025.bench4
        flags:     online ready private imported
        pubpaths:  block=/dev/rz26h char=/dev/rrz26h
        privpaths: block=/dev/rz26g char=/dev/rrz26g
        version:   1.1
        iosize:    512
        public:    slice=7 offset=0 len=25133556
        private:   slice=6 offset=0 len=512
        update:    time=865500610 seqno=0.42
        headers:   0 248
        configs:   count=2 len=173
        logs:      count=2 len=26
        Defined regions:
         config   priv     17-   189[   173]: copy=01 offset=000000
         log      priv    190-   215[    26]: copy=01 offset=000000
         log      priv    296-   321[    26]: copy=02 offset=000000
         config   priv    322-   494[   173]: copy=02 offset=000000
    
    
        voldisk list rz27
        (rz27 disklabel)
    
        Device:    rz27
        devicetag: rz27
        type:      sliced
        hostid:    bench4
        disk:      name=rz27 id=864375464.1042.bench4
        group:     name=rootdg id=864375462.1025.bench4
        flags:     online ready private imported
        pubpaths:  block=/dev/rz27h char=/dev/rrz27h
        privpaths: block=/dev/rz27g char=/dev/rrz27g
        version:   1.1
        iosize:    512
        public:    slice=7 offset=0 len=25133556
        private:   slice=6 offset=0 len=512
        update:    time=865500610 seqno=0.42
        headers:   0 248
        configs:   count=1 len=347
        logs:      count=1 len=52
        Defined regions:
         config   priv     17-   247[   231]: copy=01 offset=000000
         config   priv    249-   364[   116]: copy=01 offset=000231
         log      priv    365-   416[    52]: copy=01 offset=000000
    
    
        voldisk list rz3
        (rz3 disklabel)
    
        Device:    rz3
        devicetag: rz3
        type:      sliced
        hostid:    bench4
        disk:      name=rz3 id=864375462.1028.bench4
        group:     name=rootdg id=864375462.1025.bench4
        flags:     online ready private imported
        pubpaths:  block=/dev/rz3h char=/dev/rrz3h
        privpaths: block=/dev/rz3g char=/dev/rrz3g
        version:   1.1
        iosize:    512
        public:    slice=7 offset=0 len=25133556
        private:   slice=6 offset=0 len=512
        update:    time=865500610 seqno=0.42
        headers:   0 248
        configs:   count=2 len=173
        logs:      count=2 len=26
        Defined regions:
         config   priv     17-   189[   173]: copy=01 offset=000000
         log      priv    190-   215[    26]: copy=01 offset=000000
         log      priv    296-   321[    26]: copy=02 offset=000000
         config   priv    322-   494[   173]: copy=02 offset=000000
    
    
    
    
    
    
        Billy
    
10056.1. by KITCHE::schott (Eric R. Schott USG Product Management) Thu Jun 05 1997 21:54
Do you have new-wire-method set to off?

Have you run sys_check on the system?

http://www-unix.zk3.dec.com/tuning/tools/sys_check.html

or see note 7178.
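
As a hedged sketch of how those two things might be checked from the
shell (the attribute is assumed here to live in the vm subsystem and
to be spelled new_wire_method; verify the name against your kernel):

    # Query the current setting of the wiring-method attribute:
    sysconfig -q vm new_wire_method

    # Turn it off for the running kernel, if the attribute allows
    # runtime reconfiguration (otherwise set it in /etc/sysconfigtab
    # and reboot):
    sysconfig -r vm new_wire_method=0

    # Collect a configuration/tuning report with sys_check
    # (output format depends on the sys_check version):
    sys_check > /var/tmp/sys_check.out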


10056.2. "It's easy when you know how" by NESBIT::BGIRVAN () Fri Jun 06 1997 04:51
    The problem lay in the bank's scripts, which created all the
    volumes as LSM 'gen' devices. What they then did was simply layer a
    file system on the LSM block device and mount it; they claim this
    is what they do back at base. They did in fact show that this
    seemed to work when a simple volume was used, but not with a
    striped volume. Changing the usage type to 'fsgen' was all that was
    required.
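
    For reference, a sketch of what the corrected creation step might
    look like (the volume name, size, stripe count and disk names are
    illustrative only, and the exact volassist options should be
    checked against the local LSM release):

        # Create the dump-area volume with the fsgen usage type so a
        # file system can safely be layered on the striped volume:
        volassist -g rootdg -U fsgen make dumpvol 4g \
            layout=stripe nstripe=4 rz10 rz25 rz2 rz27

        # Confirm the volume's usage type afterwards:
        volprint -g rootdg dumpvol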
    
    
    Billy
    
    
    BTW Eric, new-wire-method = 0, and I have run sys_check on the
    system. It's a very useful utility.