T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
169.1 | some thoughts... | DECWET::EVANS | Be a Point Of Light! | Tue Nov 26 1996 17:58 | 9 |
169.2 | still having problems | GIDDAY::HAGAN | | Sun Dec 01 1996 21:56 | 71 |
169.3 | Can anyone help? | GIDDAY::HAGAN | | Sun Dec 08 1996 14:19 | 7 |
169.4 | | DECWET::RANDALL | | Mon Dec 09 1996 14:45 | 19 |
169.5 | | GIDDAY::HAGAN | | Mon Dec 09 1996 18:27 | 198 |
169.6 | | DECWET::RANDALL | | Tue Dec 10 1996 14:38 | 25 |
169.7 | still no joy | GIDDAY::HAGAN | | Tue Dec 10 1996 19:00 | 218 |
169.8 | | RHETT::LOH | | Wed Dec 11 1996 12:37 | 8 |
169.9 | Mail sent to steve. | DECWET::RANDALL | | Wed Dec 11 1996 14:00 | 31 |
169.10 | | GIDDAY::HAGAN | | Wed Dec 11 1996 20:14 | 29 |
169.11 | more info from customer | RHETT::LOH | | Thu Dec 12 1996 09:55 | 121 |
169.12 | | DECWET::FARLEE | Insufficient Virtual um...er.... | Thu Dec 12 1996 10:00 | 8 |
169.13 | | RHETT::LOH | | Thu Dec 12 1996 10:00 | 11 |
169.14 | | RHETT::LOH | | Thu Dec 12 1996 10:11 | 9 |
169.15 | | RHETT::LOH | | Thu Dec 12 1996 11:08 | 41 |
169.16 | | DECWET::RANDALL | | Thu Dec 12 1996 12:00 | 10 |
169.17 | | GIDDAY::HAGAN | | Thu Dec 12 1996 15:24 | 10 |
169.18 | see topic 41 for bug reporting guidelines | DECWET::LENOX | | Thu Dec 12 1996 17:04 | 3 |
169.19 | | DECWET::RANDALL | | Thu Dec 12 1996 17:49 | 9 |
169.20 | | RHETT::LOH | | Wed Dec 18 1996 08:44 | 7 |
169.21 | | RHETT::LOH | | Mon Dec 30 1996 11:45 | 7 |
169.22 | | DECWET::RANDALL | | Mon Dec 30 1996 12:54 | 11 |
169.23 | | RHETT::LOH | | Mon Jan 06 1997 08:36 | 36 |
169.24 | | RHETT::LOH | | Mon Jan 06 1997 13:52 | 12 |
169.25 | | CRONIC::LEMONS | And we thank you for your support. | Thu Jan 09 1997 07:44 | 71 |
169.26 | | DECWET::RANDALL | | Thu Jan 09 1997 18:03 | 6 |
169.27 | ANY NEWS ABOUT THIS PROBLEM ??? | OSL09::NILSTAD | | Wed Jan 22 1997 03:14 | 6 |
169.28 | | DECWET::RANDALL | | Wed Jan 22 1997 16:55 | 10 |
169.29 | Having the same issue with the TZ877 | NIOSS1::THOMPSON | | Thu Jan 30 1997 05:43 | 9 |
| Hello !
I'm getting the same thing. I ended up setting the parallelism to 1
and the sessions per device to 1 as well. I am running 13 clients with
the same version. I am using the default for the savegroups and don't
understand what you mean about overlapping. I restarted nsr and it
appears to be working fine now.
Steve
|
169.30 | | DECWET::EVANS | Be a Point Of Light! | Thu Jan 30 1997 10:01 | 8 |
| re: savegroups overlapping...
This means that savegroup #1 has started and is still running (you can
see its saves in the ps output) when savegroup #2 starts up - that is
an "overlap". Note also that savegroup #2 cannot start until savegroup
#1 has finished doing the saveindex, *and* media verification, if
applicable.
FYI
|
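For anyone unsure whether their groups really do overlap, here is a
minimal sketch (plain Python, nothing NetWorker-specific; the group
names, start times and run lengths are invented for illustration) that
compares each group's scheduled start against the longest run observed
for it, through the saveindex and media verification, and flags pairs
whose windows intersect:

    from datetime import datetime, timedelta
    from itertools import combinations

    # start time (hh:mm) and longest observed run in minutes, per group;
    # these figures are made up - substitute your own schedule
    observed = {
        "Default":   ("22:00", 240),
        "Databases": ("23:30", 180),
        "Mailhubs":  ("03:00", 60),
    }

    def window(start, minutes):
        begin = datetime.strptime(start, "%H:%M")
        if begin.hour < 12:                   # early-morning start = "next day"
            begin += timedelta(days=1)
        return begin, begin + timedelta(minutes=minutes)

    windows = {name: window(s, m) for name, (s, m) in observed.items()}
    for a, b in combinations(windows, 2):
        (a0, a1), (b0, b1) = windows[a], windows[b]
        if a0 < b1 and b0 < a1:               # intervals intersect -> overlap
            print(f"{a} and {b} overlap; stagger their start times")

If any pair is flagged, staggering the starts (or letting one group
finish, including saveindex and verification, before the next begins)
removes the overlap described above.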
169.31 | for me too | BACHUS::DEVOS | Manu Devos DEC/SI Brussels 856-7539 | Fri Jan 31 1997 00:03 | 6 |
| Hi,
Just to tell you that I am also getting this "device de-activated" with
DU V3.2D-1 and NSR V4.2B+P1 with two savegroups overlapping.
Manu.
|
169.32 | | DECWET::RWALKER | Roger Walker - Media Changers | Fri Jan 31 1997 10:19 | 11 |
| This is a wild shot, but could somebody with the problem set
the client timeout values to a very high number, such as
2 hours?  This would include the server's own client record.
Any time savegroups overlap, each group expects parallelism
sessions to be available, but since another group is using some
of the available sessions, saves are started but must wait
until the other sessions finish.  This could be tripping the
timeout logic and causing less than clean session terminations,
getting the session counts out of whack between nsrd and
the tape daemons.
|
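To make that theory concrete, here is a rough back-of-the-envelope
model (plain Python; the parallelism, save length and timeout figures
are invented) of two groups competing for the same session slots.
Saves queued in the second wave sit idle for roughly a full save
length before a slot frees up, which can easily exceed a modest client
timeout and trip the logic described above:

    # sessions the server runs at once, groups started together, rough
    # length of one save, and the client timeout - all invented figures
    parallelism    = 4
    groups         = 2
    save_minutes   = 90
    client_timeout = 30

    queued = parallelism * groups        # each group expects its own parallelism
    waves  = -(-queued // parallelism)   # ceiling division: waves of saves
    for wave in range(waves):
        wait = wave * save_minutes       # later waves sit idle this long
        running = min(parallelism, queued - wave * parallelism)
        status = "ok" if wait <= client_timeout else "at risk of timing out"
        print(f"wave {wave + 1}: {running} saves wait ~{wait} min -> {status}")

On this model, raising the client timeout well past the length of one
save, as suggested above, is what should keep the queued saves from
being reaped early.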
169.33 | noope ! | BRSDVP::DEVOS | Manu Devos DEC/SI Brussels 856-7539 | Wed Feb 19 1997 13:19 | 36 |
| Hi Roger,
I finally managed to do the test:
NSR V4.2A + P2 & P3 on DU4.0B - 10 groups all starting at 22:00 -
I changed the timeout to its maximum, which is 1000 minutes, so the
asavegrp commands display on the server as asavegrp -T 60000
(1000 minutes x 60 = 60000 seconds).
I still get the "tape deactivated during save" message and nsr hangs.
I had to unmount and remount the tape to restart the backup.
During my test I could see the message appear in the NetWorker message
window, and surprisingly the session window cleaned up, but the tape
kept going and the "throughput/megabytes saved" figures were still
being updated in the device window for at least ten minutes before the
tape stopped.  The asavegrp processes of all the savegroups stayed
visible.
Unfortunately, I had to migrate to V4.2B because V4.2A with patches
P1, P2 and P3 was not able to back up "All" the savesets of a cluster
service I had just modified to add a second filesystem.  So I took the
opportunity to change the groups to avoid overlapping backup time
windows.
But in summary, two problems remain:
1) V4.2A + P1 + P2 + P3 + cluster service + "All" = only one saveset
   is backed up, unless you list every saveset explicitly in the
   saveset field.
2) V4.2A with overlapping groups and timeout = 1000 minutes still
   gives "device deactivated by save".
Regards, Manu.
|