T.R | Title | User | Personal Name | Date | Lines |
---|
680.1 | | DECWET::ONO | Software doesn't break-it comes broken | Wed May 14 1997 16:42 | 34 |
| Scott,
NetWorker for Digital UNIX V4.3 greatly improves local backup
performance over previous versions. It will ship on the July
1997 LPCD.
Given the configuration you have, I expect that 100 GB/hour
should be achievable. You might even hit 150 if you spread the
source data out enough.
Now, to your assumptions:
> If we can get enough file sets tokeep all 6 drives streaming we can
> expect say 6Gbytes/sec on each drive compressed.
Sounds reasonable. This assumes a 1.5:1 compression ratio.
> That the pre and post processing on a 8 CPU 8400 maight not take very
> long.
Don't know how long it takes to shut down / start up a Progress
database.
> That 4 pairs of HSZ50s can feed ~40Mbytes/sec
Sounds reasonable, given how you have the disks spread across the
adapters. You need to be pulling data from all four pairs of
HSZs, though. If you overload one pair and underutilize the others,
you may fall short of the throughput the configuration is otherwise
capable of.
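As a rough sanity check, here is a back-of-the-envelope Python sketch of the
bandwidth budget. It is only illustrative: the drive and HSZ figures are the
ones quoted above, and the assumption that reads split evenly across the four
pairs is mine.

    # Back-of-the-envelope bandwidth budget -- illustrative only.
    DRIVES          = 6       # TZ89 drives in the TL896
    MB_PER_DRIVE    = 6.0     # MB/sec per drive, compressed stream
    HSZ_PAIRS       = 4
    SOURCE_MB_TOTAL = 40.0    # ~40 MB/sec aggregate from the 4 HSZ50 pairs

    need     = DRIVES * MB_PER_DRIVE   # 36 MB/sec to keep all drives busy
    per_pair = need / HSZ_PAIRS        # ~9 MB/sec from EACH pair, if the
                                       # reads are spread evenly

    print("tape side needs   %.0f MB/sec" % need)
    print("disk side offers  %.0f MB/sec" % SOURCE_MB_TOTAL)
    print("each HSZ pair must deliver ~%.0f MB/sec" % per_pair)
    print("headroom          %.0f MB/sec" % (SOURCE_MB_TOTAL - need))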
Regards,
Wes
|
680.2 | | KITCHE::schott | Eric R. Schott USG Product Management | Fri May 16 1997 09:54 | 22 |
| >
> Assumptions:
>
> If we can get enough file sets tokeep all 6 drives streaming we can
> expect say 6Gbytes/sec on each drive compressed.
Do you think this is 6MB/sec?
My napkin says you need 30 MB/sec to achieve 100GB in 1 hour.
The other issue is: what is the state of the database during
the backup?
>
> That the pre and post processing on a 8 CPU 8400 maight not take very
> long.
>
> That 4 pairs of HSZ50s can feed ~40Mbytes/sec
>
> Is the above reasonable or have I got this very wrong.
>
> Thanks & Regards Scott
|
680.3 | math facts | DECWET::EVANS | NSR Engineering | Fri May 16 1997 12:22 | 13 |
| we discovered that one needs to multiply X MB/sec by about 3.515 to get GB/hr,
since a GB is 1024 MB, not 1000 MB. I think the long form looks like:
    X MB     1 GB      3600 sec
    ----  * -------  * --------   =   Y GB/hr
    sec     1024 MB       hr

    X * 3600
    --------  =  Y GB/hr
      1024
This made it easy for us to estimate GB/hr when we were doing our
performance tests.
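The same arithmetic in a short Python sketch (the sample rates below are
just illustrative inputs, not measurements):

    # MB/sec -> GB/hr, using 1024 MB per GB as above.
    def gb_per_hr(mb_per_sec):
        return mb_per_sec * 3600.0 / 1024.0   # the ~3.515 multiplier

    for rate in (30, 36, 40):                 # illustrative MB/sec figures
        print("%3d MB/sec  ->  %6.1f GB/hr" % (rate, gb_per_hr(rate)))
    # 30 -> 105.5, 36 -> 126.6, 40 -> 140.6 GB/hr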
|
680.4 | TL896 is 6 drive silo... thanks | AUSSIE::MCNAMARA | I can smile but my tail don't wag | Sun May 18 1997 20:20 | 13 |
| G'Day All,
Thanks for your replies.
My assumptions are as follows:
The TL896 has 6 TZ89 drives. If we can get all 6 drives streaming, then:
6 drives @ 6 Mbytes/sec = 36 Mbytes/sec
36 Mbytes/sec * 3600 sec/hr = 129,600 Mbytes, which is approx 125 Gbytes/hr.
Thanks & Regards Scott
|
680.5 | | USCTR1::lexser24.lex.dec.com::Ascher | yadda | Tue May 20 1997 12:55 | 10 |
|
Is there some upper limit on what we believe NetWorker can actually push
through a system? I.e., assuming there is enough I/O bandwidth, what is the
best performance that can be expected from NetWorker, say in V4.3?
And do we have any characterization of the amount of CPU that driving
100 GB/hr might take? 200 GB/hr? 500 GB/hr?
thanks,
d
|
680.6 | Some datapoints | DECWET::FARLEE | Insufficient Virtual um...er.... | Wed May 21 1997 10:56 | 22 |
| Yes, we do have some preliminary information on that now, and with each
round of refinement we're coming up with ways to make it more efficient.
On an 8400:
The most we've seen ANY software push through the benchmark system
(an 8400 with eight 330 MHz CPUs, 8 GB of memory, and LOTS of I/O capacity)
is just over 500 GB/hr. I'm still trying to figure out where that limit
comes from. This includes benchmarks from NetWorker V4.3, Alexandria,
OpenVision, and several others.
At a data rate of 500 GB/hr, NetWorker consumed approximately 27% of the
8 CPUs.
This is a local, cold backup of raw partitions.
One smaller data point: last week in Berlin I did a demo using an AlphaServer
1000A with a couple of shelves of RZ29 disks and a TL894 library. We achieved
data rates of approximately 50 GB/hr using about 40% of its one CPU.
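To put some rough numbers on the CPU question in .5, here is a small Python
sketch derived from the two data points above. It assumes CPU cost scales
linearly with data rate, which is an assumption, not something we have
measured.

    # CPU-seconds per GB, derived from the two measured data points.
    def cpu_sec_per_gb(utilization, ncpus, gb_per_hr):
        return utilization * ncpus * 3600.0 / gb_per_hr

    big   = cpu_sec_per_gb(0.27, 8, 500.0)  # 8400:  ~15.6 CPU-sec per GB
    small = cpu_sec_per_gb(0.40, 1,  50.0)  # 1000A: ~28.8 CPU-sec per GB

    # Extrapolated load on the 8-CPU 8400 at the rates asked about in .5,
    # assuming linear scaling:
    for rate in (100, 200, 500):
        pct = big * rate / 3600.0 / 8 * 100.0
        print("%3d GB/hr -> ~%4.1f%% of 8 CPUs" % (rate, pct))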
Does this help?
Kevin
|
680.7 | Another data point | DECWET::KOWALSKI | Official Beer Test Dummie | Mon May 26 1997 10:32 | 12 |
| And I did a demo at the same time as Kevin, but in Ayr.
8400 with 4 400 MHz CPUs, 4 GB memory
2 TL894's with each TZ89 drive (8 total) on a separate KZPSA
36 RZ29's, 6 drives per KZPSA
Backing up the drives as raw partitions, we achieved 260 GB/hr. The
average CPU usage was ~60%; however, this was a very quick setup of the
just-uncrated components and essentially NOTHING was done to optimize
the throughput other than the cabling implied above.
Mark
|
680.8 | interesting | USCTR1::ASCHER | Dave Ascher | Thu May 29 1997 14:04 | 27 |
| re: <<< Note 680.7 by DECWET::KOWALSKI "Official Beer Test Dummie" >>>
. -< Another data point >-
.
. And I did a demo at the same time as Kevin, but in Ayr.
.
. 8400 with 4 400 MHz CPUs, 4 GB memory
. 2 TL894's with each TZ89 drive (8 total) on a separate KZPSA
What is a TL894, in terms of speed?
. 36 RZ29's, 6 drives per KZPSA
.
. Backing up the drives as raw partitions, we achieved 260 GB/hr. The
. average CPU usage was ~60%; however, this was a very quick setup of the
. just-uncrated components and essentially NOTHING was done to optimize
. the throughput other than the cabling implied above.
.
I gather that there is some kind of widely held assumption that backing up
raw partitions provides a significant boost in throughput for these kinds
of tests? Is that part of how Netview got its famous 500 GB/hr? Or was that
primarily due to using a tape technology with faster transfer speeds?
What else might be done to optimize the throughput?
d
|
680.9 | | DECWET::FARLEE | Insufficient Virtual um...er.... | Thu May 29 1997 14:53 | 15 |
| >What is a TL894?
It is a successor to the TL810/812, with 4 TZ89 drives and 48 tapes.
The TZ89 will move data significantly faster than previous generations
of DLT drives (6-7 MB/sec and more).
>Raw partitions faster?
Raw partitions are used for two reasons. First, it accurately simulates
how a customer would back up a large database built on raw partitions
(which is usually Oracle's recommendation). Second, it is more efficient:
you avoid the UBC buffer cache, which is not helpful to us, and you also
avoid the work of walking and indexing the filesystem structure.
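For what it's worth, here is a minimal Python sketch of the kind of raw
sequential read this implies. It is purely illustrative -- the device name
is made up, and this is not what NetWorker actually does internally.

    # Illustrative only: large sequential reads from a raw (character)
    # device, which bypass the UBC and any filesystem traversal.
    import os, time

    RAW_DEV = "/dev/rrz16c"     # hypothetical raw partition; substitute your own
    CHUNK   = 256 * 1024        # 256 KB reads

    fd    = os.open(RAW_DEV, os.O_RDONLY)
    total = 0
    start = time.time()
    while True:
        buf = os.read(fd, CHUNK)
        if not buf:
            break
        total += len(buf)
    os.close(fd)

    secs = time.time() - start
    mb   = total / 1048576.0
    print("read %.1f MB in %.1f sec (%.1f MB/sec)" % (mb, secs, mb / secs))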
Kevin
|
680.10 | Network backup is a different story | DECWET::KOWALSKI | Official Beer Test Dummie | Thu May 29 1997 16:24 | 10 |
| And remember, these data points are for backup of the NetWorker server
by itself, NOT network client backups. The NetWorker team's objective
with V4.3 for DU was to optimize backup of servers with large
databases. The conditions you're seeing listed (backup of the server to
itself, raw partitions (i.e., Oracle hot or cold backup), minimized file
indexing) are typical for that type of enterprise-serving system. Those
conditions do happen to lend themselves to higher throughput than the
alternatives; however, the performance effort didn't start with the idea
that our "thumb would be on the scale" by using them. They were used
because they were the market requirements.
|
680.11 | sounds pretty good... | USCTR1::canb22.nio.dec.com::Ascher | yadda | Fri May 30 1997 08:04 | 11 |
|
After re-reading the previous notes it sounds a lot like there has been a
benchmark using NSR V4.3 that actually achieved 500 GB/hr... same as
Alexandria and OpenVision and 'several others'. Have we ever announced that?
It's news to me... and I don't know anybody else who can recall hearing
about it.
We ALL know about the OpenVision benchmark results of 500 GB/hr with fairly
low CPU utilization. It might be useful to hear what NSR's CPU utilization
was in the benchmark scenario... if that is not too sensitive.
d
|
680.12 | | DECWET::FARLEE | Insufficient Virtual um...er.... | Fri May 30 1997 11:36 | 12 |
| I announced these results at the UNIX symposia in Nashua and in Berlin.
The test results were taken the week before the Nashua symposium.
The CPU load for the cold backup (500 GB/Hr) was 27%. I am currently testing
code which may drop that by a bit. I should know by next week.
I'm sure that there will be a whitepaper forthcoming, but it takes an
amazingly frustrating amount of time to actually get something like that
out. The last whitepaper I wrote in November didn't see the light of day
until mid-January...
Kevin
|
680.14 | sounds good to me.. | USCTR1::canb22.nio.dec.com::Ascher | yadda | Fri May 30 1997 12:07 | 7 |
| Congrats on the benchmark. Is it reasonable to assume that the hardware
platform was pretty much the same as that used for the OpenVision benchmark?
Is there some place I could see what you announced at the UNIX symposium?
thanks much...
d
|
680.15 | | DECWET::FARLEE | Insufficient Virtual um...er.... | Fri May 30 1997 16:52 | 6 |
| Both benchmarks were done on the very same hardware.
I'll see about publishing my slides along with enough notes for them to
make sense (I don't believe in just reading slides in presentations, so
the slides aren't necessarily self-explanatory).
Kevin
|