T.R | Title | User | Personal Name | Date | Lines |
3910.1 | | TP011::KENAH | Do we have any peanut butter? | Thu Jun 01 1995 12:33 | 4 |
| Of course, should you attempt to access this Web location, you, like
me, might receive the response: Error 403 -- Forbidden - by rule.
					andrew
|
3910.2 | My favorite magazine... | POBOX::CORSON | Higher, and a bit more to the right | Thu Jun 01 1995 13:22 | 6 |
|
Now *that* is what I call really compelling public relations. This is
the kind of testimonial that will make companies look to us FIRST,
as opposed to after our competitors fail...
the Greyhawk
|
3910.3 | Doing the math... | HLDE01::VUURBOOM_R | Roelof Vuurboom @ APD, DTN 829 4066 | Fri Jun 02 1995 03:07 | 13 |
| 1.2 million accesses over a 7 day period averages to just about
1 access a second. Let's be generous and assume a peak of 10
accesses per second.
Now while it is nice to see that we're throwing all 64 bits
screaming along at several hundred MHz at this, I'm wondering
what prevented a 50 MHz 486 with a large disk cache (limited
number of swimsuits -> limited number of files :-) and a
good networking card from cutting this? Is this press
release hyperventilating, or did I fall asleep in Systems
Architecture 101?
re roelof
|
3910.4 | Both the math and the assumptions are flawed | CHEFS::RICKETTSK | Rebelwithoutapause | Fri Jun 02 1995 04:45 | 35 |
| > 1.2 million accesses over a 7 day period averages to just about
> 1 access a second. Let's be generous and assume a peak of 10
> accesses per second.
Hmm. 60 seconds/min, 60 mins/hr, 24 hrs/day, 7 days.
60 x 60 x 24 x 7 = 604,800 seconds in a week. 1.2 million accesses
averages 2 per second, not 1.
I think it would be a very poor assumption that these accesses were
anything like evenly spread over that week. They probably started off
slow, and speeded up as word got around. The geographical distribution
of those connecting was probably far from evenly spread around the
globe; I would guess a preponderance of US viewers, a smattering from
Europe and Asia, and very few indeed, in terms of actual numbers, from
elsewhere. While some hackers might have been looking at the site at
five in the morning, I bet most of the accesses came during the day and
evening in the US. The peaks would be 'smeared out' a bit because of
the number of time zones there, but far from flattened. I wouldn't be at
all surprised if the peak access rate exceeded 100 per second.
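A quick Python sketch of that arithmetic (a throwaway illustration; the
100-per-second peak is just my guess above, not anything measured):

    # Back-of-envelope check of the figures above. The 100 hit/s peak
    # is the guess from the previous paragraph, not a measurement.
    hits = 1200000
    week_seconds = 60 * 60 * 24 * 7          # 604,800 seconds in a week
    average_rate = hits / week_seconds       # roughly 2 hits per second
    guessed_peak = 100                       # hits per second
    concentration = guessed_peak / average_rate
    print("%.1f hits/s average; a %d hit/s peak is %.0fx the average"
          % (average_rate, guessed_peak, concentration))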
> Now while it is nice to see that we're throwing all 64 bits
> screaming along at several hundred MHz at this, I'm wondering
> what prevented a 50 MHz 486 with a large disk cache (limited
> number of swimsuits -> limited number of files :-) and a
> good networking card from cutting this? Is this press
> release hyperventilating, or did I fall asleep in Systems
> Architecture 101?
The peaks in demand prevented a 486 handling it. On average, I eat
say (I haven't weighed it!) 2lbs of food a day. If I started a week by
eating 1 oz on Monday, 3 oz on Tuesday etc., then tried to cram the
remaining 7lbs down at teatime on Sunday, I think I'd choke on it; even
though my *average* weekly consumption might be exactly the same as normal.
Ken
|
3910.5 | Revised Math and Assumptions, Same Conclusion? | HLDE01::VUURBOOM_R | Roelof Vuurboom @ APD, DTN 829 4066 | Fri Jun 02 1995 08:54 | 17 |
| Revising the estimates upward:
Let's assume that we limit things to a (US) 8-hour day. Then
we have 56 x 3600, which is around 200,000 seconds, giving
an average of 6 accesses per second with a peak of perhaps
40. I assume very few people downloaded the movies and audio
sections but limited themselves to the JPEGs/GIFs, which probably
average less than 100K. You would need an I/O
throughput of 600K per second minimum, 2 MB per second to be
on the safe side - and here we're assuming every hit is a GIF/JPEG,
which it most definitely isn't. You've got normal HTML files and icons,
which each represent a few hundred to a few thousand bytes.
The 486 shouldn't really need to do that much; after all, it's
just more or less acting as a fancy I/O controller...
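For what it's worth, here is the same estimate as a quick Python sketch;
the hit rates, the 100K image size, and the half-images/half-HTML mix are
assumptions, not measurements:

    # Rough I/O requirement implied by the estimates above.
    # Every figure here is an assumption, not measured data.
    avg_hits, peak_hits = 6, 40              # hits per second, from the estimate
    image_bytes, html_bytes = 100000, 2000   # ~100K per JPEG/GIF, a few KB per HTML/icon
    image_fraction = 0.5                     # hypothetical: half the hits are images
    bytes_per_hit = image_fraction * image_bytes + (1 - image_fraction) * html_bytes
    print("%.1f MB/s average" % (avg_hits * bytes_per_hit / 1e6))    # ~0.3 MB/s
    print("%.1f MB/s at peak" % (peak_hits * bytes_per_hit / 1e6))   # ~2 MB/s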
Now, a 486 may or may not cut it, but my question remains... was a
64-bit, several-hundred-MHz machine really needed?
|
3910.6 | | ATLANT::SCHMIDT | E&RT -- Embedded and RealTime Engineering | Fri Jun 02 1995 09:24 | 21 |
| Ultimately, any machine that could keep the pipe (between the
web server and the Internet) full would suffice. So, in my mind,
that raises two questions:
o What was the pipe between the servers and the net?
A single T1 link? Several? T3???
o How big was the entire dataset? Could a machine store
the entire thing in main memory? ...32-bit-addressed
main memory?
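On the first question, here is a rough Python comparison of common pipe
sizes against the ~2 MB/s safe-side figure from .5 (nominal link rates;
all of this is illustrative guesswork, not knowledge of their setup):

    # Nominal link capacity vs. the ~2 MB/s peak guessed in .5.
    links_mbit = {"T1": 1.544, "Ethernet": 10.0, "T3": 44.736}
    peak_bytes_per_sec = 2000000             # the safe-side estimate from .5
    for name, mbit in links_mbit.items():
        capacity = mbit * 1e6 / 8            # bits/s -> bytes/s
        verdict = "enough" if capacity >= peak_bytes_per_sec else "not enough"
        print("%-8s %5.2f MB/s  %s for that peak" % (name, capacity / 1e6, verdict))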
Personally, I'd bet that Roelof is right; an x86 with enough
main memory, bus bandwidth, and appropriate programming could
have done the job.
But in a larger sense, what does it matter? That wasn't the
solution they chose. For whatever reason, they used OUR machine
so let's trumpet that fact while it's useful. It'll be passé
soon enough!
Atlant
|
3910.7 | | VNABRW::50008::BACHNER | | Fri Jun 02 1995 10:33 | 10 |
| > Personally, I'd bet that Roelof is right; an x86 with enough
> main memory, bus bandwidth, and appropriate programming could
> have done the job.
O.k., let's assume an x86 could handle the traffic. So why couldn't the Sparc
servers, as mentioned in the preface?
Hans.
PS: next time we might try it with a 21086 chip to delight the x86 fans ;-)
|
3910.8 | | ATLANT::SCHMIDT | E&RT -- Embedded and RealTime Engineering | Fri Jun 02 1995 10:42 | 5 |
| Did I mention "properly programmed"? That doesn't necessarily
mean "using the standard Unix that comes with the machine."
Perhaps Unix added enough overhead to require more CPU.
Atlant
|
3910.9 | Ahh, thats it! | HLDE01::VUURBOOM_R | Roelof Vuurboom @ APD, DTN 829 4066 | Fri Jun 02 1995 10:50 | 18 |
| > But in a larger sense, what does it matter? That wasn't the
> solution they chose. For whatever reason, they used OUR machine
> so let's trumpet that fact while it's useful. It'll be passé
> soon enough!
Thanks for helping me realize why I brought this up. I usually
have a very good reason but I don't always know what it is :-)
The larger sense is that it _does_ matter if you promote 64
bittedness in an area where 32 bittedness will do nicely, thank
you very much. What happens is that you are then promoting a data
point that says that 64 bits _doesn't_ matter and that's what
was (subconsciously) bothering me. From my viewpoint the press
release in itself was good marketing but the bit emphasizing 64 bits wasn't.
Bottom line is that we need to be selective when mentioning 64 bitsiness.
re roelof
|
3910.10 | Have you tried logging in lately?... | POBOX::CORSON | Higher, and a bit more to the right | Sat Jun 03 1995 11:58 | 12 |
|
Hold it guys -
Agency.com is handling megafiles with full-motion video, sound,
still pics, and text. My PC (Pentium at 90 MHz w/PCI) would choke on
this - not on speed, but on the sheer MB to load onto disk within my
lifetime - also nobody's loading ONE file, everybody's grabbing 'em
all. Bet it's a SONET connection into the server; the pipe must be
huge... the damn thing is always busy...
the Greyhawk
|
3910.11 | | MRKTNG::SLATER | New DTN 381-2445 as of 4/24/95 | Sun Jun 04 1995 14:48 | 9 |
| I've not seen the data, although I'd like to. Based on other customer sites
whose data I've seen, a more reasonable assumption about the distribution of
accesses would be that 50% of the hits come in between 5:00pm and 9:00pm East
Coast time (college kids, Digits after work).
This would compute to about 6 hits per second (1200000 / (7*2*4*3600)).
The I/O requirements and the network cards would be the limiting resources.
MS
|
3910.12 | Various limits - various solutions | SKIBUM::GASSMAN | | Tue Jun 13 1995 14:27 | 14 |
| The technical limits on a web server are several, and depend on what is
being done. If a lot of connections per second are being made, the actual
TCP/IP code can be the limit. One study I saw put TCP/IP stacks based
on BSD Unix (probably most of them) at about 150 connections/second. If
Digital's UNIX can go faster than that, that's a bonus. The next
limiting factor is disk I/O. Bus and disk speed, as well as caching
schemes, would make the difference. CPU speed is only a big factor when
there is a lot of background processing going on. Using CGI scripts
with extensive database lookups would be an example where a 64-bit
processor might help. Communications can of course be a factor, but an
FDDI controller can probably get the bits out the back of the machine
fast enough, pushing the problem out to the technology in the network itself.
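To make that concrete, here's a toy Python model of which limit bites first;
apart from the ~150 connections/second BSD figure, every number in it is a
guess for illustration:

    # Toy model of the limits above. Only the 150 conn/s BSD figure comes
    # from this note; the disk path, FDDI rate and 50 KB average object
    # size are made-up assumptions.
    avg_object_bytes = 50000
    tcp_limit  = 150                              # connections/second (BSD-based stacks)
    disk_limit = 5000000 / avg_object_bytes       # assume a ~5 MB/s disk path -> 100 hits/s
    fddi_limit = (100e6 / 8) / avg_object_bytes   # FDDI ~100 Mbit/s           -> 250 hits/s
    for name, limit in (("disk I/O", disk_limit), ("TCP/IP", tcp_limit), ("FDDI", fddi_limit)):
        print("%-8s saturates around %3.0f hits/s" % (name, limit))
    # With these guesses the disk path and the TCP stack give out long
    # before the FDDI controller does, which is the ordering suggested above.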
bill
|