|
Some 'real-world' experience....
800 users per twin-processor 2100...512MB of memory...IS storage,
well, you can do the math, but remember the magical 30G database
barrier...RAID your disks.
You also want to consider space required in the event you may need to
rebuild your DB.
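To make the "do the math" step concrete, here is a minimal sizing sketch. The 30GB store figure comes from the message above; the per-mailbox quota and the overhead factor (headroom for deleted-item retention, rebuild space, etc.) are invented for illustration:

```python
# Rough Exchange server sizing sketch -- figures are illustrative.
STORE_LIMIT_GB = 30      # the "magical 30G database barrier" quoted above
MAILBOX_QUOTA_MB = 25    # assumed per-user mailbox limit (not from the thread)
OVERHEAD = 1.5           # assumed headroom for rebuilds, deleted items, etc.

def max_users_by_storage(store_limit_gb=STORE_LIMIT_GB,
                         quota_mb=MAILBOX_QUOTA_MB,
                         overhead=OVERHEAD):
    """How many quota-limited mailboxes fit in the IS before the barrier."""
    usable_mb = store_limit_gb * 1024
    return int(usable_mb // (quota_mb * overhead))

users = max_users_by_storage()
```

With these assumed numbers the storage ceiling lands in the same ballpark as the 800-users-per-box CPU/memory figure quoted above, which is why the mailbox quota matters: a bigger quota makes storage, not the processor, the binding constraint.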
Cheers,
Charlie
ps You may want to factor in the addition of file/print servers for
additional storage (you mentioned no cleanup, so users should be
given an Exchange mailbox limit, but they WILL need extra
storage)...i.e. non-Exchange-based data storage and file-server-based
PSTs....the subject of domain controllers etc. is not even
considered here....
pps how about the migration of all this foreign mail to
Exchange...big headache...network issues....MR-to-SMTP gateways (VMS
based)...the list goes on...have fun!
|
| There are many other areas which deserve some attention.
Apart from the 16GB restriction on the Private Information Store (which should
disappear in a post-5.0 release, rising to at least 4TB, although 16TB has been
the figure mentioned for some time), you might want to question the principles
behind large servers.
It is fairly straightforward to use the sophisticated clustering technology of
OpenVMS to build huge ALL-IN-1 clusters capable of supporting tens of
thousands of users with a reasonable element of redundancy. Unfortunately, NT
clustering technology is not really mature enough to offer the same
functionality as OpenVMS clustering (intelligent failover, a distributed lock
manager); in essence it really just provides failover for disk shares. Nor is
Exchange intelligent enough at the moment to make use of any sophisticated
clustering features. All this changes in the future though.
For the moment then, we're left with the prospect of building huge servers
(leaving aside the storage issue). Your number one concern should be the
downtime exposure suffered if one such server fails. You reduce that
exposure by using smaller servers.
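The exposure argument is easy to quantify. A minimal sketch (the user counts are invented for illustration):

```python
# Downtime-exposure sketch: what fraction of the user base goes dark
# when a single server fails, for a given server size.
import math

def failure_exposure(total_users, users_per_server):
    """Return (servers needed, fraction of users lost if one server dies)."""
    servers = math.ceil(total_users / users_per_server)
    return servers, users_per_server / total_users

one_big = failure_exposure(4000, 4000)    # a single large server
many_small = failure_exposure(4000, 500)  # ~500 users per box
```

One big box means a failure takes out everyone; eight 500-user boxes cap the blast radius at an eighth of the population, at the cost of more machines to manage.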
An alternative ploy might be to build a greater number of smaller servers
supporting not more than about 500 users per server. Put these in a site and
think of them as a reduced-function, loosely coupled cluster of Exchange
systems. Also, such hardware is more easily available, cheaper, and easier
to hold hot spares for, so in the event of a hardware failure you can just
swap in a new box or disk or whatever. (Easier to do that than to hold a
spare TurboLaser, but I exaggerate!)
And when NT clustering hits puberty, then you can just roll all of those
existing servers into a nice resilient cluster! (Of course, the hardware
might be out of date by then!)
This really knocks the wind out of an argument for Alpha systems though, so be
careful.
And apart from that sizing aspect, you'll want to consider network topology,
bandwidth, NT domain design, traffic flow patterns, and user group
distribution in order to come up with the best topology for Exchange servers.
(It's fun though!)
Hope this helps.
K
+-------------------------------------------------------------+
| Kieran McCorry, Messaging Consultant |
| European Messaging Integrated Service Team, Dublin |
| Digital Equipment Corporation Tel:+44 860 293814 |
+-------------------------------------------------------------+
| mailto:[email protected] |
+-------------------------------------------------------------+
|