
Conference kernel::csguk_systems

Title:CSGUK_SYSTEMS
Notice:No restrictions on keyword creation
Moderator:KERNEL::ADAMS
Created:Wed Mar 01 1989
Last Modified:Thu Nov 28 1996
Last Successful Update:Fri Jun 06 1997
Number of topics:242
Total number of notes:1855

45.0. "ULTRIX crib sheets and info." by COMICS::TREVENNOR (A child of init) Wed Jul 05 1989 13:35

    
    "Our Goal is to have the very best UNIX ... and support it heartily"
     - Ken Olsen 
    Jun 1988.
    
    
    
    One evening this week an engineer called into the building at about
    17:30 and asked for some help with retrieving an Ultrix error log.
    
    I gather that he was passed around and eventually ended up talking
    to the one person who had stayed late to work in the Ultrix TSC.
    
    As it later turned out, all he wanted to know was that you have to
    type:
    
    uerf -o full
    
    to see the contents of the Ultrix error log.
    
    So that this doesn't happen again the replies to this note are:
    
    .1   a crib sheet (copy to whoever you like) that provides the
         available options to uerf.
    
    .2   A detailed overview of how Ultrix error logging works - including
         how you configure it and the files which are involved. 
    
    If you need any more info on Ultrix please come down and see Alwyn
    Bradley or myself. The last time we bit the heads off any babies was
    weeks ago........ 
    
    I guess we need to get this Ultrix presentation arranged....
    
    Regards
    Alan Trevennor
    PTG (Ultrix)
45.1. "How to get errorlogs from Ultrix (V2.0 or later)." by COMICS::TREVENNOR (A child of init) Wed Jul 05 1989 13:39 (97 lines)
                           ULTRIX UERF crib sheet.
    

    uerf command options - examples in the comment field on the right. 

    The error log report formatter is called uerf - anyone can run it
    (ie you don't have to be god on the system).
    
    You provide a number of 'options' to the command to make it display
    selectively. An option is a '-' sign followed by one or more letters.
    
   /etc/uerf       -x           # negate all other supplied options
                                # EXAMPLE:   /etc/uerf -D -x
                                # means display all errors except disks

                   -R           # Display newest errors first
                   -n           # Display errors as they occur
                   -h           # show available uerf options

                   -H hostname  # Display errors from node hostname
                                # EXAMPLE:    /etc/uerf -H PERCY
                                # Means display all errors which have been
                                # logged which originated from node PERCY.

                   -A           # Include hardware adapter errors
                      aie       # BVP controller (BI disk controller).
                      aio       # BVP controller (BI disk controller).
                      bla       # BI LESI adaptor
                      bua       # BI to UNIBUS adaptor
                      nmi       # Nautilus memory interconnect
                      uba       # VAX UNIBUS adaptor
                                # EXAMPLE:   /etc/uerf -A bua,bla
                                # means display all bua and bla errors.

                   -c           # Select specific classes of errors
                      err       # Selects error reports from h/w or s/w
                     oper       # Selects only operational events

                   -D           # Includes disk errors.
                      rd54      # Include disk errors on RD54 type drives
                      ra80      # Include disk errors on RA80 type drives
                      ra60      # Include disk errors on RA60 type drives
                      etc       # Any other DSA compatible drive type
                                # EXAMPLE:   /etc/uerf -D ra60
                                # means display all errors logged on ra60's.
                                # EXAMPLE:   /etc/uerf -D -o full
                                # means display all disk errors in full format

                   -f filename  # Allows you to display errors from any
                                # current or archived error log file.

                   -M           # requests that only errors to do with the
                      cpu       # central hardware elements of the machine
                      mem       # be displayed. The memory and the CPU..
                                # EXAMPLE:   /etc/uerf -M mem -o full
                                # means display all memory errors in full

                   -o           # Selects output format. brief is the
                      terse     # default. Beware non-selective use of
                      brief     # full, it can generate a long listing!
                      full      #

                   -O           # Operating system detected errors.
                      aef       # Arithmetic exceptions (can be h/w or s/w)
                      ast       # async trap exception faults (h/w or s/w)
                      bpt       # Break point instruction fault (often h/w)
                      cmp       # Compatibility mode faults (h/w possibly s/w)
                      pag       # Page fault exceptions (h/w)
                      pif       # Privileged instruction faults (h/w or s/w)
                      pro       # protection faults (h/w or s/w)
                      ptf       # Page table faults (h/w or corrupt s/w)
                      raf       # Reserved address Faults (h/w or corrupt s/w)
                      rof       # Reserved Operand Faults (corrupt s/w usually)
                      scf       # System Service Call Exception faults (either)
                      seg       # Segmentation Faults (h/w usually)
                      tra       # Trace exception faults (h/w or s/w)
                      xfc       # Xfc instruction failed (h/w)
                                # EXAMPLE:   /etc/uerf -O seg -o full
                                # Means display full details of all 
                                # segmentation faults.

                   -r n         # Selects errors by record type - see manual.

                   -s  nn-nn    # Each error is assigned a number when it
                                # is logged. This option selects a specific
                                # number or range of numbers to be reported

                   -T           # Includes tape errors in report
                      tk50      # Include errors which occurred on tk50
                      tu81      # Include errors which occurred on tu81
                       etc      # Any other Digital tape drive type                    

                   -t           # Selects by absolute or range of time.



    
45.2. "Ultrix error logging description and overview." by COMICS::TREVENNOR (A child of init) Wed Jul 05 1989 13:40 (423 lines)
                            Ultrix Error logging
    
    Ultrix error logging.
    
       A system which cannot provide information to assist hardware 
    engineers in fault diagnosis does little to enhance customer 
    confidence. Consequently, considerable effort was put into providing 
    full error logging for version 2.0 of Ultrix-32.  
    
      The components of Ultrix which are capable of detecting hardware 
    errors are principally the device drivers, but the fault detection 
    features built into VAX hardware can also cause an error to be logged 
    by means of an interrupt. An example of this is a hardware detected 
    memory error. 
    
      Figure one shows the overall concept of error logging under Ultrix
    and shows how the three major programs in the errorlogging suite 
    interact together. These three programs are:
    
    /etc/elcsd
    /etc/uerf
    /etc/eli
    
    The software components of the system which can detect or declare 
    errors place packets of binary information detailing the error (or 
    system event) into the kernel errorlog buffer. Let's look at these three 
    programs in more detail.
    
    The elcsd daemon.   
    
      The file /etc/elcsd (ErrLog Copy/Server Daemon) is the Ultrix error 
    logging daemon. It is run at system startup time via an entry in the 
    system startup script "/etc/rc". It takes errorlog packets placed in a 
    memory area known as the kernel errorlog buffer (identified by the 
    character special file /dev/errlog) and writes them into the error log 
    file of your choice. 
    
       Using the system timer mechanism, elcsd wakes up at regular 
    intervals and scans the errorlog buffer for new packets. Any new 
    packets found are appended to the currently selected error log file 
    (see later). The error log file need not be resident on the local node, 
    it can be on a disk attached to another node in the same network.
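
       To confirm that the daemon is actually running on a node, and that
    the errorlog buffer device is present, something like the following
    can be used (a minimal check, assuming the BSD-style ps shipped with
    Ultrix):

    ps ax | grep elcsd           # the daemon should appear in the list
    ls -l /dev/errlog            # the kernel errorlog buffer special file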
    
    
    Configuring error logging.
    
       So, how do you set up your error logging to best effect? The 
    operational parameters for error logging are specified by the contents 
    of a file called 
    
       /etc/elcsd.conf
    
       The contents of this file are the parameters which define the error 
    logging daemon's aims in life. Ultrix allows you to centralise error 
    logging for all the Ultrix nodes in a network on one node. Implicit in 
    this is the ability of any node to log errors locally, to log errors 
    received from any other node in the network, or to send details of its 
    own errors to any other node. 
    
       Figure two shows a network of three VAX machines connected via an 
    ethernet. Let's assume that the node GORDON is a VAX 8350, and the node 
    JAMES is a microVAX II and that the node THOMAS is a VAXstation 2000 
    series machine. To simplify the task of keeping tabs on the errorlogs 
    the person who is responsible for managing the machines in this set up 
    designates GORDON as the error logging node. 
    
       To effect this, the network manager can select the option of having 
    details of all errors logged into the standard error log file on GORDON 
    which is:
    
    /usr/adm/syserr.GORDON
    
    Alternatively a separate file can be created on GORDON's file system to 
    contain the log for each node. These will be:
    
    /usr/adm/syserr.JAMES
    /usr/adm/syserr.THOMAS
    
    The way that this works is via an entry in the file:
    
    /etc/elcsd.conf
    
    When the elcsd daemon in each node starts up, it reads the contents of 
    this file and takes the third parameter in it as the pathname to the 
    primary error log file. The seventh parameter in the file specifies the 
    name of the node where this file exists. If the seventh parameter is 
    blank (as it is just after installation of Ultrix) then the file is 
    assumed to be on the local file tree.
    
       Let's now look in detail at the eight parameters in the elcsd 
    configuration file and what they mean. 
    
    1)  Status parameter one is a bitwise integer with a value from 0-7 
        where the bits may be set in any sensible combination. The bits are 
        used as follows:

	+-------+-------+-------+
	|   4   |   2   |   1   |   <- Bit weight
	+-------+-------+-------+
	    2       1       0        <- Bit number

	Bit 0 (weight 1): if set, errors occurring on the local node are
			  to be logged.
	Bit 1 (weight 2): if set, errors received from elcsd daemons on
			  remote machines are to be logged.
	Bit 2 (weight 4): if set, errors are also logged to an errorlog
			  file on the remote system named in parameter 7.
 
Examples: Set to 5 means errors occurring on this machine are to be
	  logged to both the local error log file and the remote one
          named in parameter 7.
 
	  Set to 2 means that only errors received from elcsd daemons on
	  remote machines are to be logged. 
     
    
    2) Parameter two is the maximum size (in bytes) to which you wish to 
    limit the error log file. If you leave this parameter blank then the 
    error log will expand forever, or at least until the file system is 98% 
    full! The size you choose should reflect how often you intend to check 
    and clear the error log for your system or network. 
    
    3) The error log directory path. This is the pathname to which details 
    of errors will be sent for later retrieval and output. The pathname you 
    specify here can be anywhere on your file tree, but remember that when 
    you run the error log report formatter  it assumes by default that 
    error logs are in the file:
    
    /usr/adm/syserr.HOSTNAME
    
    4) Ultrix allows you to specify a secondary - or backup - error log 
    file to be used in the event that the primary file (specified in 
    parameter 3) is not usable.  If, for example, you have a two drive 
    system and your primary error log file is on one disk you could put 
    your backup error log file on the other drive. This would ensure that 
    if the first disk ever became inaccessible, the details of the error 
    would still be logged on the other disk.
    
    5) This parameter provides the pathname of a file to which errors are 
    logged when the system is in single user mode. This is different to the 
    file specified in parameter 3, because you need to ensure that the file 
    system will be mounted in single user mode. When single user mode is 
    exited any errors logged into this file are appended to the main file 
    specified in parameter 3. 
    
    6) This parameter defines where incoming error logs from remote systems 
    are to be logged. Note that this is a pathname prefix, not a complete 
    file name; the remote HOSTNAME is always appended to it. So if this 
    parameter were set to:
    
    /etc/errlogs
    
    then error logs from node THOMAS will be placed in the file:
    
    /etc/errlogs.THOMAS
    
    7) If you are sending details of errors logged on your system to a 
    central logging node, then parameter 7 should be set to the name of the 
    machine - in our previous example this will be GORDON.
    
    8) If your node is logging errors on behalf of other nodes then the 
    names of these should appear as parameters 8 through n. On the same 
    line as each node name should appear a colon, followed by either a 
    letter S or a letter R. R means that a file called:
    
    /etc/syserr.REMOTES
    
    is to be used as the errorlog file (so that all remote node errorlogs 
    can be grouped into one file). S means that a separate file is created 
    on the central node for each node whose errors are being logged. In our 
    example network these files would be:
    
    /etc/syserr.JAMES
    /etc/syserr.THOMAS
    
    and parameter 8 will look like this:
    
    JAMES:S
    THOMAS:S
    
    
    The elcsd.log file.
    
     It is quite possible for the elcsd daemon to encounter problems whilst 
    going about its business. For example, a file system may become full, 
    or a disk may suddenly drop offline due to a hardware fault. Under such 
    exceptional circumstances the continuation of error logging may become 
    impossible. In order to attempt to notify the system manager of a 
    cessation of normal error logging, elcsd will try to place an 
    appropriate error message into:
    
    	 /usr/adm/elcsdlog
    
    which is elcsd's log file. This file is cleared at every system reboot. 
    The messages also go directly to the terminal of any user whose name 
    appears in the file:
    
    	 /etc/syslog.conf
    
    (reference syslog(8)). The contents of the file are split into two 
    areas, separated by a blank line. The entries above the blank line 
    specify to which files the system event logs will be copied. The lower 
    half of the file consists of a list of user names. Each user named will 
    receive system messages which fall into the "alert" category. Almost 
    all the messages which elcsd issues on its own behalf have system alert 
    status. An example of a message issued by elcsd is:
    
         exceeded error log file limit size, error not logged to disk!
    
       When you get messages directly from elcsd you should take prompt and 
    effective action. If there is a severe hardware problem preventing disk 
    access then data may be being lost. If the errorlog file is full then 
    you may be logging lots of recoverable errors on a rapidly degrading 
    disk, which you ought to be trying to take a tape dump of before total 
    failure occurs. In any case elcsd is usually saying that system error 
    events are not being logged, and that what may later prove to be vital 
    information is being lost.
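
       When one of these messages appears a couple of quick checks are
    usually worthwhile (a minimal sketch - the paths are those described
    above):

    tail /usr/adm/elcsdlog       # the most recent messages from elcsd
    df                           # has a file system filled up?
    ls -l /usr/adm/syserr.*      # how large has the error log grown?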
    
       Having looked at how the details of errors are logged by Ultrix, 
    let's now look at the methods available to collate and display this 
    information into reports. These reports will be mainly of use to your 
    field service engineer who has to rectify the problem which occurred.
    
    
    Obtaining error log reports.
    
       The program used to read the binary contents of the error log file 
    and turn it into a report is called uerf (Ultrix Error Report 
    Formatter). You do not need to be superuser to use uerf. By default 
    uerf lives in:
    
       /etc/uerf
    
    The command:
    
       /etc/uerf -h 
    
    provides a summary of the options available for use with uerf. Also in 
    the /etc directory are three supporting files. These are:
    
       /etc/uerf.bin	     # database of event information
       /etc/uerf.hlp	     # a text file giving brief help on uerf
       /etc/uerf.err	     # uerf's private error log (see later)
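
    A quick way to confirm that uerf and all three supporting files are
    where you expect them to be is simply:

       ls -l /etc/uerf*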
    
    If required, you can move uerf and the three associated files to 
    another place on your file system and still have it work successfully, 
    because it uses the following search list to locate its supporting 
    files:
    
    1) The directory you ran uerf from.
    2) The /etc directory.
    3) The pathnames specified in your shell PATH variable - see setenv(8).
    
    Vanilla flavour uerf is invoked by typing: 
    
       /etc/uerf
    
    and then you get the brief format report output to your terminal. If 
    you are on a softcopy terminal then use the command:
    
       /etc/uerf | more
    
    to see the output a screen at a time. Often the default report 
    obtained using these commands will be sufficient to give an idea of 
    what has gone wrong. But what if your error log file is full of tape 
    retry logs from yesterday and you now want to see the details of an 
    uncorrectable memory error? This is where the numerous options 
    available with uerf are used.
    
       Figure three is a quick reference to the options and option 
    specifiers you can use with uerf. The following are examples of uerf 
    commands; refer to figure three for all the options.
    
    
    uerf command examples.
    
    /etc/uerf -T tk50 -t s:01-jan-1988,00:00 e:05-feb-1988,23:59
    	 	   # Display all tk50 errors logged between midnight 
    	 	   # 1st January 1988 and one minute before midnight
    	 	   # on 5th February 1988. (If either time specifier is 
    	 	   # omitted with the -t option, it defaults to today).
    
    /etc/uerf -D -t s: e:    
    	 	   # Display all disk errors logged today in default   
    	 	   # brief format. 
    
    /etc/uerf -M cpu 
    	 	   # Display all errors detected on the VAX Central  
    	 	   # Processing Unit.
    
    /etc/uerf -O cmp,seg -t s:01-aug-1988 -o full
    	 	   # Display full details of all operating system detected 
    	 	   # compatibility mode or segmentation errors which have 
    	 	   # been logged between the 1st August 1988 and today.
    
    /etc/uerf -s 233-299 -o terse
    	 	   # display terse details of errors whose sequence numbers 
    	 	   # are between 233 and 299	 
    
    
    The eli utility.
    
       The /etc/eli program provides a user interface to the error logger, 
    allowing the system manager to change the error logging parameters 
    during time sharing. Its primary use is to restart error logging after 
    changing the contents of the configuration file. To do this type:
    
    /etc/eli -a
    
    which stops elcsd and restarts it, thus forcing it to read the new 
    configuration parameters. The most useful of the other options are 
    shown below; use the -h option to obtain a full list.
    
    /etc/eli -d	   # Disables error logging by stopping elcsd
    /etc/eli -e	   # Enables error logging when the system is in multi user 
    	 	   # mode.
    /etc/eli -i	   # Flushes the kernel error log buffer
    /etc/eli -n	   # Disables error logging to disk.
    /etc/eli -s	   # Enables single user mode error logging
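
       For example, after changing the error logging configuration the
    whole sequence might look like this (a minimal sketch - use whatever
    editor you prefer):

    vi /etc/elcsd.conf           # adjust the error logging parameters
    /etc/eli -a                  # stop and restart elcsd with the new settings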
    
    
    

                    F I G U R E	       	         T H R E E 
    

Fig 3. uerf command options. 


   /etc/uerf       -x           # negate all other supplied options
                                # EXAMPLE:   /etc/uerf -D -x
                                # means display all errors except disks

                   -R           # Display newest errors first
                   -n           # Display errors as they occur
                   -h           # show available uerf options

                   -H hostname  # Display errors from node hostname
                                # EXAMPLE:    /etc/uerf -H PERCY
                                # Means display all errors which have been
                                # logged which originated from node PERCY.

                   -A           # Include hardware adapter errors
                      aie       # BVP controller (BI disk controller).
                      aio       # BVP controller (BI disk controller).
                      bla       # BI LESI adaptor
                      bua       # BI to UNIBUS adaptor
                      nmi       # Nautilus memory interconnect
                      uba       # VAX UNIBUS adaptor
                                # EXAMPLE:   /etc/uerf -A bua,bla
                                # means display all bua and bla errors.

                   -c           # Select specific classes of errors
                      err       # Selects error reports from h/w or s/w
                     oper       # Selects only operational events

                   -D           # Includes disk errors.
                      rd54      # Include disk errors on RD54 type drives
                      ra80      # Include disk errors on RA80 type drives
                      ra60      # Include disk errors on RA60 type drives
                      etc       # Any other DSA compatible drive type
                                # EXAMPLE:   /etc/uerf -D ra60
                                # means display all errors logged on ra60's.
                                # EXAMPLE:   /etc/uerf -D -o full
                                # means display all disk errors in full format

                   -f filename  # Allows you to display errors from any
                                # current or archived error log file.

                   -M           # requests that only errors to do with the
                      cpu       # central hardware elements of the machine
                      mem       # be displayed. The memory and the CPU..
                                # EXAMPLE:   /etc/uerf -M mem -o full
                                # means display all memory errors in full

                   -o           # Selects output format. brief is the
                      terse     # default. Beware non-selective use of
                      brief     # full, it can generate a long listing!
                      full      #

                   -O           # Operating system detected errors.
                      aef       # Arithmetic exceptions (can be h/w or s/w)
                      ast       # async trap exception faults (h/w or s/w)
                      bpt       # Break point instruction fault (often h/w)
                      cmp       # Compatibility mode faults (h/w possibly s/w)
                      pag       # Page fault exceptions (h/w)
                      pif       # Privileged instruction faults (h/w or s/w)
                      pro       # protection faults (h/w or s/w)
                      ptf       # Page table faults (h/w or corrupt s/w)
                      raf       # Reserved address Faults (h/w or corrupt s/w)
                      rof       # Reserved Operand Faults (corrupt s/w usually)
                      scf       # System Service Call Exception faults (either)
                      seg       # Segmentation Faults (h/w usually)
                      tra       # Trace exception faults (h/w or s/w)
                      xfc       # Xfc instruction failed (h/w)
                                # EXAMPLE:   /etc/uerf -O seg -o full
                                # Means display full details of all 
                                # segmentation faults.

                   -r n         # Selects errors by record type - see manual.

                   -s  nn-nn    # Each error is assigned a number when it
                                # is logged. This option selects a specific
                                # number or range of numbers to be reported

                   -T           # Includes tape errors in report
                      tk50      # Include errors which occurred on tk50
                      tu81      # Include errors which occurred on tu81
                       etc      # Any other Digital tape drive type                    

                   -t           # Selects by absolute or range of time.



    
45.3. "Which version of Ultrix am I running?" by COMICS::TREVENNOR (A child of init) Wed Jul 05 1989 13:44 (17 lines)
    
    Q: How do I find out what the version of my Ultrix system is?
    
    A: Log into the system and type the command:
    
    		strings /vmunix | grep Ultrix
    
                
        This says, produce a list of all the printable strings you can
    find in the currently running Ultrix Kernel, and show me any of
    them which contains the word 'Ultrix'. Strings is the command to
    scan a file for strings, and grep is the equivalent of the search
    command in VMS.
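
        If you are unsure of the exact capitalisation of the banner
    string held in the kernel, a case-insensitive variant of the same
    command may help (a minimal sketch):

    		strings /vmunix | grep -i ultrix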
    
    
    
    Alan T.
45.4. "Auto - booting Ultrix from other than DU0 on a uVAX." by COMICS::TREVENNOR (A child of init) Wed Jul 05 1989 13:46 (61 lines)
Problem:

	The customer has a dual-drive MicroVAX II system. He wishes
	the system contained in partition A of drive one to be
	automatically booted when the system restarts. Currently, 
	the system attempts to boot drive zero and fails because 
	there is not an Ultrix system on it.

BACKGROUND:

	The key to this problem is understanding how the code in the
	MicroVAX ROM boots the system.

	The Virtual Memory Boot program (VMB.EXE) is used to bootstrap
	the operating system from one of the disk or tape storage
	devices on the system. The version of VMB which resides in the
	MicroVAX bootstrap ROMs differs from the version which is loaded 
	into memory from a console storage device. ROM resident VMB
	contains extra routines which search for a bootable
	storage device and, when a device is found to be ready, the
	first block is loaded from it, and then usually executed.

	Although it is not a formal standard, certain conventions are
	followed in Digital bootblocks for VAX disks. This makes it
	easy for VMB to examine a loaded bootblock to see if the disk
	looks to be bootable. If VMB cannot see any indication of a 
	valid bootstrap in Logical Block Number (LBN) zero of a disk
	it will abort its attempt to boot that disk, and proceed to 
	the next disk. 

	
		NOTE: This behaviour of VMB is only valid for MicroVAX
		      machines. The version of VMB which is loaded from
		      console or system media (eg TU58, RX50 or RA
		      disks) requires you to specify a device to boot,
		      and halts the machine if that device is not
		      bootable.
 
 
SOLUTION:

	The following procedure is NOT supported by Digital. It is a
	suggestion as to how an individual customer could approach the 
	solution to a particular problem.

	If a disk does not contain a bootable Ultrix system the first 
	16 blocks are never used. On a bootable disk these will contain
	the bootblock in LBN 0, and the program vaxboot in LBN 1-15.
	The disk initialisation utility supplied with Ultrix always
	writes a valid bootblock - although the pointers within that
	bootblock are set to absolute default values.
 
	To make a disk 'look' unbootable to the 'sniffer' boot routines 
	of ROM-based VMB all that is required is to fill the bootblock
	with all zeroes, using some system calls in a privileged program
	to do the work.

	Extreme care is required when doing this, because if anything
	after LBN 15 is overwritten, the file systems on the disk will
	be invalidated and possibly lost. 	
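
	As an illustration only (this is NOT the privileged-program
	approach described above, it is NOT supported, and the device
	name below is purely hypothetical - substitute the correct raw
	device and check it twice): the same effect could in principle
	be had from the shell with dd(1), writing a single 512-byte
	block of zeroes over LBN 0 of the disk:

	dd if=/dev/zero of=/dev/rra1a bs=512 count=1

	Writing to the wrong device, or writing more than one block,
	will destroy the file systems on that disk.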
    
45.5. "The All New(?) DS5800." by KERNEL::EDMUNDS Tue Oct 10 1989 12:18 (9 lines)
    
    I recently attended an "ISIS" (DS5800) pilot course in Paris.
    This is the new RISC (Ultrix-only) system, and the instructor has
    forwarded a very useful lookup guide to me.
    I am including this in the next reply; if you have any questions
    I will be happy to answer them/play dead as appropriate.
    
    Cheers
          Steve Ed.
45.6. "DS5800/Ultrix info." by KERNEL::EDMUNDS Tue Oct 10 1989 12:20 (664 lines)
			Kernel debugging on ULTRIX/RISC



				      by



				  Al Delorey



			  ULTRIX(tm) Engineering Group



				  abyss::afd



			     Copyright (c) 1989 by

		  Digital Equipment Corporation, Maynard, MA

							    RISC debug - afd
				Address Space

    The system is always in virtual address mode (no physical address mode)

    Address spaces

	KSEG0	- not mapped, cached; for kernel text
		  virtual: 8000 0000 -> 9fff ffff (512 MB)

	KSEG1	- not mapped, not cached; for I/O space
		  virtual: a000 0000 -> bfff ffff (512 MB)

	KSEG2	- mapped, cached; for stacks, kernel mallocs
		  virtual: c000 0000 -> ffff ffff (1 GB)

	KUSEG	- mapped, cached; for user space
		  virtual: 0 -> 7fff ffff (2 GB)



				    Stacks
    
    No interrupt stack, only Kernel and User stacks [idle stack added in 4.0]
    
    Startup Stack 	- at 0x8001f7ff (0x8002ffff in 3.0/RISC) -
		          grows downward and is used during system startup,
		          until a kernel stack is available.

    Kernel Stack	- starts at 0xffffe000 (kseg2 space)

    User Struct		- starts at 0xffffc000 (kseg2 space)

    Per cpu data base	- starts at 0xffff8000 (kseg2 space) [as of 4.0 - smp]

    User Stack		- starts at 0x7ffff000 (kuseg space)
			  (one guard page 0x7ffff000 to 7fffffff)

							    RISC debug - afd

			    Using nm

    For a system crash that gives an Exception PC (EPC) on the console,
    you can use nm(1) to determine what routine was executing.

	    nm -n /vmunix

    This command will display the name list (symbol table), in numerical
    order, of the vmunix image.  Find the address that is closest to (but
    less than) the given EPC from the crash.  That address is the starting
    address of the routine that was executing.

    You can then subtract the start address of the routine from the
    faulting PC, to determine the offset from the beginning of the
    routine where the error occured.  Then using dbx the offending
    instruction can be found.

    Sample nm output
    ----------------
    First Kernel text address: 8003,0000 (192k bytes above 8000,0000)
	    80030000 T start
	    80030000 T eprol
	    800300ac T putstr
	    80030148 T lputc
	    8003018c T cn_reset

	    ...

    First Kernel data address: is approximately 8011,0000 
	    80112030 D Sysmap
	    8011c830 D Usrptmap
	    8011f920 D camap
	    8011f930 D kmempt
	    8011f930 D ecamap
	    80123930 D Forkmap
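
    As a worked example (the EPC value here is purely hypothetical; the
    symbols are those in the sample output above): if the console had
    reported an EPC of 0x800301a0, the closest text symbol at or below it
    is cn_reset at 0x8003018c, so the fault lies at cn_reset+0x14.

	    nm -n /vmunix | grep ' T ' > /tmp/vmunix.text   # text symbols in address order
	    # 0x800301a0 - 0x8003018c = 0x14, i.e. cn_reset+0x14
	    # in dbx, "cn_reset/9i" will then show the instructions around that offset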

							    RISC debug - afd
Overall system memory map (in sys area see mips/entrypt.h)

physical	kseg1			use
--------	-----			---
0x00030000	0xa0030000 upward	Ultrix kernel text, data, and bss

0x0002ffff	0xa002ffff 
to		 			additional Prom Space (64K)
0x00020000	0xa0020000

0x0001ffff	0xa001ffff 		
to					1K netblock (host/client net boot info)
0x0001fc00	0xa001fc00 	

0x0001fbff	0xa001fbff 		
to					1K Ultrix Save State area (NEW in 3.1)
0x0001f800	0xa001f800

0x0001f7ff	0xa001f7ff downward	1K Ultrix temporary startup stack
		|			(at 0x2ffff in 3.0; here in 3.1)
		v
0x0001f400	0xa001f400 

0x0001f3ff	0xa001f3ff downward	dbgmon stack (a few K less than 64K)
		|
		V
		^
		|
0x00010000	0xa0010000 upward	dbgmon text, data, and bss

0x0000ffff	0xa000ffff downward	prom monitor stack
		|
		V
		^
		|
0x00000500	0xa0000500 upward	prom monitor bss

0x000004ff	0xa00004ff
to					restart block
0x00000400	0xa0000400

0x000003ff	0xa00003ff
to					general exception code
0x00000080	0xa0000080		(note cpu addresses as 0x80000080)

0x0000007f	0xa000007f
to					utlbmiss exception code
0x00000000	0xa0000000		(note cpu addresses as 0x80000000)


							    RISC debug - afd
			Some Useful dbx Commands

Command (alias)
---------------
alias [name[(args)]cmd]	show all aliases or define an alias
assign (a) var=value	assign a value to a program variable
stop at (b)		set a breakpoint at a given line
cont (c)		continue after breakpoint
delete (d)		delete the given item from the status list
down			move down an activation level in the stack
dump			print variable info for current routine
dump .			print global variable info for all routines
file			what is the current file
file filename		set the current file to 'filename'
func (f)		set context to specified func name (selects the file)
history (h)		print history list
status (j)		show status list, shows breakpoints (journal)
list (l)		list the next 10 lines of source code
l line:range		list 'range' lines of code, starting at 'line'
next (n or S)		step specified # of lines (don't stop in calls)
nexti (ni)		step specified # of assembly lines (don't stop in calls)
print (p)		print the value of the specified expr or var
pd 			print the value of the specified expr or var in decimal
po 			print the value of the specified expr or var in octal
px 			print the value of the specified expr or var in hex
			note: Can't print register variables.
pr			print values of all registers
quit (q)		exit dbx
run [args]		run the program with specified cmd line args
rerun (r)		rerun the program with same arguments
return			finish executing the function and stop back in caller
set			show setting of dbx variables
set $var=value		set a value to a dbx var (can define a new variable)
step (s)		step specified # of lines (stopping in calls)
stepi (si)		step specified # of assembly lines (stopping in calls)
stop at (b)		set a breakpoint at a given line
stopi at <addr>		set a breakpoint at a given assembly instruction addr
u			list the previous 10 lines
unset $var		unset a dbx variable
up			move up an activation level in the stack
w			list 5 lines before and after current line
W			list 10 lines before and after current line
where (t)		where are we & how did we get here (stack trace)
			this can also be done when stopped at a breakpoint
			(this will show where/how a system crashed)
whatis <var>		show the variable/symbol definition
whereis <var>		show all versions of the specified variable
which <var>		print the default (current) version of the variable
/<regex>		search ahead in the source code for the regular expr
?<regex>		search back in the source code for the regular expr
!<history-item>		specify a cmd from the history list (by number or str)
line edit		there is an emacs like line edit capability.  To enable
			it you must set the shell environment variable LINEEDIT
			(setenv LINEEDIT)
    ^A				Move cursor to start of line
    ^B				Move cursor back one char
    ^D				Delete char at the cursor
    ^E				Move cursor to end of line
    ^F				Move cursor ahead one char
    ^H,delete			Delete char preceding cursor
    ^N				Move ahead one cmd line in hist list
    ^P				Move back one cmd line in hist list

							    RISC debug - afd

signals
-------
catch			list signals that dbx will catch
catch SIGNAL		add a signal to the catch list
ignore			list signals that dbx will ignore (not catch)
ignore SIGNAL		add a signal to the ignore list

History
-------
Ultrix/RISC dbx is (based on) MIPS dbx.
    Both Ultrix/VAX dbx and MIPS dbx are based on BSD dbx.
    Mips did a lot of work on dbx: enhanced it to work on the kernel.

There is no adb for Ultrix/RISC

							    RISC debug - afd
			Using dbx to debug the kernel

dbx -k vmunix.n vmcore.n

t			get a back trace (to show where/how the system crashed)

routine-name/ni		dump out n instructions from given routine

&<symbol>/<fmt>		print address and contents of a symbol

<address>/<cnt><mode>	print the contents of image at the given address
			valid modes are:
			  d,D	short, long decimal
			  o,O	short, long octal
			  x,X	short, long hex
			  c	a byte as a char
			  s	a null-terminated string
			  f	single precision real
			  g	double precision real
			  i	machine instructions
  example:
    If the system crashes and reports an EPC of 0x8000dead, then dbx can be
    used to determine where in the kernel that PC is located.

    0x8000dead/9i	decode 9 instructions (& show line #s) @ 0x8000dead
			Beware: code that's ifdef'ed out will not count
				in dbx's line numbering

p gnode[n]              print the gnode struct n in the gnode table

p text[n]               print the text struct n in the text table

set $pid=n		set process context to given pid
			can then do trace, p *up, p *up.u_procp, etc. on proc

p *up			print the u_area (of current proc)

p *up.u_procp		print the proc struct of the current pid ("$pid")


Using dbx on running vmunix
---------------------------
dbx -k /vmunix

&<symbol>/<fmt>		print address and contents of symbol

a <symbol>=<value>	to change the value of a symbol (must be root)

							    RISC debug - afd
			Examining the Exception Frame

All error traps & interrupts (except cache parity errors) generate
an "exception condition".

Exception conditions trap to VECTOR(exception) in locore.s.
Exception routine saves state in the exception frame (on stack).

For interrupts, VECTOR(VEC_int) is called, which saves additional
state on the exception frame, & calls intr() (in trap.c).
intr() calls the specific interrupt handler thru "c0vec_tbl".

For traps, the individual trap routine is called thru the "causevec",
these routines (VEC_addrerr, VEC_ibe, VEC_dbe) in
turn call VECTOR(VEC_trap), which saves additional state on the
exception frame, and calls trap() (in trap.c).

A pointer to the exception frame (ep) is passed as an argument
to the following routines:

	trap, intr, tlbmod, tlbmiss, syscall

Thus by using dbx to get a trace, you can find the address of the
exception frame (the ep argument).  You can then dump out the exception
frame with a dbx command like:

	dbx> 0xffffnnnn/41X

						(cont on next page)

							    RISC debug - afd
		    Examining the Exception Frame (cont)

The offsets within the exception frame are defined as follows (see mips/reg.h):

#define	EF_ARGSAVE0	0		/* arg save for c calling seq */
#define	EF_ARGSAVE1	1		/* arg save for c calling seq */
#define	EF_ARGSAVE2	2		/* arg save for c calling seq */
#define	EF_ARGSAVE3	3		/* arg save for c calling seq */
#define	EF_AT		4		/* r1:  assembler temporary */
#define	EF_V0		5		/* r2:  return value 0 */
#define	EF_V1		6		/* r3:  return value 1 */
#define	EF_A0		7		/* r4:  argument 0 */
#define	EF_A1		8		/* r5:  argument 1 */
#define	EF_A2		9		/* r6:  argument 2 */
#define	EF_A3		10		/* r7:  argument 3 */
#define	EF_T0		11		/* r8:  caller saved 0 */
#define	EF_T1		12		/* r9:  caller saved 1 */
#define	EF_T2		13		/* r10: caller saved 2 */
#define	EF_T3		14		/* r11: caller saved 3 */
#define	EF_T4		15		/* r12: caller saved 4 */
#define	EF_T5		16		/* r13: caller saved 5 */
#define	EF_T6		17		/* r14: caller saved 6 */
#define	EF_T7		18		/* r15: caller saved 7 */
#define	EF_S0		19		/* r16: callee saved 0 */
#define	EF_S1		20		/* r17: callee saved 1 */
#define	EF_S2		21		/* r18: callee saved 2 */
#define	EF_S3		22		/* r19: callee saved 3 */
#define	EF_S4		23		/* r20: callee saved 4 */
#define	EF_S5		24		/* r21: callee saved 5 */
#define	EF_S6		25		/* r22: callee saved 6 */
#define	EF_S7		26		/* r23: callee saved 7 */
#define	EF_T8		27		/* r24: code generator 0 */
#define	EF_T9		28		/* r25: code generator 1 */
#define	EF_K0		29		/* r26: kernel temporary 0 */
#define	EF_K1		30		/* r27: kernel temporary 1 */
#define	EF_GP		31		/* r28: global pointer */
#define	EF_SP		32		/* r29: stack pointer */
#define	EF_S8		33		/* r30: callee saved 8 */
#define	EF_RA		34		/* r31: return address */
#define	EF_SR		35		/* status register */
#define	EF_MDLO		36		/* low mult result */
#define	EF_MDHI		37		/* high mult result */
#define	EF_BADVADDR	38		/* bad virtual address */
#define	EF_CAUSE	39		/* cause register */
#define	EF_EPC		40		/* program counter */

							    RISC debug - afd
		    Examining any Process in the System

    ps -axlk vmunix.n vmcore.n	- Flags (see ps(1))
				  -a All processes (not just your own)
				  -x Even processes w/ no tty
				  -l Long format (more info given)
				  -k Kernel files given
				- get <pid> of process that you want to look at

    Back in dbx...
    set $pid=n		set process context to given pid (in dbx)

    Can then do t (trace), p *up, p *up.u_procp, etc. on the process

    The process' stored registers in the u_area are in "exception frame format"
    and can be obtained as follows:

    	px up.u_ar0[n]

							    RISC debug - afd
			Forcing a Panic (not hung)

    dbx -k /vmunix /dev/mem	- run as root to write

    a ln_softc=0
    				- will panic on next network interrupt
				  (even works in single user mode)
				  (Don't do this if system is diskless or
				   it won't dump)

    a gnodeops=0
				- will also panic the system

    Note: don't bash the proc struct, or then dbx can't work on the image
    Note: don't bash the console structs or you won't see the panic messages

							    RISC debug - afd
	    Forcing a Memory Dump on DS3100/2100 when hung

    If you set the bootmode to 'r' (restart), then when the restart
    button (on a DS3100) is pressed, the system will do a memory dump,
    and then a reboot, as opposed to halting and clearing memory.

    Note that the dump may be silent, so be patient.

    To set the bootmode to restart use the console command:
    
	>>> setenv bootmode r



    If the system "hangs" or drops into console mode without doing
    a memory dump, the memory dump routine can be started manually.

    If a DS3100 "hangs", you can press the reset button to enter console
    mode.  The default action on the DS3100 is for the reset to re-initialize
    memory.  To prevent this (preserve memory), set the bootmode to debug
    by typing the following command in console mode (prior to debugging a
    "hang" situation):
    
	>>> setenv bootmode d

    With bootmode set this way, if the system is "hung" you can press the
    reset button to enter console mode (with memory contents preserved).
    The crash dump code can then be run by typing the "go" command with
    a special address (the kernel start address + 8) that will call the
    memory dump routine.  In Ultrix V3.0/3.1 the kernel start address is
    0x80030000, so the dump routine is started by:
    
	>>> go 0x80030008

    If the system was in multi-user mode when the reset button was
    pressed, then the dump will occur silently, no messages will be
    printed.  The memory dump will take several minutes, then the
    console prompt will re-appear.  After the dump is completed, you
    can re-initialize the system and reboot as follows:

	>>> init
	>>> auto
    
    Note: When bootmode is set to 'd' it is important to type "init" before
	  "boot" or "auto" when the system has been shutdown to console mode,
	  or reset to console mode.  Failure to use the init command may
	  cause the system boot to fail.

							    RISC debug - afd
			Forcing a Memory Dump on DS5400

    Set the break enable switch up (the dot in the circle).

    Press the break key to get the console prompt.
    
    The crash dump code can then be run by typing the "go" command with
    a special address that will call the memory dump routine:
    
	>>> go 0x80030008

							    RISC debug - afd
			Forcing a Memory Dump on DS5800

    Set the break enable switch up (the dot in the circle).

    Press the break key to get the console prompt.
    
    The crash dump code can then be run by typing the "go" command with
    a special address that will call the memory dump routine:
    
	>>> go 0x80030008

							    RISC debug - afd
		      Debugging "hung" systems

		    (Finding the real kernel stack)

When you force a dump from a "hung" system, the standard back trace done
by dbx will not be useful for the currently active process.
Dbx will get the process context out of the u_area, which is old.
That is, the u_area will have the process context for the last time
that the process was context switched out.

The kernel stack for each process in the system is located at virtual
address 0xffff,e000 in KSEG2 space.  The system has an array of NPROC
u_areas that are 8k bytes each.  Even though each user process has its
u_area and kernel stack at the same virtual address in KSEG2 address
space, each uarea/kstack maps to a unique physical address.

On context switches the first 2 entries in the TLB ("safe entries")
are set up to map the u_area and kernel stack for that user process.


Kernel Stack: 0xffff,e000	+-------+ higher addresses
				|   .	|
				|   .	|
				|   .	|	8 K bytes for kernel stack
				|   v	|	    and u area
				|	|	In Kseg2 space (see param.h)
				|_______|
				|   ^	|
				|   |	|
				|   |	|
U area: 0xffff,c000		+-------+ lower addresses

Within dbx, you can dump out the kernel stack with a command such as:

	0xffffd000/1028X

This will dump the kernel stack from low to high memory (most recent events
to oldest events).

							    RISC debug - afd
			Examining stack frames with dbx

odump(1)
    The utility odump(1) can be used to get a symbol table dump of vmunix.n

	odump -P vmunix.n > vmunix.syms
    
    See /usr/include/sym.h (struct runtime_pdr) for the format of the
    runtime procedure descriptor created by the loader.

    The "fpoff" field as shown by odump is the frame size for the particular
    procedure entry.

The general format of the stack (stack frames) is:

	high memory	+---------------+
			|    arg n	|	Space for all args, even though
			|      .	|	first 4 args passed in regs
			|      .	|
			|      .	|
	virtual frame	|    arg 1	|
	ptr ->	       /|---------------|\
		frame <	|  local vars	| \
		offset \|---------------|  \
			|saved R31 (ret)|   \
			|...............|    \
			|more saved regs|     > framesize
			|   16-23, 30	|    /
			|---------------|   /
			|  arg passing	|  /
			|     area 	| /
	stack ptr ->	|---------------|/
	(framereg)	|	.	|
			|	.	|
			|	.	|
			|	 	|
	low memory	+---------------+

Using this information, you should be able to work your way back up
the call history on the stack.

Examples of usage are in libexc: unwind.c, exception.c, exception.h

							    RISC debug - afd
		    More on Examining stack frames with dbx

You may find it equally productive to start at the top (high memory) end
of the kernel stack and look for the return address of VEC_syscall on the
stack.  This is where VEC_syscall calls "syscall" and the stack frame for
entry into "syscall" has the return address of VEC_syscall saved on the
stack. The following dbx command shows the instructions in
VEC_syscall, and in particular where "syscall" is called, so you can
see the return address that will be on the stack.

    (dbx) VEC_syscall/30i
      [VEC_syscall, 0x800c3868]     	    ori     r5,r16,0x1
      [VEC_syscall:590, 0x800c386c]         mtc0    r5,sr
      [VEC_syscall:591, 0x800c3870]         sw      r2,20(sp)
      [VEC_syscall:592, 0x800c3874]         sw      r3,24(sp)
      [VEC_syscall:593, 0x800c3878]         move    r5,r2
      [VEC_syscall:594, 0x800c387c]         move    r6,r16
      [VEC_syscall:595, 0x800c3880]         jal     syscall
      [VEC_syscall:595, 0x800c3884]         nop
**>   [VEC_syscall:596, 0x800c3888]         bne     r2,r0,0x800c3810
      [VEC_syscall:596, 0x800c388c]         nop

The return address will be: 0x800c3888

Using dbx in this way and the dump of the kernel stack, you can pick and
guess your way down the stack to find where the system went.

							    RISC debug - afd
			Disassemble Utility

    dis -p routine image-file	- will disassemble a routine in the image file
				  see dis(1)

							    RISC debug - afd
			Some Useful Console Commands
    
    Dump
	dump -w -x ADDR#CNT		- dump the contents of memory, starting
					  at given ADDR & dumping CNT locations
					  (long words in hex format)

	dump -w -x ADDR:ADDR		- dump the contents of memory, starting
					  and ending at given ADDRs
					  (long words in hex format)

	dump -w -x 0x8001f400:0x8001f800	- dump the startup stack

    Examine
	e [-(b|h|w)] ADDR		- examine byte, halfword, word;
					  ADDR is a virt addr; to examine
					  physical loc 0 use 0x80000000
	
    Go
	go [pc]				- transfer control to given entry point
    
    Help
	help [cmd]
	? [cmd]				- if no cmd given, display cmd menu

    Printenv
        printenv [evar]			- display current value of specified
					  environment variable
    Setenv
	setenv EVAR STRING		- set the specified environment
					  variable to the given string
    Unsetenv
	unsetenv EVAR 			- remove the environment variable
					  from the environment variable table
    Test
	t a				- test all components and subsystems
    
    Booting
	auto				- use environment variable "bootpath"
					  to boot to multiuser: DS3100/2100 only

	boot				- use environment variable "bootpath"
					  boots to singleuser on DS3100/2100
					  boots to multiuser on other systems

	boot -s				- boot to single user (this cmd option
					  not on DS3100/2100)

	boot -f rz(CTRL,UNIT,PART)vmunix- boot the specified image to singleuser
	boot -f mop()			- boot from the network to singleuser

	boot ... memlimit=<#bytes of mem> - to artificially reduce memory size


							    RISC debug - afd
		    References For Further Info

    Header files:
	/sys/h/
	    proc.h
	    user.h

	/sys/machine/mips
	    entrypt.h
	    frame.h
	    pcb.h
	    pte.h
	    reg.h

    Crash(1M)
	Sys V "crash" program that is only partially converted to
    	understand the ULTRIX kernel data structures.
    
    
45.7. "More info on ULTRIX online." by COMICS::TREVENNOR (A child of init) Wed Feb 07 1990 11:10 (8 lines)
    
    In:
    
    COMICS::SYS$PUBLIC:ULTRIX*.* 
    
     are a number of ULTRIX info files which may be of interest/use.
    
    Alan T.
45.8. "ULTRIX_CSC notes file." by COMICS::TREVENNOR (A child of init) Wed Feb 07 1990 11:16 (34 lines)
    
    If anyone in CSG is supporting ULTRIX and would like to become a member
    of the ULTRIX_CSC notes file (charter attached) please let me know.
    
    
    
 You asked for it, and here it is. The ULTRIX_CSC 
conference is intended as a members-only forum for use by 
two groups of people within Digital:

1) People who work with ULTRIX in a technical problem solving 
   environment - usually in a Customer Support Centre (CSC).
2) People who work in engineering functions associated with 
   ULTRIX, or those who provide high-level support. 

 The primary reason for having the conference members-only is 
to keep the volume of notes to a usable number, but also to have 
a forum where CSC and engineering staff can pool their expertise. 
The spin-off benefits are that unannounced information can be 
freely discussed, and that you can make contact with people doing
a similar job in different geographies.

CONFERENCE RULES?

1) Be relevant and accurate (If you are guessing, say so).
2) Be as brief as you can.
3) Use KEYWORDS, and make note titles descriptive of the contents
   of your note.
4) Do not ask for membership for people with a job description which
   does not fit the above profile.
5) If you discover an answer to a problem please post it here.
6) NEVER directly quote or send copies of notes in this conference to
   a customer
7) It's okay to say the U*** word 8-)
45.9. by KERNEL::MOUNTFORD Wed Feb 07 1990 14:55 (153 lines)
    Moved by mod with permission.
    
              <<< KERNEL::DISK$APD1:[NOTES$LIBRARY]CSGUK_SYSTEMS.NOTE;1 >>>
                               -< CSGUK_SYSTEMS >-
================================================================================
Note 93.0                          ultrix news                           1 reply
KERNEL::WIBREW                                      144 lines   6-FEB-1990 09:48
--------------------------------------------------------------------------------

From:	20050::POEGEL "31-Jan-1990 1425" 31-JAN-1990 21:27:27.26
To:	@UONE
CC:	
Subj:	ULTRIX NEWS SUMMARY






    






		          U L T R I X   U P D A T E

		              January 26, 1990

		   System Software Marketing, Field Programs


    Editor:  Lynne Poegel	XIRTLU::POEGEL	      ZKO3-3/Y25



		          *** INTRODUCTION ***

	It is our goal to send out the ULTRIX mailing biweekly.  The 
	frequency may change depending on the flow of information.  If 
	you would like to be added to the distribution list, please 
	send an electronic mail message with your name and node to 
	XIRTLU::POEGEL.  Comments and suggestions for topics are 
	always welcome.


+-----------+
| Contents: |
+-----------+


	PRODUCT INFORMATION

		+Phase 0 Opened for DECnet/SNA ULTRIX Programming Toolkit
		+Update of Support for TSV05 and the DECsystems 5400
		

	MARKETING INFORMATION
	
		+New RISC/ULTRIX DECwindows Demos Now Available

	CUSTOMER EVENTS
		
		+ List of Upcoming UNIX Tradeshows
	




		Digital Company Confidential, For Internal Use Only

UNIX is a registered trademark of American Telephone & Telegraph Company



PRODUCT INFORMATION:
--------------------

	PHASE 0 OPENED FOR DECNET/SNA ULTRIX PROGRAMMING TOOLKIT:

	Phase 0 has opened for the DECnet/SNA ULTRIX Programming Toolkit.
	The purpose of this release is to provide the ULTRIX user with a 3270
	Data Stream Programming Interface.  Future releases could include an
	LU6.2 programming interface.

	If you wish to provide input into the product requirements, please log into
	SSGBPM Infobase through the ACCESS Sales Communication Vehicle or
	through the VTX library and look under SSGBPM ULTRIX Liaison Program 
	for a copy of the Product Requirements Input Form.

	---

	UPDATE OF SUPPORT FOR TSV05 AND THE DECSYSTEMS 5400:

	"Support for the TSV05 on the DECsystem 5400 appears with
	V3.1C of ULTRIX. The qualification was not complete in time
	for the TSV05 to be incorporated into the Q3 Systems and
	Options Catalog and SPD 3.1C, but it will be a supported option." 



MARKETING INFORMATION:
----------------------

	NEW RISC/ULTRIX DECWINDOWS DEMOS NOW AVAILABLE:

	LDP/Science has developed and documented 10 new demonstrations.  
	Use these demos to showcase the performance of Digital's 
	RISC/ULTRIX platforms and the DECwindows user interface to 
	your customers.

	These demos illustrate several Cooperative Marketing Program 
	(CMP) partner applications as well as an integrated DECwrite 
	solution.

	For a listing of all the available demos, enter the SSGBPM 
	Infobase through the ACCESS Sales Communication Vehicle or enter 
	the VTX library and look under SSGBPM ULTRIX Liaison Program.

	---

	
 		
	


CUSTOMER EVENTS:           
----------------                  

	EVENT		DATE		LOCATION	CONTACT
	-----		----		--------	-------

	UNIXEXPO       May 19-11      Los Angeles,CA	Judy Izzi
	/Spring
	
	DECUS/Spring   May 7-11	      New Orleans, LA	Judy Izzi

	Xhibition      May 20-25      San Jose, CA	Judy Izzi

	Usenix	       June 11-15     Anaheim, CA	Judy Izzi

 
******************************************************************************
*  If you know individuals who should receive this newsletter or be deleted  *
*  from the distribution list, please send mail to XIRTLU::POEGEL with your  *
*  name, location, and node.                                                 *
******************************************************************************

    
    
    
    
45.10. by KERNEL::MOUNTFORD Wed Feb 07 1990 14:55 (74 lines)
          <<< KERNEL::DISK$APD1:[NOTES$LIBRARY]CSGUK_SYSTEMS.NOTE;1 >>>
                               -< CSGUK_SYSTEMS >-
================================================================================
Note 93.1                          ultrix news                            1 of 1
COMICS::TREVENNOR "A child of init"                  66 lines   7-FEB-1990 11:06
                      -< Don Zereski (VP CS) on ULTRIX. >-
--------------------------------------------------------------------------------

From:	NAME: DON ZERESKI                   
	FUNC: CUSTOMER SERVICES               
	TEL: 276-9625                         <ZERESKI.DON AT A1 at CSSE at OGO>
To:	See Below


         
         I recently attended the UNIFORUM Show in Washington, D.C. with 
         a few other members of CSMC.  We then had a long session with
         some of our UNIX support people and salespeople.  It was truly 
         an eye opener.  We're doing a very poor job convincing our
         customers that we are in fact serious about our UNIX strategy.
         There is a whole host of items which will soon come out in 
         the form of minutes; we'll use them as a set of individual 
         items to help guide us during the creation of a UNIX support 
         strategy.  
         
         We will have a single topic meeting to focus on our UNIX 
         support strategy.  Tom Karpowski, along with his ASDS group, 
         will host the meeting and help establish the process.  At the 
         very minimum we will outline roles and responsibilities and the 
         action items necessary to solve the existing list of problems, 
         but more importantly we've got to outline a complete plan for 
         the long term support of UNIX.
         
         Currently, we have two cultures within Customer Services:  one 
         that's had the last 10-12 years to grow, namely the VAX/VMS 
         culture, and a new culture which is the RISC UNIX culture.  
         Given the size of the organization and the time it takes to 
         embed a culture, the VAX/VMS culture is well entrenched 
         throughout the entire organization.  
         
         We treat customers who have made a choice to move to UNIX and 
         RISC as if they were foreigners.  We continually try to show 
         them that they made a very bad decision by forcing them to use 
         all of our VMS front end tools to access the problem solution 
         data bases and our CSC's for RISC and UNIX.  We continuously 
         remind them that our choice for support of UNIX will be much 
         less adequate than our tools and position for VMS.  We won't 
         allow them to submit SPR's via Internet but instead require 
         paper inputs.  Our tools for supporting UNIX and diagnosing 
         UNIX systems are well behind the times.  We support the UNIX 
         kernel in Atlanta, but if they have a UNIX network problem 
         with TCP/IP we force them to go to Colorado.  Why am I 
         mentioning all of the above?  Clearly these are symptoms of 
         one very large culture suppressing a new and novel one that 
         more and more customers are choosing every day, a culture 
         which must be nurtured and grown until it has the same level 
         of efficiency and competence as we display with VMS.  Clearly, 
         if we are to be the service vendor of choice, our tools and 
         our performance for UNIX must be equal to those for VMS.
         
         I believe we have to approach this whole topic as a major 
         cultural change, and perhaps even an organizational change, 
         within Customer Services.  We've got to make the necessary 
         investments and ensure that once a customer has made a 
         choice, we provide equal and adequate support.
         
         We will make our one day single topic meeting meet the vision 
         that we've laid out for the 90's in terms of our heterogeneous 
         distributed support.
         
         
         Regards,
         Don
    
45.11KERNEL::MOUNTFORDTue Feb 27 1990 15:17136
From:	COMICS::TREVENNOR "Ultrix Interest Mailing list:  27-Feb-1990 1435" 27-FEB-1990 14:37:13.18
To:	@DIST:ULTRIXERS
CC:	TREVENNOR
Subj:	


UK Ultrix Support Group: 
Mailshot #19

2 Items in this mailshot.


+---+          +-------------------------------------------------------+
| 1 |          | UK ULTRIX Support Strategy - latest version available |
+---+          +-------------------------------------------------------+


The latest version of the UK ULTRIX Support strategy is 
available on line on COMICS::SYS$PUBLIC:ULTRIX_STRAT.PS

Please pull it over and read through it, and try to have your 
manager read it too. 
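
If you are reading this from an ULTRIX system rather than from VMS, here
is a minimal sketch of one way to fetch and print the file.  This assumes
DECnet-ULTRIX is installed, and "lps40" below is only a stand-in for
whatever your local PostScript print queue is called:

   dcp 'comics::sys$public:ultrix_strat.ps' ultrix_strat.ps  # copy over DECnet
   lpr -Plps40 ultrix_strat.ps           # queue to a PostScript printer

From a VMS node a plain COPY COMICS::SYS$PUBLIC:ULTRIX_STRAT.PS []*.*
does the same job.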


+---+             +--------------------------------+
| 2 |             | Project Athena. One man's view |
+---+             +--------------------------------+


[Forwardings deleted....]

From Tim Hicks of the North East Sales group.
---------------------------------------------

Fellow DWT-ees:

For those of you who weren't at the Big East Workstation Training
Event in Waterville Valley, NH, I'd like to share with you something
that may make you angry, or depressed, or shocked, depending on how
much you care about workstation sales for Digital.

Suppose I told you that there exists today a software technology
for workstations that creates, for workstation users, almost
undreamed-of levels of network transparency and ease-of-use of 
distributed computing resources? This software:

	- eliminates the need to know physical locations for 
	anything (ie. no more system:/foo/bar/this.that or 
	VAX::DUA1:[FOO]BAR.TXT)

	- user's physical files are stored anywhere on the net,
	but are available transparently to the user REGARDLESS OF 
	WHAT WORKSTATION S/HE LOGS-IN TO.

	- mail services route mail to the user REGARDLESS OF WHAT
	WORKSTATION S/HE LOGS-IN TO (no need to know the users 
	location or home system a la: SYSTEM::USER)

	- Notification of actions and events are routed to the user
	REGARDLESS OF WHAT WORKSTATION S/HE LOGS-IN TO.

	- print services for any user are always located on nearby
	convenient printers, regardless of the user's ID and 
	REGARDLESS OF WHAT WORKSTATION S/HE LOGS-IN TO.

	- all of the above carried out in a COMPLETELY SECURE 
	ENVIRONMENT which utilizes a state-of-the-art user 
	identification/authentication technology.

	- implements a degree of client-server computing which
	goes significantly beyond current UNIX-type systems.

Do you know any MIS managers that care about user services, 
client-server computing or network security?  Does anyone ever ask
you about managing distributed workstation environments, maintaining
control and reducing costs?  The technology mentioned above PROVIDES
SOLID ANSWERS AND POSITIVE BENEFITS that would make any MIS person's
heart jump for joy.

Now you're probably thinking, "yeah, sounds great, but it probably
belongs to some third party and isn't ported to DECstations."
If so, you're wrong!  THIS SOFTWARE RUNS ON DECSTATIONS TODAY (with 
OSF/Motif thrown in to boot)!!!!  It's fully documented and has been 
in production for a year, in a network of 1200+ workstations 
(growing to 10,000)!!!

	*** NOW COMES THE SAD PART, FOLKS ***

The software described above was built by none other than Digital
Equipment Corporation in conjunction with the Massachusetts Institute of
Technology.  It's called Project Athena, and it's REAL LIVE SOFTWARE
THAT IS FULLY IMPLEMENTED ON RISC/ULTRIX.  Maybe you've heard of 
Project Athena before, and thought that it was academic stuff with 
no practical purpose.  Nothing could be further from the truth!

Several of your fiercest workstation competitors are working feverishly 
to get this product ported to their platforms.  But the way things look
right now, IBM, Sun and HP will be selling this software on their 
platforms while you're just finding out about it!!!!!

THAT'S RIGHT, FOLKS!!!  IBM, Sun and HP will have the competitive 
advantage that DEC has spent $50MM building!!!  

Oh, someone in-the-know will tell you that we might be putting PARTS 
of this technology into ULTRIX in some dim-and-distant future release, 
in  the form of some "toolkits" that the customer can use to build 
their own applications (does this sound familiar?)  In other words, 
the ingenious customer could roll their own with some pieces from DEC,
or get it out-of-the-box from our competition!!!!!!

The fact is, in the ultimate expression of Not-Invented-Here syndrome,
some of our software architects pooh-pooh this technology because,
although it was built by DEC, IT WASN'T BUILT IN THE RIGHT PART OF 
DEC!!!  (Can you say "Spit Brook Road, the fount of all software 
goodness"?)

I don't know about you, but, like most of us in the DWTs, I'm breaking 
my buns trying to survive in the workstation wars, and I don't know 
what makes me more angry and depressed:

	- knowing that Athena software is built and runs on 
	DECstations but Digital will neither market, sell nor 
	support it.

	- knowing that our competition will have it.

	- knowing that someone in charge somewhere in DEC made this
	decision.

Well, I've done it now, and I'll get arrogantly-toned hate mail
from people and places I've never heard of; however, if this keeps going
on its current course, I won't want to live with myself knowing that 
I could have said something at a time when it might have made a 
difference.  (Pardon the run-on sentence).

<<< Tim Hicks >>>
45.12KERNEL::MOUNTFORDThu Apr 19 1990 16:191386
    More from Alan....
    
    
    From:	COMICS::TREVENNOR "Ultrix Interest Mailing list:  18-Apr-1990 0915" 18-APR-1990 09:22:54.70
To:	@DIST:ULTRIXERS
CC:	TREVENNOR
Subj:	


UK Ultrix Support Group: 
Mailshot #21

4 Items in this mailshot.


1 - DECstation 5000 maintenance manuals available.
2 - What IS a DECstation 5000?
3 - TURBOCHANNEL - introductory details.
4 - ULTRIX is Digital's fastest growing software product - official.

+---+      +---------------------------------------------------------+
| 1 |      | DECstation 5000 model 200 maintenance manuals available |
+---+      +---------------------------------------------------------+


In this mailshot we cover a lot about the DECstation 5000.

For all you micros guys and gals who'd like to get ahead of the game a little
bit we have a LIMITED number of DS5000 maintenance manuals available (10
copies). These can be had by calling the Product & Technology group secretary
Tina Tillbrook on 7833 3919. First come first served - and there are no more
copies available when these are gone!



+---+      +-------------------------------+
| 2 |      | So what IS a DECstation 5000? |
+---+      +-------------------------------+

ANNOUNCING THE DECstation 5000 MODEL 200 WORKSTATION AND
THE DECsystem 5000 MODEL 200 SERVER
								Mike Savello
								Product Manager
								RANCHO::SAVELLO
								DTN: 543-6588


DIGITAL OFFERS HIGHEST PERFORMANCE RISC/ULTRIX WORKSTATION
  ============================================================================
  |                                                                          |
  |   o 60% higher performance (18.5 SPECmarks) than the DECstation 3100     |
  |                                                                          |
  |   o ECC memory capacity from 8 MB to 120 MB in a slim, desktop package   |
  |                                                                          |
  |   o Range of competitive graphics offerings including color frame buffer,|
  |     2D accelerator and high-performance 3D options                       |
  |                                                                          |
  |   o Open, high-performance TURBOchannel I/O interconnect supporting a    |
  |     range of Digital-offered options and available via no-charge         |
  |     licensing to option vendors as well as system vendors                |
  |                                                                          |
  |   o Future industry standard VME-bus expansion                           |
  |                                                                          |
  |   o Future FDDI interface with industry leadership performance           |
  |                                                                          |
  |   o New 1.0 GByte (1000 MByte) 5.25" external SCSI disk drive available  |
  |                                                                          |
  |   o New 1.2 GByte 4mm Digital Audio Tape (DAT) available for high        |
  |     capacity backup                                                      |
  |                                                                          |
  |   o Entry level configurations start at just $14,995!                    |
  |                                                                          |
  |   o Available for immediate delivery                                     |
  |                                                                          |
  |   o Full binary compatibility ensures immediate availability of today's  |
  |     existing portfolio of RISC/Ultrix applications (500+)                | 
  |                                                                          |
  ============================================================================


The new DECstation 5000 Model 200 and DECsystem 5000 Model 200 utilize the 
MIPS R3000 chip set running at 25 MHz and are the fastest Digital workstations
available as well as the most expandable desktop systems in the industry.


PRODUCT DESCRIPTION

The DECstation/DECsystem 5000 Model 200 is actually a family of systems
comprised of the following:

   o  DECstation 5000 Model 200CX        - 8 plane frame buffer offers
					    grayscale or color capability
   o  DECstation 5000 Model 200PX        - 8 plane 2D accelerator
   o  DECstation 5000 Model 200PXG       - 8 or 24 plane 3D accelerator
   o  DECstation 5000 Model 200PXG Turbo - 24 plane high performance 3D
   o  DECsystem 5000 Model 200           - Server system

Each system includes the following:

	o  MIPS R3000/R3010 CPU/FPU @ 25 MHz
	o  128 KB Cache - 64 KB Instruction/64 KB Data
	o  8 MB memory expandable in 8 MB increments to 120 MB
	o  Integral SCSI - supports up to 7 external devices
	o  Integral ThinWire Ethernet controller
	o  Two 25-pin RS-232 asynchronous ports - each with full modem control

DECstation 5000 Model 200CX
- ---------------------------
The DECstation 5000 Model 200CX utilizes an 8 plane color frame buffer which is
comparable to the color versions of the DECstation 3100.  This means that there
is no special hardware to accelerate graphics performance and that all
operations are performed by the CPU.  However, because the CPU is rated at
24+ MIPS and the color frame buffer interfaces to the very high performance
TURBOchannel I/O interconnect, the achieved graphics performance of the CX
option is very good.  It will often exceed not only the frame buffer
performance of competitive systems but their accelerated performance as
well.

	CX Option Specifics:
		Module size		Requires one TURBOchannel slot
		Planes			8
		Resolution		1024 x 864
		Refresh			60 Hz
		Supported Monitors	19" Mono (VR262), 16" Color (VR297),
					19" Color (VR299)
		When used		Cost sensitive applications, no need
					for 1280 x 1024
		Applications		2D CAD, CASE, Tech Pubs, Financial

DECstation 5000 Model 200PX
- ---------------------------
The DECstation 5000 Model 200PX utilizes a custom graphics co-processor called
PixelStamp which accelerates 2D graphics operations such as vectors and
polygons by 200-300% compared to the CX option.

	PX Option Specifics:
		Module size		Requires one TURBOchannel slot
		Planes			8 with 8 plane double buffer
		Resolution		1280 x 1024
		Refresh			66 Hz
		Supported Monitors	19" Color (VRT19)
		When used		Customers who need high performance 2D
					or low cost 1280 x 1024
		Applications		ECAD, MCAD, Tech Pubs, Earth Sciences

DECstation 5000 Model 200PXG
- ----------------------------
The DECstation 5000 Model 200PXG utilizes the same PixelStamp graphics engine
as the PX, plus an Intel i860 microprocessor which operates at 33 MHz.  The
i860 acts as a geometry accelerator and performs all of the 3D transformations.
The PXG is available as either an 8 plane configuration with an 8 plane double
buffer or as a 24 plane configuration with a 24 plane double buffer.  A
customer can have an 8 plane PXG upgraded to 24 planes in the field by Digital
Field Service.  An optional 24-bit Z-buffer can be added to either
configuration for use by applications requiring fast hidden-line and hidden-
surface removal.  The 24-bit Z-buffer can be added either in the factory by
Digital manufacturing or in the field by the customer.

	PXG Option Specifics:
		Module size		Requires two TURBOchannel slots
		Planes			8 with 8 plane double buffer or
					24 with 24 plane double buffer
					Field upgradeable from 8 to 24 planes
		Resolution		1280 x 1024
		Refresh			66 Hz
		Supported Monitors	19" Color (VRT19)
		When used		Customers who need low cost 3D or true
					color 3D with Z-buffer plus an open
					TURBOchannel slot
		Applications		MCAD, Earth Sciences, True Color
					Imaging
DECstation 5000 Model 200PXG Turbo
- ----------------------------------
The DECstation 5000 Model 200PXG Turbo utilizes two PixelStamp graphics
engines that operate in parallel plus an Intel i860 microprocessor which
operates at 40 MHz.  The combination of the additional PixelStamp engine plus
the faster i860 geometry accelerator helps to boost 3D performance by as much
as 100% over the PXG option for certain graphics operations.  The PXG Turbo
option comes standard with 24 planes plus a 24 plane double buffer, 24-bit
Z-buffer, and 24 planes of additional offscreen memory which can be used by
the X server for pixmaps.  The PXG Turbo option is the only option which
consumes all three DECstation 5000 Model 200 TURBOchannel slots.

	PXG Turbo Option Specifics:
		Module size		Requires three TURBOchannel slots
		Planes			24 with 24 plane double buffer, 24-bit
					Z-buffer, and 24 plane pixmap memory
		Resolution		1280 x 1024
		Refresh			66 Hz
		Supported Monitors	19" Color (VRT19)
		When used		Customers who need highest performance
					3D
		Applications		MCAD, Earth Sciences, Molecular Modeling


DECstation 2D COMPARISON CHART

			DS2100		DS3100		DS5000		DS5000
							200CX		200PX
			-------		-------		-------		-------
 Processor		 R2000		 R2000		 R3000		 R3000
 Clock Rate (MHz)	 12.5		 16.67		  25		  25
 SPECmarks		  8.3		 11.3		 18.5		 18.5
 Dhrystone MIPS		 11.8		 14.9		 24.2		 24.2
 Linpack FP (SP/DP)	2.8/1.2		4.0/1.6		6.4/3.7		6.4/3.7
 2D Vec/Sec		  60K		  80K		 130K		 300K
 Price (19"C,8MB)	$11,950		$14,900		$19,000		$21,500


DECstation 5000/200 3D COMPARISON CHART

			DS5000		DS5000		DS5000		DS5000
			200CX		200PX		200PXG	    200PXG Turbo
			-------		-------		-------		-------
 3D Linked Vec/Sec	  60K		  90K		 330K		 440K
 Ind. Triangles/Sec*	   8K		  20K		  50K		  60K
 Linked Triangles/Sec*	   9K		  20K		  70K		 115K

 * Numbers represent Gouraud shaded, Z-buffered triangles/sec except CX and PX


DECsystem 5000 MODEL 200

The DECsystem 5000 Model 200 utilizes the same technology as the DECstation
5000 Model 200 without the graphics options.  This allows the DECsystem 5000
Model 200 to be utilized as a compute server, a file server, a peripheral
server, a future gateway to an FDDI backbone, etc.  Since the DECsystem 5000
Model 200 does not include graphics, all three TURBOchannel slots are available
for a variety of options from additional SCSI controllers to Thickwire Ethernet
options to VME and FDDI adapters.  It is possible that the DECsystem 5000/200
could be configured with 3 additional SCSI options, each supporting 7 devices,
for a system maximum of 28 devices.  Since one of these SCSI peripherals is
a 1.0 GByte disk, one can easily see the advantages that a DECsystem 5000
Model 200 could offer.



The DECsystem 5000 Model 200 is an effective competitor against IBM's new
RS6000 POWERserver series, offering comparable functionality to the deskside
POWERserver 520 at a price competitive with the POWERserver 320.  See the
DECstation/DECsystem 5000 Model Competitive Update article for more information.


FEATURES/BENEFITS

	Feature					Benefit
	-------					-------
o  Based on 25 MHz R3000		o  60% higher performance with full
					   binary compatibility with all Digital
					   RISC-based systems

o  ECC memory up to 120 MB		o  Highest memory capability of any
					   desktop system in industry plus
					   the reliability of ECC

o  TURBOchannel I/O Interconnect	o  Fastest desktop I/O interconnect in
					   industry fully open to system and
					   option vendors

o  Future VME-bus expansion		o  Bus of choice for technical markets
					   Leadership performance

o  Future FDDI network interface	o  Desktop access to 100Mb/sec FDDI
					   networks - Leadership performance

o  New 1.0 GByte hard disk		o  State-of-the-art storage capacity
					   at lowest price/MB

o  New 1.2 GByte 4mm DAT backup		o  4mm is emerging standard for high
					   capacity tape backups


DECstation/DECsystem 5000 MODEL 200 BUS STRATEGY

The overall strategy for the DECstation and DECsystem product line is to offer
a choice of open I/O interconnects.  TURBOchannel is an innovative, open,
high-performance I/O interconnect integral to the DECstation/DECsystem 5000
Model 200 and will also be implemented in future DECstations and DECsystems.
Industry-standard VME bus will be supported on the DECstation/DECsystem 5000
Model 200 as well as future workstations that implement the TURBOchannel.
Futurebus+ will be supported via an adaptor to TURBOchannel in 1992 as the
natural follow-on to VME.  SCSI will continue to be supported as an industry
standard, low performance I/O interconnect for disks, tapes, printers, etc.
For more information on TURBOchannel or VME I/O interconnects, please refer
to their associated article in this Sales Update issue.


ORDERING INFORMATION - REFER TO SOC FOR COMPLETE INFORMATION

All DECstation/DECsystem 5000 Model 200 systems can be configured using one
of two system configuration guides - Packaged or Custom.  Both methods are
designed to provide you with the best system to fit your needs.  Please refer
to the Systems and Options Catalog (SOC) for Custom Systems ordering
information.

Packaged Systems
- ----------------
The Packaged System concept simplifies ordering by offering many different
system designs in ready-to-ship configurations.  Packaged systems have faster
delivery times and lower costs than comparable Custom systems.



Depending on which Packaged system you choose, the system will include a
graphics controller, monochrome or color monitor, and 8, 16, or 24 Mbytes of
memory.  The server systems (DECsystem 5000/200) do not include a graphics
controller or a monitor.  A VT-style terminal should be ordered as a console
device for the server systems.

It is important to note that only field-installable options can be ordered with
Packaged systems, no additions or substitutions can be made to the packaged
systems.  If an option is needed, either order a field-installed option and 
have it installed after the Packaged system arrives at your site or order a
system with the option off the Custom menu.

Standard on all Packaged systems is:

   - U.S. keyboard (120-V systems only, workstations only)
   - Mouse (workstations only)
   - SCSI controller
   - ThinWire Ethernet port with a T-connector and two terminators
   - 6-foot power cord (wall socket to system box, 120-V systems only)
   - 3-foot convenience power cord (monitor to system box)
   - 10-foot video cable (workstations only)
   - 10-foot keyboard/mouse cable (workstations only)
   - English language user documentation
   - ULTRIX Software Licenses
   - PHIGS Runtime License is included in DECstation 5000/200PXG and PXG Turbo

To order a Packaged DECstation/DECsystem 5000 Model 200 system, simply order a 
system from Step 1. 120-V systems include a U.S. keyboard and all required power
cords.  Order software from Step 2 if necessary.

- --------------------------------------------------------------------------------
Step 1 - Packaged Systems
- --------------------------------------------------------------------------------
					Diskless	With 332-Mbyte disk
- --------------------------------------------------------------------------------
DECstation 5000 Model 200CX 19", Greyscale, 2 free TURBOchannel slots
   8 Mbyte, 120 V/240 V/S. Hemi.	PM361-BD/BE/BF	PM361-MD/ME/MF
- -------------------------------------------------------------------------------
DECstation 5000 Model 200CX 16", Color, 2 free TURBOchannel slots
   8 Mbyte, 120 V/240 V/S. Hemi.	PM361-BG/BH/BJ	PM361-MG/MH/MJ
- -------------------------------------------------------------------------------
DECstation 5000 Model 200CX 19", Color, 2 free TURBOchannel slots
   8 Mbyte, 120 V/240 V/S. Hemi.	PM361-BK/BL/BM	PM361-MK/ML/MM
- -------------------------------------------------------------------------------
DECstation 5000 Model 200PX 19", 2D Accelerator, 2 free TURBOchannel slots
   8 Mbyte, 120 V/240 V/S. Hemi.	PM362-BK/BL/BM	PM362-MK/ML/MM
   16 Mbyte, 120 V/240 V/S. Hemi.	PM362-DK/DL/DM	PM362-PK/PL/PM
- -------------------------------------------------------------------------------
DECstation 5000 Model 200PXG 19", 8-plane 3D, 1 free TURBOchannel slot
   16 Mbyte, 120 V/240 V/S. Hemi.	PM363-DK/DL/DM	PM363-PK/PL/PM
- -------------------------------------------------------------------------------
DECstation 5000 Model 200PXG 19", 24-plane 3D, 1 free TURBOchannel slot
   16 Mbyte, 120 V/240 V/S. Hemi.	PM364-DK/DL/DM	PM364-PK/PL/PM
- -------------------------------------------------------------------------------
DECstation 5000 Model 200PXG TURBO 19", High-performance, 24-plane 3D, no slots
   24 Mbyte, 120 V/240 V/S. Hemi.	PM365-EK/EL/EM	PM365-RK/RL/RM
- -------------------------------------------------------------------------------
DECsystem 5000 Model 200 Server, 3 free TURBOchannel slots
   16 Mbyte, 120 V/240 V				PM369-PY/PZ
- --------------------------------------------------------------------------------

Step 2 - Software Media and Documentation

An ULTRIX s/w kit must be ordered for the first DECstation/DECsystem 5000/200
on site.

ULTRIX kits include documentation, ULTRIX single-user kit, and DECwindows
software on TK50 or compact disc.

QA-VV1AA-H5	ULTRIX WS (RISC) SW UPD TK50 for DECstation 5000/200
QA-VV1AA-H8	ULTRIX WS (RISC) SW UPD CDROM for DECstation 5000/200

QA-VYVAA-H5	ULTRIX (RISC) SW UPD TK50 for DECsystem 5000/200
QA-VYVAA-H8	ULTRIX (RISC) SW UPD CDROM for DECsystem 5000/200

The following kits serve DECstation 5000 Model 200 networked systems from a
RISC or VAX server.

QA-VV1AB-H5	ULTRIX WS (RISC) SW SERVER (TK50)
QA-VV1AB-H8	ULTRIX WS (RISC) SW SERVER (CDROM)

The following PHIGS kits and licenses are available.  Note that
DECstation 5000 Model 200PXG and PXG Turbo 3D systems include DEC PHIGS
Runtime license.

QL-VW7AA-AA	DEC PHIGS Runtime License for DECstation/DECsystem 5000/200
QL-VW6AA-AA	DEC PHIGS Development License for DECstation/DECsystem 5000/200

QA-VW7AA-H5	DEC PHIGS Runtime s/w (TK50)
QA-VW6AA-H5	DEC PHIGS Development s/w (TK50)
- --------------------------------------------------------------------------------
DECstation/DECsystem 5000 Model 200 Options and Upgrades
- --------------------------------------------------------
In addition to what can be ordered from the Packaged and Custom menus, options
and upgrades can be added to a system unit after it has been installed at the
customer site.  They allow the customer to grow the system as needs change over
time.  All ordering steps are optional.
- --------------------------------------------------------------------------------
Step 1 - TURBOchannel and Other Options

PMAD-AB		Thickwire Ethernet TURBOchannel option card.  Requires one
		TURBOchannel slot.
PMAZ-AB		Additional SCSI TURBOchannel option card.  Supports 7 external
		devices.  Requires one TURBOchannel slot.

Note:  -  The following PMAG-xB part numbers are graphics module upgrades for
    	  the DECstation 5000/200.  If upgrading from a DECstation 5000/200CX
	  color frame buffer, a new VRT19 monitor must be ordered in addition
	  to a graphics module.  The upgrades are not available at present for
	  DECsystem 5000/200.
       -  With the exception of PMAG-GB, all other options are customer
          installable/upgradeable.

PMAG-CB		DECstation 5000/200 PX upgrade (2D accelerator).  Requires one
		TURBOchannel slot.
PMAG-DB		DECstation 5000/200 PXG 8-plane upgrade (low-end 3D).  Requires
		two TURBOchannel slots.
PMAG-EB		DECstation 5000/200 PXG 24-plane upgrade (mid-range 3D).
		Requires two TURBOchannel slots.
PMAG-FB		DECstation 5000/200 PXG Turbo upgrade (high-end 3D).  Requires
		three TURBOchannel slots.

PMAG-GB		DECstation 5000/200 8-to-24-plane upgrade for PXG.  Requires
		Field Service installation.  Doesn't require a TURBOchannel slot
PMAG-HA		24-bit optional Z-buffer for DECstation 5000/200PXG only.  Does
		not require a TURBOchannel slot.
- --------------------------------------------------------------------------------

Step 2 - Mass Storage

Note: - 240-V systems require an additional power cord for each external
        expansion box.
      - xA = 120 V.  x3 = 240 V.
      - Maximum of 7 external devices can be supported by the base SCSI
	controller.  If additional SCSI devices are required, a SCSI
	TURBOchannel option card must be added (customer installable).
      - Each RZ5X, TK50Z, TLZ04, and RRD40 expansion device includes the
	following cables:
	   BC19J-1E	   18-inch, 50-pin to 50-pin SCSI cable

RZ5X-CA/C3	332-Mbyte disk drive in a BA42 expansion box.
RZ55-UK		332-Mbyte disk upgrade for BA42
RZ5X-FA/F3	665-Mbyte disk drive in a BA42 expansion box.
RZ56-UK		665-Mbyte disk upgrade for BA42
RZ5X-HA/H3	1.0-Gbyte disk drive in a BA42 expansion box.
RZ57-UK		1.0-Gbyte disk upgrade for BA42
RZ5X-GA/G3	Two 1.0-Gbyte disk drives in a BA42 expansion box.
TLZ04-FA	1.2-Gbyte 4mm digital audio tape (DAT) in external expansion box
TK50Z-GA/G3	95-Mbyte streaming tape in a BA40 expansion box.
RRD40-FA/F3	600-Mbyte compact disc drive in an expansion box.
- --------------------------------------------------------------------------------
Step 3 - Memory

MS02-AA		8-Mbyte, ECC memory module.  System supports up to 120 Mbytes
- --------------------------------------------------------------------------------
Step 4 - Optional Input Devices

Note: - The tablet is used in place of the mouse
      - The Lighted Programmable Function Keyboard (LPFK) and Programmable
	Function Dials (PFD) can be ordered as a pair or separately.  The
	LPFK and PFD packages listed below include a Peripheral Control
	Module (PCM) which provides multiplexing of both LPFK and PFD into
	a single RS232 port.  In addition, each package includes a power
	supply, cables, and user documentation.

VSXXX-AB       	11-inch x 11-inch Tablet with a 2 button stylus and a 4 button
		puck

VSX10-AA	Combination LPFK and PFD Package - 120 V
VSX10-A3	Combination LPFK and PFD Package - 240 V

VSX20-AA	LPFK Package - 120 V
VSX20-A3	LPFK Package - 240 V

VSX30-AA	PFD Package - 120 V
VSX30-A3	PFD Package - 240 V


RZ55-BASED PACKAGED SYSTEMS

In addition to the diskless Packaged Systems available, there are also a number
of Packaged Systems that include an RZ55 in an expander box at a $2000 discount
off of sum-of-the-pieces.  The reason for offering this package is to be able
to have very attractively priced, standalone configurations available.  The
competition can sell standalone systems with 2x100MB disks vs. DECstations
which require an RZ55 to support standalone operation.  By selling a standalone
Packaged System which competes directly with competitive systems with two 100MB
disks, our total product offering becomes much more competitive.

PRICING INFORMATION

Packaged Systems, no disks		  Model Number     U.S. MLP    
- --------------------------		  --------------   -------
DS5000/200CX,19"grs,8MB,no disk		  PM361-BG/BH/BJ   $14,995
DS5000/200CX,16"clr,8MB,no disk		  PM361-BG/BH/BJ   $16,500
DS5000/200CX,19"clr,8MB,no disk		  PM361-BK/BL/BM   $19,000
DS5000/200PX,19"2DA,8MB,no disk		  PM362-BK/BL/BM   $21,500
DS5000/200PX,19"2DA,16MB,no disk	  PM362-DK/DL/DM   $27,100
DS5000/200PXG,19"3D 8pl,16MB,no disk	  PM363-DK/DL/DM   $29,100
DS5000/200PXG,19"3D 24pl,16MB,no disk	  PM364-DK/DL/DM   $36,100
DS5000/200PXG Turbo,19"3DH,24MB,no disk	  PM365-EK/EL/EM   $56,200

Packaged Systems w/bundled RZ55		  Model Number     U.S. MLP    
- -------------------------------		  --------------   -------
DS5000/200CX,19"grs,8MB,RZ55/exp	  PM361-MG/MH/MJ   $18,495
DS5000/200CX,16"clr,8MB,RZ55/exp	  PM361-MG/MH/MJ   $20,000
DS5000/200CX,19"clr,8MB,RZ55/exp	  PM361-MK/ML/MM   $22,500
DS5000/200PX,19"2DA,8MB,RZ55/exp	  PM362-MK/ML/MM   $25,000
DS5000/200PX,19"2DA,16MB,RZ55/exp	  PM362-PK/PL/PM   $30,600
DS5000/200PXG,19"3D 8pl,16MB,RZ55/exp	  PM363-PK/PL/PM   $32,600
DS5000/200PXG,19"3D 24pl,16MB,RZ55/exp	  PM364-PK/PL/PM   $39,600
DS5000/200PXG Turbo,19"3DH,24MB,RZ55/exp  PM365-RK/RL/RM   $59,700
DSYS5000/200 server,16MB,RZ55/exp	  PM369-PY/PZ	   $24,095

A la Carte Systems, no monitor, no disk	  Model Number     U.S. MLP    
- --------------------------------------	  --------------   -------
DS5000/200CX,color frame buffer,8MB	  PM371-BY/BZ      $14,600
DS5000/200PX,2D accelerator,8MB		  PM372-BY/BZ      $17,100
DS5000/200PXG,8pl 3D,8MB		  PM373-BY/BZ      $19,100
DS5000/200PXG,24pl 3D,8MB		  PM374-BY/BZ      $26,100
DS5000/200PXG Turbo,high perf. 3D,8MB	  PM375-BY/BZ      $40,600
DSYS5000/200 server,8MB		  	  PM379-BY/BZ      $14,995

Option Description			  Model Number     U.S. MLP    
- ------------------			  --------------   -------
DS5000/200 8MB ECC memory		  MS02 -AA         $ 5,600

DS5000 Thickwire Ethernet option	  PMAD -AA/AB      $   400
DS5000 SCSI Controller option		  PMAZ -AA/AB      $ 1,500

DS5000 8Pl 2D Accelerator Upgrade	  PMAG -CB         $ 3,500
DS5000 8Pl 3D Upgrade			  PMAG -DB         $ 5,500
DS5000 24Pl 3D Upgrade			  PMAG -EB         $12,500
DS5000 High performance 3D Upgrade	  PMAG -FB         $25,000

24Pl upgrade for DS5000 8Pl 3D		  PMAG -GB         $ 7,000
24-bit Z-buffer option for DS5000/PXG	  PMAG -HA/HB      $ 3,000


WARRANTY

DECstation and DECsystem 5000 Model 200 customers have four warranty options
to choose from that provide the flexibility to satisfy specific customer
support needs.  The recommended warranty level is the STANDARD Warranty
Support option which will provide DECstation and DECsystem 5000 Model 200
customers with full hardware and software support for the duration of the
warranty period, one year.  The spectrum of warranty offerings available to
DECstation and DECsystem 5000 Model 200 customers include:  Product Foundation
Warranty, Basic Hardware Support, Basic System Support (Standard), and
DECsystem Support 9/5.  (Refer to the preface of the USPL for warranty
information.)

All add-on options, including TURBOchannel upgrades and internal memory, for
prepackaged/preconfigured systems are customer installable with the exception
of the PMAG-GB 24 plane upgrade for PXG.

Post-Warranty Services
- ----------------------
For support beyond the first year, Digital offers a comprehensive range of
services that appeals to a diverse customer base--from customers who are
very price-sensitive to customers who want the highest value-added service.
Customers have the same flexibility as during the warranty with additional
options for specialized service programs to meet their business needs.  For
more information on any of the warranty offerings or continuation services,
contact your local Field Service Sales support office.


AVAILABILITY

o  Start placing orders					   April 3, 1990

o  DECstation 5000 Model 200CX ready			   April 6, 1990
   for shipment w/UWS V2.1d

o  DECsystem 5000 Model 200 ready for			   April 6, 1990
   shipment w/Ultrix V3.1d

o  DECstation 5000 Model 200PX ready for		   May, 1990
   shipment w/UWS V4.0

o  DECstation 5000 Model 200PXG and			   August, 1990
   DECstation 5000 Model 200PXG Turbo ready
   for shipment w/UWS V4.0

o  RZ57 and TLZ04 devices				   See associated
							   articles this issue

o  VME and FDDI adaptors				   See associated
							   articles this issue

o  All other options listed in Ordering			   April 6, 1990
   Information above

Graphics Upgrade Availability
- -----------------------------
The DECstation 5000 Model 200CX will be available for customer delivery at
announcement and the DECstation 5000 Model 200PX will be available within
approximately 30 days of announcement.  The PXG and PXG Turbo 3D configurations
will be available in August, 1990.  For customers who want to purchase PXG
or PXG Turbo systems, they can start with a PX and then upgrade when the 3D
options become available by purchasing an add-on module.  This way they get
to use the same VRT19 monitor that they purchased with their PX system.
And since PHIGS is the same on the 2D CX and PX as it is on the 3D PXG and
PXG Turbo, 3D applications developed on the CX and PX will migrate easily to
the PXG and PXG Turbo.


COMPETITIVE SUMMARY


DESKTOP: ENTRY-LEVEL 2D, 19" COLOR, 8MB, DISKLESS
=================================================
                     DS5000    DS3100   DS2100    SUN      H-P      IBM
                      M200                       SPCst1   834CH    RS6000*
                                                                    M320
 Processor            R3000    R2000    R2000    SPARC    HP-PA     IBM
 Clock Rate (MHz)      25       16.67    12.5     20       15       20
 SPECmarks             18.5     11.3      8.3      8.3      9.5     22
 Dhrystone MIPS        24       14.9     10.8     13       14       27
 MFLOPS (Linpack-DP)   3.7       1.6      1.2      1.4      2.0      7.4
 Price                $19K     $14.9K   $12K     $12.5K   $22K     $16.3K

DESKTOP: ACCELERATED 2D/ENTRY 3D, 19" COLOR, 8MB, DISKLESS
==========================================================
                   DS5000/200 VS3520/    SGI      SUN      H-P      IBM
                   PX and PXG   3540    4D/25   SPCst1GX  825CHX   RS6000*
                                                          825SRX    M320
 Processor            R3000     VAX      R3000    SPARC    HP-PA    IBM
 Clock Rate (MHz)      25       31       20       20       12.5     20
 VUPS                  20       5-10     11       10                22
 Dhrystone MIPS        24                16       13        8       27
 MFLOPS (Linpack-DP)   3.7       0.4      1.4      1.4      0.7      7.4
 2D Vec/sec           300K     100K   85K/200K   400K      70K      90K
 3D Vec/sec           330K     100K   85K/200K   175K      70K(SRX) 90K
 3D Triangle/sec       70K     7-10K   5K/20K     n/a       8K      10K
 Price            PX: $21.5K   $29.5K  $18.5K/    $15K  $42.5K(CHX)
                  PXG:$23.5K           $25.5K           $56.5K(SRX) $18.1K
                                     Basic/Turbo

HIGH-END 3D (DOUBLE BUFFER, Z-BUFFER, FASTEST GRAPHICS),19" COLOR, 8MB,DISKLESS
===============================================================================
                   DS5000/200 VS3520/    SGI      SUN      H-P      IBM
                   PXG Turbo    3540   4D/50GTB SPCstn   835SRX    RS6000*
                                                330GXP    Turbo     M730
 Processor            R3000     VAX      R2000    SPARC    HP-PA     IBM
 Clock Rate (MHz)      25       31        8       25       15       25
 VUPS                  20       5-10      7       14                30(est)
 Dhrystone MIPS        24                         16       14       34
 MFLOPS (Linpack-DP)   3.7       0.4      0.7      2.6      2.0     10.9
 2D Vec/sec           440K     100K     300K              240K      990K
 3D Vec/sec           440K     100K     300K      90K     240K      990K
 3D Triangle/sec       90K     7-10K     25K      5.5K     38K      120K
 Price                $45K    $40.5K    $60K      $35K  $63.5K      $75K+

SERVER, 16MB, 300MB DISK
========================
                  DSYS5000/200  MIPS     SUN      H-P      IBM
                               RC3240  SPCsrv    815S    RS6000*
                                         330               M320
 Processor            R3000    R3000    SPARC    HP-PA     IBM
 Clock Rate (MHz)      25        ?       25       10       25
 Cache Size           128KB      ?      128KB     16KB    40KB
 Memory min/max      8/120MB     ?      8/40MB   8/96MB   8/32MB
 VUPS                  20        ?       14                22
 Dhrystone MIPS        24       18       16        7       27
 MFLOPS (Linpack-DP)   3.7       ?       2.6      0.6      7.4
 VME slots            9/opt    ?/std    5/std    7/std     none
 Price               $24.1K    ~$40K    $35.9K   $38.8K   $20.4K

* All IBM diskless configurations include a 120MB page/swap disk due to the
  fact that AIX on IBM does not support diskless operation

QUESTIONS AND ANSWERS

CAN A CUSTOMER UPGRADE FROM A DECstation 2100 OR DECstation 3100 TO A
DECstation 5000 MODEL 200?

At this time, a customer with a DECstation 2100 or DECstation 3100 can re-use
any external disks, tapes, or other external SCSI peripherals that he has
attached to his existing system.  We are presently exploring the possibility
of offering a DECstation 5000 Model 200CX without a monitor so that DECstation
2100 and DECstation 3100 customers could re-use the monitor, keyboard, and
mouse as well.  If you have a customer interested in such an upgrade, please
contact Worksystems Product Management.

WILL 4Mbit DRAMS BE AVAILABLE IN THE FUTURE?

It is our goal to offer 4Mbit DRAMS in the future.  This would boost memory
capacity to 480 MB.  It may not be possible to mix-and-match 8MB and 32MB
modules in the future.  Look for this capability some time in late 1991 or
early 1992.
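
A quick check on where those numbers come from, using only the figures
quoted above and assuming the number of module slots itself stays the
same:

   120 MB maximum / 8 MB per MS02 module    = 15 memory slots
   15 slots x 32 MB per 4Mbit DRAM module   = 480 MB maximum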

CAN A CUSTOMER CONFIGURE A SERVER OR WORKSTATION WITH MULTIPLE SCSI OR
ETHERNET OPTIONS?

Yes, it is conceivable that a DECsystem 5000 Model 200 could support up to
4 total SCSI controllers or 4 total Ethernet controllers.

WHEN SHOULD I RECOMMEND A 2D ACCELERATOR (PX) OVER A COLOR FRAME BUFFER (CX)?

We expect the 2D accelerator (PX) option to be very popular - especially in
the ECAD marketplace.  However, the graphics performance of the 8 plane frame
buffer is 60% faster than the DECstation 3100 and offers very competitive
graphics performance adequate for many applications.  Customers requiring very
fast 2D vector performance (300K) should move to the PX option.

WON'T VME PERFORMANCE BE SLOW DUE TO THE FACT THAT IT'S AN OPTIONAL ADAPTOR?

Due to the fact that our VME adaptor interfaces to our very high performance
TURBOchannel I/O interconnect, we expect to achieve excellent throughput to
VME options.  In fact, the DECstation/DECsystem 5000 Model 200 may end up
having the fastest VME interface available in the industry.  See related
Sales Update article in this issue.

WILL DIGITAL BE OFFERING OTHER TURBOCHANNEL OPTIONS IN THE FUTURE?

It is our intention to offer whatever is necessary to make our product offering
as robust as possible.  There are many options still required.  We will be
working with third parties to satisfy some needs, and where necessary will be
developing our own solutions.  If you have a customer with a specific option
need that is not satisfied today, contact Worksystems Product Management.

ISN'T IT A PROBLEM THAT THE HIGH PERFORMANCE PXG TURBO SYSTEM DOESN'T HAVE ANY
OPEN TURBOchannel SLOTS?

The DECstation 5000 Model 200PXG Turbo will only be sold to those few select
customers that require the absolute best 3D performance.  In most cases, that
means that they have a very specialized application and don't require optional
expandability.  For those customers who do require expandability, we can offer
the DECstation 5000 Model 200PXG in a 24 plane configuration with an optional
24-bit Z-buffer and still have a TURBOchannel slot open for expansion.


+---+      +------------------------------------------+
| 3 |      | Turbochannel Q&A - introductory details. |
+---+      +------------------------------------------+

1.  GENERAL TURBOchannel QUESTIONS:

    Q:  What is TURBOchannel and what are you trying to do with it?

    A:  TURBOchannel is Digital's new, OPEN, high performance I/O 
        interconnect integral to the DECstation 5000 Model 200, DECsystem 
        5000 Model 200, and future DECstations.  It is "open to the 
        industry", including option, system, and chip vendors.  This means 
        that the TURBOchannel design is available for license to anyone; 
        the license is no-charge/royalty-free.  

        TURBOchannel is differentiated from other buses by:
           a. industry-leading, proven high performance: 100 MB/sec peak 
	      transfer rate, with 93 MB/sec achieved on the DECstation 
              5000/200.  
           b. designed for simplicity, longevity and scalability
           c. completely open -- including option, system, and chip
              vendors; no-charge/royalty-free license


        Digital is working with leading semiconductor manufacturers to
        make interface ASICs and ASIC design services available to
        independent option vendors that are developing TURBOchannel
        options.

        The TRI/ADD Program, a new program residing in Palo Alto with the 
        RISC Workstations Engineering group, provides: recruiting, 
	technical support, and marketing support for all independent 
	hardware vendors (TURBOchannel, VME, and SCSI).  Vendors can call 
	the TRI/ADD Program at 1-800-678-OPEN immediately [as of April 
	3rd announcement] to receive an information package that contains: 
	a technical data sheet with enough information to determine if 
	they might be interested in designing a TURBOchannel product; a 
	TURBOchannel overview; information about the free technical and 
	marketing support available; and a combined TURBOchannel license 
	and TRI/ADD Program registration form.

        Digital is making available a comprehensive technical documentation
        kit which includes the TURBOchannel hardware specification, mechani-
        cal drawings, information on writing a device driver, and system 
        parameters.  This TURBOchannel Developers Kit is targeted at both 
        option and system vendors and will be available in June 1990 for a 
        $65 charge (essentially covers printing costs). Additional docu-
        mentation will be phased in over time: option and system firmware
        specifications, option and systems designers guides, and system
        mechanical drawings.


    Q:  Why is Digital offering another "proprietary" bus? [trick Q]

    A:  Digital developed TURBOchannel but is sharing the technology with
	the industry on a no-charge, no-royalties basis.  There are no 
	restrictions on who can use the TURBOchannel design to develop 
	TURBOchannel products.  The option and system vendors we have 
	talked with have shown strong interest in the high-performance, 
	concise design of TURBOchannel, in the fact that TURBOchannel is 
	available today and has been designed for the long term, and in 
	the fact that there are no charges, no royalties, and no 
	restrictions on who can use the TURBOchannel design.  End users 
	also benefit from TURBOchannel's low cost since it is integral to 
        the system and no external expansion box is necessary.


        TURBOchannel was chosen as the integral I/O interconnect for the
        long-term because it has much higher performance than existing
        open buses:
   
                             proven performance
             TURBOchannel     93 MB/sec
             VME             ~37 MB/sec
	     SBus	     ~27 MB/sec
             MCA             ~13 MB/sec

        TURBOchannel brings industry-leading I/O performance to the 
	desktop TODAY, with an architecture that allows options developed 
	today to run on future DECstations and other vendors' systems that 
	incorporate TURBOchannel.
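
        As a rough way of seeing what sustained throughput a running
        ULTRIX system actually delivers, the sketch below times a raw
        read with dd.  The device name rz1 is only an example, and a
        single SCSI drive saturates long before TURBOchannel does, so
        this measures the disk path rather than the interconnect
        itself:

           time dd if=/dev/rrz1c of=/dev/null bs=64k count=1024
                        # reads 64 MB from the raw device; divide
                        # 64 MB by the elapsed time for the rate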


    Q:  Is TURBOchannel available in all of Digital's RISC workstations?
        What is Digital's long-term commitment to TURBOchannel?

    A:  TURBOchannel is currently available only in the DECstation 5000/200
        workstation and DECsystem 5000/200 server, however it is planned 
	for at least the next several generations of DECstations.  TURBO-
	channel will not be retrofitted on the DECstation 2100 or 3100 
       	platforms.  Digital has protected customers' investments in these 
	first members of the DECstation workstation family by ensuring 
	that applications developed on the DECstation 2100 and 3100 run 
	on subsequent family members; monitors, keyboards, SCSI peripherals, 
        and mice purchased for the 2100 and 3100 platforms can be used 
	with the DECstation 5000/200.  

        
    Q:  Will TURBOchannel also be available in VAX workstations?

    A:  The TURBOchannel design is under review for inclusion in future 
        VAX workstations and other vendors' systems, although there are 
        no commitments to report.  The TURBOchannel architecture is not 
        limited by any specific processor architecture.


    Q:  Will Digital be offering a Qbus adapter for TURBOchannel?

    A:  Digital has no current plans to develop a Qbus adapter.	However, 
 	TURBOchannel's design is available to the industry so nothing 
	prevents another vendor from developing such an adapter.


    Q:  What options are being developed to connect to TURBOchannel?

    A:  Digital-developed TURBOchannel options available on DECstation
        5000/200 and number of TURBOchannel slots used:
        -  200/CX:    1024x864, 8-plane frame buffer, operating at
		      60 Hz [1 slot]
        -  200/PX:    accelerated 2D graphics option supporting 1280x1024,
		      8 planes of color, full-page double-buffering, at
		      66 Hz; with PixelStamp rendering chipset [1 slot]
        -  200/PXG:   accelerated 3D graphics option supporting 1280x1024,
		      up to 24 planes of color, full-page double-buffering,
		      at 66 Hz; PixelStamp; Intel i860 for geometric
		      transformations; optional 24-bit Z buffer [2 slots]
        -  200/PXG Turbo:  advanced 3D graphics option adds to the PXG
	              features an additional rendering processor, an addi-
		      tional 24 planes image memory, faster i860 [3 slots]
        -  additional SCSI controller [1 slot]
        -  DEC LANcontroller 700: additional Ethernet controller (Thickwire)
                      [1 slot]
        -  VMEbus adapter: program announced to be available in 1990 
	              [1 slot]
        -  DEC FDDIcontroller 700: FDDI adapter; program announced to be
		      available in the future [1 slot]
        
        Third parties will provide the majority of TURBOchannel options.
        As part of the April 3rd announcement, the following companies
        announce their intent to develop TURBOchannel options:
        -  CSPI:          
             SuperCard/TURBO(tm) vector processor
        -  Data Translation:  
             50 MHz data acquisition board                              
             color image processing boards for real-time image analysis
        -  RasterOps Corporation:         
             high resolution 24-plane true-color frame buffer
        -  Sky Computers, Inc.:      
             SKYbolt(tm) application accelerator 



2.  OPENNESS AND FIT WITH DIGITAL'S OPEN BUS STRATEGY

    Q:  How open is TURBOchannel?

    A:  TURBOchannel is "open to the industry."  This means that anyone
        can use the TURBOchannel design that's specified in the TURBO-
        channel Hardware Specification to design a TURBOchannel option, 
        develop a system that includes TURBOchannel, or design a TURBO-
        channel interface chip.  The TURBOchannel design is licensable 
        to anyone on a no-charge, royalty-free basis.  End users benefit
        from its openness by the availability of options and systems 
        developed by both Digital and other vendors.  In addition,
        TURBOchannel is the mechanism for providing access to industry-
        standard VME and Futurebus+ on DECstations.


        Among the free services that the new TRI/ADD Program offers in
        support of TURBOchannel product design efforts are: hardware and 
        software technical support, marketing support, an equipment 
        program and an online communications mechanism for vendors.
	Comprehensive technical documentation is available for a $65 
	charge which essentially covers our printing costs.  The kit
	includes: the TURBOchannel hardware specification, option 
	mechanical drawings, information on writing a device driver, and 
	system parameters.  Other documents will be phased in over time: 
	option and system firmware specifications, option and system 
	developers guides, system mechanical drawings.  Digital has put 
	a lot of emphasis on making TURBOchannel, and TURBOchannel tech-
	nical expertise, truly open to the industry.


    Q:  Do I need a license to design to TURBOchannel?

    A:  Yes.  Digital owns the technology, but anyone can be licensed
        to use the TURBOchannel design to develop TURBOchannel options, 
        systems, and chips. The license itself is very concise (1/2 page) 
	and is offered on a no-charge, royalty-free basis.  TURBOchannel
 	is open to the industry.


    Q:  Do I have to buy any parts from Digital to design to TURBOchannel?
        Where and how fast can I get a copy of the spec?

    A:  Anyone can receive a free TURBOchannel technical data sheet
        by calling the TRI/ADD Program at 1-800-678-OPEN.  It provides 
        enough of the essential information about TURBOchannel to decide 
        if it is something you want to design to: protocol description, 
	timing diagrams, option mechanical drawing, performance data, and 
	an overview ROM description.  It is part of a free information 
	package that also includes a description of the no-charge tech-
	nical and marketing support offered by the TRI/ADD Program, a 
	TURBOchannel Overview, and a combined TURBOchannel license and 
	TRI/ADD Program registration form.  The TURBOchannel Developers 
	Kit contains the spec and other technical information to help an 
	option, system or chip vendor to design to TURBOchannel.  This 
	kit is available in June for $65 which essentially covers our 
	printing costs.  We are developing a fast-order system for the 
	Developers Kit.

        In addition to offering technical documentation, Digital is
        working with semiconductor manufacturers to supply interface
        ASICs and ASIC design services to assist vendors in their
        development efforts.


    Q:  How does the announcement of TURBOchannel relate to Digital's 
        recent open bus announcement [February 14; BUSCON/West] of VMEbus
        and Futurebus+ support?  Why are you providing TURBOchannel when 
        you've just announced a commitment to VMEbus and Futurebus+?

    A:  TURBOchannel is a critical part of Digital's recently-announced
        open bus strategy.  Beginning with the DECstation 5000 Model 200,
        Digital provides a CHOICE of I/O interconnects so that customers
        have flexibility and openness:
        a.  TURBOchannel, an OPEN, high-performance I/O channel for appli-
            cations such as imaging and graphics that require the maximum 
            possible bandwidth (93 MB/sec peak DMA proven on DECstation 
            5000/200); with TURBOchannel, Digital offers industry-leading
	    I/O performance TODAY.
        b.  industry-standard, general purpose buses:  
            -  VME for applications that require an industry-standard 
               solution and access to existing VME applications. (Peak 
               VME architectural performance is 40 MB/sec.).  A VME
	       adapter connects directly into TURBOchannel; an external
	       expansion enclosure is required.
            -  Futurebus+ is being defined by an IEEE standards committee 
	       as the natural follow-on to VME.  Estimates are that it 
	       will be available in the 1992 timeframe in a *deskside* 
	       form factor.  Futurebus+ will be supported by future 
	       DECstations through a TURBOchannel adapter.

        A number of option vendors have announced [April 3rd] their intent 
        to develop TURBOchannel boards for applications that require 
        maximum possible bandwidth today.  Some of these option vendors 
        also offer VME boards for applications requiring industry-
        standard solutions.  Other option vendors have announced [April
        3rd] VME boards.  TURBOchannel, VME and Futurebus+ are
        complementary buses.


    Q:  When should a developer use a non-standard bus like TURBOchannel   
        rather than an industry standard bus like VMEbus or Futurebus+?

    A:  TURBOchannel is the best choice for high-performance, low-
        cost desktop solutions today.  TURBOchannel offers industry-
        leading performance: TURBOchannel's peak architectural DMA is 
        100 MB/sec while VME's peak DMA is 40 MB/sec.  With the DECstation 
        workstation family, TURBOchannel brings this high performance to 
        the desktop today.  No external expansion boxes are required 
	(as they are with VME), which results in lower-cost solutions.  Futurebus+ 
	is still under definition by an IEEE standards committee and 
	applications are not expected to be available for another several
	years.  Futurebus+ is also being defined as a deskside implementa-
	tion.  TURBOchannel is available today in a desktop form factor 
	(integral to the DECstation 5000/200) with the flexibility to move 
	unchanged into the future.  Board vendors and system vendors with 
	whom we've shared information about TURBOchannel are very excited 
	about it and how open we've made it.


    Q:  What standards bodies are evaluating TURBOchannel to endorse it 
	as a standard?  

    A:  Digital developed TURBOchannel and is choosing to share this 
        technology, without restrictions, with the industry to make its 
	performance and cost benefits widely available.  No formal 
	standards bodies are evaluating TURBOchannel to endorse it.  
	Standards can emerge either from a standards body, such as IEEE, 
	or from market demand.  Digital believes TURBOchannel will 
	succeed based on market demand, and efforts are underway to 
	maximize this demand.


    Q:  If Futurebus+ was selected by Digital to provide boundless 
        throughput capability for the long-term, doesn't this mean that
        TURBOchannel has a very limited life?  Why bother with TURBO-
        channel?

    A:  The Futurebus+ protocol currently under definition by its IEEE
        standards committee defines a deskside ("profile B") general 
        purpose bus architecture, with a peak DMA transfer rate expected 
        to be in the range of 200MB/sec - 1.7GB/sec in 1992 (for 32-bit - 
        128-bit packet systems respectively), across the entire bus.  
        Since Futurebus+ is being defined by a standards committee, we 
        expect it will become a standard quickly.  Digital supports 
        Futurebus+ as a standard and a natural follow-on to VME.  

        TURBOchannel brings industry-leading high performance to the
        desktop today (100MB/sec peak DMA, 93MB/sec proven on DECstation
        5000/200).  Because it is defined as an asymmetrical I/O archi-
        tecture rather than a general purpose bus architecture, the
        CPU and system memory are defined separately from the TURBO-
        channel architecture.  This allows the same TURBOchannel
        architecture available today to be used in future higher-
        performance systems where the CPU-to-memory interconnect is 
        optimized and where each option can have access to the entire 
        TURBOchannel bandwidth.  Such a hypothetical future system can 
        have 800 MB/sec effective I/O performance, and more, without
        changing TURBOchannel at all.  Also, as proven with other buses 
        in the marketplace, market life is longer than technological life.
 	TURBOchannel is an optimized solution -- optimized around 
        performance and simplicity -- for today and the long-term.  


    Q:  How does TURBOchannel compare to VAXBI and Qbus?
  
    A:  TURBOchannel is OPEN and Digital is licensing it to the industry 
	on a no-charge, royalty-free, no-restriction basis.  This degree 
	of openness, and the lack of licensing restrictions, contrasts 
	with VAXBI.  TURBOchannel's ability to bring new technologies to 
	the desktop and to allow current solutions to be supported in the 
	future compares favorably to Qbus.  TURBOchannel's openness, high 
	performance, and design simplicity -- coupled with excitement 
	from the vendors we have shared the TURBOchannel design with to 
	date -- positions it as a solution for the long-term.  Ultimately, 
	the industry will determine how "standard" TURBOchannel becomes 
	and how favorably it compares with the longevity of Qbus.



3.  COMMON TECHNICAL QUESTIONS

    Q:  How many open slots are available on the DECstation 5000/200?  
	Will this be changed in future platforms?

    A:  There are 3 TURBOchannel slots on the DECstation 5000 Model 200   
        for add-in boards.  Boards can be double- or triple-width to 
	accommodate sophisticated circuitry that requires more real 
	estate than is provided by a single-width option.  Double- and
	triple-width options also guarantee electrical power to drive
	complex integrated circuits requiring more than 26 watts per
	slot.  In addition, the TURBOchannel mechanical specification
	provides a sufficient spatial envelope to allow for double-sided
	surface-mount boards.  The underside of a board can accommodate
	passive components and low-profile active components.  Since the 
        TURBOchannel option and socket sizes will not change in future 
        DECstations, the number of TURBOchannel slots in future DEC-
        stations is constrained by internal box dimensions.  Other 
        system vendors using TURBOchannel as the integral I/O inter-
        connect may choose to offer different numbers of slots.


    Q:  IBM offers 4 MCA slots on their RS/6000 Model 320 yet we're
        saying that the DECstation 5000 Model 200 is more flexible.
        Please explain.

    A:  Of the 4 MCA slots, 2 are taken up by Ethernet and SCSI boards.
        The DECstation 5000/200 has integral Ethernet and SCSI.  After
        considering these 2 options, there are 2 open MCA slots and
        3 open TURBOchannel slots.  We also offer industry standard
        VME expansion through an adapter into TURBOchannel; IBM does
        not offer this.


    Q:  Is TURBOchannel's 100 MB/sec peak performance shared across all
        options or does each option get 100 MB/sec?

    A:  In the DECstation 5000/200 platform, there is a single TURBOchannel
        across all options.  Future platform architectures can assign 
        each option its own dedicated TURBOchannel so that each option 
        can have access to the entire 100 MB/sec bandwidth.  Also, since 
	TURBOchannel is defined separately from the system memory-to-CPU 
        interconnect, future systems can increase total system performance 
	by optimizing that connection without changing TURBOchannel.


    Q:  Will you be evolving the design of TURBOchannel?  Will TURBOchannel
        boards or systems that I design or buy today be obsoleted by a
        new and improved Digital design?

    A:  Digital's commitment to TURBOchannel is strong; TURBOchannel was
	designed for longevity and scalability.  The address space upper 
        limit today is 16GB while competing buses are in the 256MB (SBus) 
        - 4GB (MCA, NuBus, VME, EISA) range.  Minimums stated in the 
      	TURBOchannel specification are committed minimums, such as power 
	(26 watts/slot) and air flow (150 LFM).  Physical features will 
	remain unchanged: board dimensions (4.6" x 5.675"), connectors 
	(96-pin DIN), clock speed (12.5 MHz - 25 MHz), etc.  Architectural 
	features will remain unchanged: 4MB-512MB address space per slot, 
	1- to 128-word DMA bursts, etc.  Digital is committed to protecting 
	investments by end users, board vendors (including Digital!), 	
    	systems vendors, and chip vendors.
       

    Q:  Can you get more expansion slots by connecting an expansion box
        to TURBOchannel?

    A:  There is no architectural reason why an adapter to a TURBOchannel
        expansion box could not be implemented, since the per-slot address
        space is defined as 4MB-512MB.  Such an adapter would have to divide
        its slot's address space among the boards in the expansion box, so
        this is not practical within the DS5000/200's 4MB slot address space.


    Q:  Would a customer have problems interfacing non-SCSI storage to 
        TURBOchannel?  Is booting from such a storage device supported?

    A:  A customer would *not* have problems interfacing non-SCSI
     	storage to TURBOchannel, and booting from such storage devices
        *is* supported.


    Q:  Would non-SCSI storage offer increased performance?

    A:  Because TURBOchannel is open to the industry, vendors can 
 	develop any interfaces to it.  With IPI or ESDI, for example, 
	the performance inherent in such interfaces would be realized.


    Q:  Does TURBOchannel enhance my SCSI disk performance?

    A:  No. Disk performance is limited by the throughput of SCSI on
	the DS5000/200; however, faster disk controllers could be
	developed for TURBOchannel.  


    Q:  What will be the throughput of a VMEbus option connecting through
        the VME adapter to TURBOchannel?  reads and writes?

    A:  Throughput is limited by the slowest transfer rate in the path.
        VME's peak architectural transfer rate is 40 MB/sec whereas
        TURBOchannel's peak transfer rate is 100 MB/sec, so a VME option
        connected through the adapter can never achieve higher throughput
        (reads or writes) than VME's own 40 MB/sec.  Digital's VME adapter
        to TURBOchannel was a program announcement on April 3rd; performance
        characterizations of this adapter are not yet available.
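
        As a back-of-envelope illustration of the "slowest link" point
        above, the short C sketch below simply takes the minimum of the
        peak rates quoted in this answer; the function and variable names
        are invented for the example and are not from any Digital document.

            /* Effective peak throughput across a chain of interconnects
             * is bounded by the slowest link (figures from the answer
             * above: VME 40 MB/sec, TURBOchannel 100 MB/sec). */
            #include <stdio.h>

            static double chain_peak(const double rates[], int n)
            {
                double peak;
                int i;

                peak = rates[0];
                for (i = 1; i < n; i++)
                    if (rates[i] < peak)
                        peak = rates[i];
                return peak;
            }

            int main(void)
            {
                double rates[2];

                rates[0] = 40.0;     /* VME option -> VME adapter    */
                rates[1] = 100.0;    /* VME adapter -> TURBOchannel  */
                printf("peak through the chain: %.0f MB/sec\n",
                       chain_peak(rates, 2));
                return 0;
            }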


    Q:  How difficult is TURBOchannel to design to?

    A:  TURBOchannel is designed to be concise.  The concise protocol
        defines only 2 types of transactions:  I/O reads and writes,
        and DMA reads and writes.  TURBOchannel has 44 signal pins,
        compared with other buses: SBus, 82; VME, 106; MCA, 136; EISA,
        153; Futurebus+, 91-343.  A TURBOchannel Developers Kit is
        available and provides a concise specification (under 20 pages),
        and additional hardware and software technical documentation.
        Slave options are easiest to design.  Initial feedback from
        board vendors that have seen the TURBOchannel specification
	is that porting a board from SBus would be easiest (one estimate 
	was 2 weeks to port either a synchronous/asynchronous serial 
	line or a parallel port).  Imaging boards require synchronizing 
	signals with the X Server so the effort is greater and is 
	weighted towards software.  The TRI/ADD Program provides free 
	technical support (hardware and software) to vendors to help 
	optimize their design efforts.  In terms of interface costs, 
	estimated costs of interface silicon are: ~$20 for TURBOchannel, 
	~$25 for NuBus; ~$40 for SBus; ~$50 for VME; ~$200 for Futurebus+.  
	Digital is working with leading chip vendors to make interface 
	ASICs and ASIC design services available to option vendors 
	developing TURBOchannel options.


    Q:  What software is necessary for a TURBOchannel option?  Who
        provides this software?

    A:  Option vendors will provide their own device driver software.
        For applications that require synchronizing with the X server,
        option vendors will provide additional software.
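
        As a rough illustration of that split (not a Digital-published
        interface): the vendor's kernel driver exposes the option as a
        device special file, and applications then reach it through the
        ordinary system calls.  In the hedged C sketch below the device
        name /dev/tcopt0, the ioctl request TCOPT_GET_STATUS, and the
        driver entry-point names mentioned in the comments are invented
        for the example only; a real option vendor defines its own
        interface in its driver documentation.

            /* Illustrative only: how an application might talk to a
             * vendor-supplied TURBOchannel option driver through its
             * device special file.  /dev/tcopt0 and TCOPT_GET_STATUS
             * are hypothetical names for this sketch. */
            #include <stdio.h>
            #include <fcntl.h>
            #include <unistd.h>
            #include <sys/ioctl.h>

            #define TCOPT_GET_STATUS  _IOR('t', 1, int)  /* hypothetical */

            int main(void)
            {
                int fd, status;
                unsigned char buf[512];

                /* the driver's open entry point runs here */
                fd = open("/dev/tcopt0", O_RDWR);
                if (fd < 0) {
                    perror("open /dev/tcopt0");
                    return 1;
                }

                /* driver's ioctl entry point services this request */
                if (ioctl(fd, TCOPT_GET_STATUS, &status) == 0)
                    printf("option status: %#x\n", status);

                /* driver's read entry point moves data to the caller */
                if (read(fd, buf, sizeof buf) < 0)
                    perror("read");

                close(fd);
                return 0;
            }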


    Q:  If a Futurebus+ adapter connects directly to TURBOchannel on
        DECstations, how do I get full performance from Futurebus+?

    A:  The Futurebus+ architectural peak performance is defined as
        200 MB/sec for its 32-bit implementation, whereas TURBOchannel 
	(defined as a 32-bit architecture) has a peak architectural 	
   	performance of 100 MB/sec.  Futurebus+ options connected to a 
	single, dedicated TURBOchannel can have access to the entire 
	100 MB/sec bandwidth of that TURBOchannel.  Because Futurebus+ 
	is being defined as a general purpose bus, all options connected 
	to a single Futurebus+ will share its bandwidth.



4.  COMPETITIVE POSITIONING

    Q:  How does TURBOchannel compare with Sun's SBus?

    A:	SBus has a peak DMA equivalent to TURBOchannel (100 MB/sec)
        but much lower proven performance (27 MB/sec vs. 93 MB/sec).
        
	SBus boards are slightly smaller than TURBOchannel boards 
	(123 square cm vs. 168 square cm).  Although not a constraint of 
	SBus itself, SPARCstation enclosures have much less head-room 
	in them than the DS5000/200.  Some options, therefore, must take 
	up more slots on SPARCstations than DECstations.  TURBOchannel 
	options in the DECstation 5000/200 can be double surface-mounted 
        and can accommodate daughter cards, which minimizes the number 
	of slots required by a particular option.

	Both SBus and TURBOchannel are open buses and the designs are 
	licensable to anyone.  Sun charges $300 for their technical 
	documentation kit; Digital charges $65 to cover our costs, and 
	includes additional documents such as 1:1 mechanical drawings 
	and system parameters.

	Technical support programs are offered by Sun (Catalyst) and 
	Digital (TRI/ADD Program).  Feedback from board vendors is that 
	very little support, and of poor quality, has actually been 
	provided under Sun's program.  
	These same vendors had a much higher expectation of support from 
	Digital and were very pleased with our description of the range 
	of TRI/ADD Program services and Digital's commitment to support.
                 

    Q:  How does TURBOchannel compare with IBM's MCA (Micro Channel 
	Architecture)?

    A:  IBM's "new" MCA, announced on February 15th, has a peak archi-
	tectural DMA of 40 MB/sec, with a proven DMA of ~13 MB/sec 
	(confirmed by MCA board vendors).  This is much less than 
	TURBOchannel's peak architectural DMA of 100 MB/sec and achieved 
	DMA of 93 MB/sec.
        
        The RS/6000 Model 320 (desktop workstation) has fewer available 
	slots than the DECstation 5000/200, even though the RS/6000/320 
	actually has 4 MCA slots compared with 3 TURBOchannel slots on 
	the DECstation 5000/200.  Since the DS5000/200 includes integral 
	Ethernet and SCSI, and these options on the RS/6000 Model 320 
	consume 1 slot apiece, the DS5000/200 effectively has 3 available 
	slots to the RS/6000/320's 2.  The RS/6000/320 is IBM's only
        desktop system in their RS/6000 family of RISC machines.

	MCA's power per slot is 12.6 watts, compared with 26 watts per 
	slot for TURBOchannel.

        MCA is licensable to system vendors; however, the licensing 
	policies for system vendors are very restrictive and expensive: 
	a system vendor must pay very high royalties on future MCA-system 
	sales as well as a tax on past systems that used the AT-bus.  
	TURBOchannel is open to system vendors on a no-charge/royalty-free
        basis.


5.  SUPPORT FOR TURBOchannel

    Q:  What do I have to do to get technical support for TURBOchannel?

    A:  Call the TRI/ADD Program at 1-800-678-OPEN.  This program was
        developed to provide technical and marketing support for
        TURBOchannel option, system, and chip vendors.  It is open
        to anyone and is staffed to provide comprehensive hardware 
        and software technical support, including online communication
        with other vendors involved in designing TURBOchannel products.
	To help us help you, you just need to fill out the short
	program registration form (1/2-page) included with the TRI/ADD
	Program information package we sent you.  In addition to 
	TURBOchannel support, the TRI/ADD Program provides technical 
	support for VME and SCSI option development.


    Q:  What documentation does Digital provide to option and system
        designers?

    A:  A free TURBOchannel technical data sheet is available immediately
        after announcement by calling 1-800-678-OPEN.  It includes
        excerpts from the TURBOchannel Hardware Specification to
        assist in an initial assessment of TURBOchannel.  It is
        provided as part of the TRI/ADD Program's information package
        which also includes: TURBOchannel Overview, description of
        the TRI/ADD Program, and a combined TURBOchannel license and
        TRI/ADD Program registration form.

        The TURBOchannel Developers Kit is one kit targeted at both
        option and system designers, with both hardware and software
        technical information.  The initial kit, available in June
        for $65, includes:

        -  hardware specification (under 20 pages)
        -  system parameters
        -  information on writing TURBOchannel device drivers
        -  option mechanical drawings (7 C-sized drawings)

        Additional documentation will be phased into the kit over time:
        option and system firmware specifications, option and system
        designers' guides, system mechanical drawings. As a point of 
        reference, Sun charges $300 for their developers kit which does 
        not include actual-size mechanical drawings and is targeted 
        just at option vendors.  No technical documentation was widely 
        available from Sun until 5 months after announcement of an open 
        SBus in April 1989.


    Q:  What other support does Digital provide?

    A:  The TRI/ADD Program also provides marketing support, an equip-
        ment program, an online technical conference for vendors, and 
	assistance in investigating a possible business relationship with 
	Digital whereby Digital could service a vendor's TURBOchannel 
	products.  


+---+      +------------------------------------------------------+
| 4 |      | Ultrix is Digital's fastest growing software product |
+---+      +------------------------------------------------------+


                           ULTRIX FINANCE OVERVIEW
                                APRIL 1990

	* The ULTRIX S/W business will be a $100 million business
	  by the end of FY'90.  This is close to a doubling of FY'89's
	  NOR of $54 million and includes revenue from license sales, 
	  updates, support, service, and education. If the UNIX market is
          growing at 30-40%, we're gaining, not losing, market share.

	* ULTRIX is the fastest growing of the top 20 DEC S/W products.

	* Through Feb. of FY'90, ULTRIX License MLP is 114% ahead of
	  Feb. FY'89 YTD.  This is 15% ahead of the OSG FY'90 plan and
	  10% below the corporate plan (which was driven by the PBU
	  hardware volumes' rather optimistic RISC projections).

	* VAX based Ultrix license MLP is running 100% above the OSG plan while
	  RISC based is running 30% below the OSG plan and 50% below
	  the corporate plan.  The RISC business has been a disappointment
	  while the strength of the VAX business has been a surprise. The
          VAX based UNIX business has been relatively stable the past 6
          quarters and the RISC business is gradually improving.

	* The transition from VAX based license sales to RISC based has 
	  begun.  In Q1'FY90 30% of the business was RISC and 70% was VAX.
	  In Q2'FY90 the ratio had shifted to 50/50. By Q4'FY90 we think 
          the actuals will be 70% RISC based.

	* ULTRIX based units as a percent of total DEC unit sales have
          increased from a 9% share one year ago to a 14% share as of Q2'FY90.
          The penetration of ULTRIX on total worldwide workstation license
	  units is currently 20%. In the USA, this penetration is 26%, a
	  full 10 percentage points above the foreign penetration of 16%.

	* The investment in OSG engineering continues to grow. FY'90 spending
	  is up approximately 25% over last year's level, which was 40%
          above FY'88. The OSG organization continues to expand with the
          addition of the OZIX group last year and the VAX System V group this
          year. The total OSG headcount has doubled in the past two years and
          now numbers approximately 450.
45.13KERNEL::MOUNTFORDThu Jun 14 1990 16:10273
From:	COMICS::TREVENNOR "Ultrix Interest Mailing list:  13-Jun-1990 1911" 13-JUN-1990 19:15:07.31
To:	@DIST:ULTRIXERS
CC:	TREVENNOR
Subj:	


UK Ultrix Support Group: 
Mailshot #22

4 Items in this mailshot.

+---+ +---------------------------------+
| 1 | | 1989 Figures on the UNIX market |
+---+ +---------------------------------+


LEADING UNIX SYSTEMS MANUFACTURERS - 1989
($M VALUE OF SHIPMENTS)

		1989		SHARE
		----		-----
H-P		  1820		 13%
SUN		  1810		 13%
DIGITAL		  1060		  8%
CRAY		    700		  5%
AT&T		    650		  5%
IBM		    640		  5%
OTHERS		  7000		 51%

TOTAL		$13680		100%

LEADING PC/WS SYSTEMS MANUFACTURERS - 1989
($M VALUE OF SHIPMENTS)

		1989		SHARE
		----		-----
SUN		 1780		 29%
HP		 1470		 24%
INTERGRAPH	  380		  6%
IBM		  300		  5%
DIGITAL		  290		  5%
OTHER		  1940		 31%

TOTAL		$6160		100%


LEADING UNIX SMALL SCALE SYSTEMS MANUFACTURERS - 1989
($10K TO $100K, 2 TO 16 USERS)
($M VALUE OF SHIPMENTS)

		1989		SHARE
		----		-----
DIGITAL		   370		 10%
AT&T		   320		  8%
NCR		   310		  8%
IBM		   230		  6%
UNISYS		   230		  6%
OTHER		 2390		 62%

TOTAL		$3850		100%

LEADING UNIX MEDIUM SCALE SYSTEMS MANUFACTURERS - 1989
($100K TO $1M, 17 TO 128 USERS)
($M VALUE OF SHIPMENTS)

		1989		SHARE
		----		-----
DIGITAL		  400		  17%
HP		  270		  11%
AT&T		  200		   8%
SEQUENT		  140		   6%
CONVEX		  120		   5%
PYRAMID		  100		   4%
OTHER		1180		  49%

TOTAL		$2410		100%


LEADING UNIX LARGE SCALE SYSTEMS MANUFACTURERS - 1989
(OVER $1M, OVER 128 USERS)
($M VALUE OF SHIPMENTS)

		1989		SHARE
		----		-----
CRAY		  700		  56%
AMDAHL		  160		  13%
IBM		  100		   8%
OTHER		  300		  24%

TOTAL		$1260		100%

SOURCE OF ALL DATA ABOVE:  IDC, APRIL 1990



+---+ +-----------------------------------------------+
| 2 | | Part Numbers for Ordering ULTRIX Support Kits |
+---+ +-----------------------------------------------+

Base Operating System
---------------------

V3.1 Support Kit part numbers:

	media and documentation:  BR-VEYAB-HW

	documentation only:       BR-VEYAB-3Z



V3.1B Support Kit part number:    BR-VEYAK-25



V3.1C Support Kit part number:    BR-VYVAM-25



ULTRIX V4.0 Support Kit:  Same part number as for the two V3.1 kits. These
                          are not available until V4.0 FCS. I suspect that
                          FCS will not be until early July in the USA; it is
                          currently scheduled for 20-JUN-90.





Workstations
------------



UWS V2.2  Support Kit part numbers:

	VAX media and documentation:  BR-OJQAA-HW

	RISC media and documentation: BR-VV1AA-HW


	VS3520:                       BR-OJQAF-HW

	VS3520 Update TK50:           BR-OJQAG-HW



ULTRIX/WS V4.0 Support Kit: Same part number as for the two V2.2 VAX and RISC
                            kits. These are not available until V4.0 FCS.








Electronic V4.0 Kits: V4.0 kits are available for copying from the network.
                      Currently you can only get field test 2 software and
                      documentation. The FCS software and documentation will
                      also be made available this way.

		     To get information on obtaining the electronic kits
                     send mail to:

				FTCODE::FTREGISTRAR
				SUBJECT: HELP

		    You will be sent an electronic application form to be
                    filled out and returned. You will then receive a reply
                    giving you information on how and where to get the kit.



+---+ +--------------------------------------------------+
| 3 | | Programming in C from ULTRIX Course in July 1990 |
+---+ +--------------------------------------------------+

***********************************************************
***                                                     ***
***   >>>>>>>>>>>>>>>>> ANNOUNCING <<<<<<<<<<<<<<<<<<	***
***                                                     ***
***   From a screen play by Roy Schweiker 		***
***                                                     ***
***   Based on an original idea by Ken Thompson and 	***
***				 Dennis Ritchie		***
***							***
***   >>>>>> Utilising ULTRIX Features From C <<<<<<	***
***							***
***********************************************************


Prerequisites:	1) A knowledge of ULTRIX or UNIX(tm) at the utilities
--------------	   and command level (with subsequent practical 
		   experience)

		2) Proficiency in the "C" programming language


Target audience:1) (System and application) programmers wishing 
----------------   to utilise ULTRIX system calls within (C) programs

		2) System designers and system managers wishing to 
                   gain insight into the features of ULTRIX

		3) Those planning to attend an "ULTRIX
		   Internals" course

		This course will be open to customers and employees
		---------------------------------------------------	


Description:	This course covers the use of ULTRIX system calls
------------	and data structures for purposes such as: File and
		Terminal IO; Controlling and creating processes;
		and interprocess communication using a range 
		of techniques.

		A fuller topic list can be found at the end 
		of this note


Location:	Shire Hall, Reading, UK
  & date	23-27 July 1990

Bookings:	DTN 830-8020
    	(EY 2286 - Using ULTRIX Features From C)


For further information on course content, contact

	Michael Kerrisk  	DTN: 830-4079
				Mail: DOOZER::KERRISK

------------------------------------------------------------------------

Course Outline
==============

1. Compiling C programs
	The preprocessor, compiler, assembler and linker

2. Program development tools
        Object libraries, make, sccs

3. Debugging
        With the dbx debugger  

4. Allocating Memory

5. Controlling and inspecting the process environment
        UIDs and GIDs, Process Ids, Process Groups

6. Ordinary Files
        System calls for file IO, file and record locking

7. Creating and Controlling Processes
	fork(), exec(), wait() and exit()  (see the short example after this outline)

8. Signals

9. Terminal IO
	setting terminal characteristics; non-blocking and signal 
        driven IO

10. Pipes

11. Sockets
    	In the UNIX and INTERNET domain: stream and datagram sockets

12. System V Interprocess Communication Facilities
        Shared Memory, Semaphores, Message Queues
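
A short, self-contained example of topic 7 (standard UNIX calls, so it
builds unchanged on ULTRIX or any other UNIX; /bin/date is simply a
convenient command to run for the illustration):

    /* Minimal illustration of fork(), exec*(), wait() and exit(). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid;
        int status;

        pid = fork();                   /* create a child process         */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }

        if (pid == 0) {                 /* child: overlay image with date */
            execl("/bin/date", "date", (char *)0);
            perror("execl");            /* reached only if execl fails    */
            _exit(127);
        }

        wait(&status);                  /* parent: collect child's status */
        printf("child %ld exited, status %#x\n", (long)pid,
               (unsigned)status);
        return 0;
    }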



45.14KERNEL::MOUNTFORDThu Jun 28 1990 00:2171
UK Ultrix Support Group: 
Mailshot #23

1 Item in this mailshot.



+---+      +--------------------------------------------+
| 1 |      | Geoff Shingles on the need for UNIX growth |
+---+      +--------------------------------------------+

<Forwards deleted>

From:	NAME: GEOFF SHINGLES                
	FUNC: BOARD OF MANAGEMENT             
	TEL: (7)830-3238                      <SHINGLES AT A1_CHEFS AT SUBURB AT REO>
Date:	25-Jun-1990
Posted-date: 26-Jun-1990
Precedence: 1
Subject: UNIX
To:	See Below
CC:	See Below



** PLEASE COULD YOU PASS A COPY OF THIS NOTE TO ALL SALES, EIS AND **                        
                       CUSTOMER SERVICES STAFF


I just wanted to take a couple of minutes of your time to tell you how 
I feel about UNIX as I know it is a topic of major interest to 
everybody.  It is today's biggest growth area!

I think that the Corporation has now really got its act together. It's 
put one of our best people, Dom Lacava, in the driver's seat.  He is 
enthusiastic and active.  He came and chatted to us at the European 
Management Team meeting in Geneva and I was very impressed.  He says 
that our present plans are not nearly aggressive enough and won't get 
us there.  He believes that we must have at least a third of our 
business coming out of UNIX within the next two or three years and he 
is hell bent on doing that.  This means doubling our market share by 
the end of FY91.

This is being supported in Europe by putting one of the best 
Marketeers we have in charge of the UNIX programme.  This is Barry Nay 
and his new role has just been announced.  

I have charged Tony with the responsibility for getting one of the 
very best people that it is possible to get to run the UNIX efforts 
for us here in the UK.  Incidentally, UNIX was put at the top of our 
list of Marketing Programmes for FY91, so its funding and focus are 
assured by having it at the head of the Marketing priority and 
investment list.  We will invest in major retraining 
efforts and do all we need to protect our existing UNIX skills.

I am committed.  We have started to change.  We have started to get 
some very good orders coming in.  We have a long long way to go but we 
have good products on which to build our efforts.  During the next 
twelve months every one of us has to put in the right amount of effort 
to make sure that everybody understands inside and outside Digital 
that we mean business.

This is our number one Marketing issue!  That's what our customers 
will see at DECworld.

Regards,

To Distribution List:

<deleted>
45.15KERNEL::MOUNTFORDWed Jul 25 1990 09:46116
Latest from Alan Trevennor:
    


      osf   osf   unix     posixposix  xopenosf   osf  osf        osf
      osf   osf   svid     posixposix  osfwopen   osf  unix      unix
      osf   osf   unix        svid     osf  osf   osf   unix    unix
      osf   osf   svid        unix     osfosf     osf    posixposix
      osfosfosf   unixunix    svid     osf  osf   osf   svid    svid
      xopenunix   xopenosf    unix     osf  unix  osf  osf        osf
      xopenunix   xopenosf    unix     osf  unix  osf  osf        osf



UK Ultrix Support Group: 
Mailshot #24

1 Item in this mailshot.



+---+      		+----------------------------+
| 1 |      		| Ultrix V4 summary and info |
+---+      		+----------------------------+

From:	NAME: Juliet Oshea                  
	FUNC: Product & Technology            
	TEL: (7)833 3786                      <OSHEA_J AT A1_KERNEL @THESUN @UVO>
To:	See Below


ULTRIX-32 V4.0 is a functional enhancement and replacement for ULTRIX 
V3.1.  This information describes the new features of ULTRIX V4.0.  
The support for ULTRIX V4.0 remains in the ULTRIX SSD Unit.   If you 
require any further information regarding this new version, please 
contact Juliet O'Shea (833-3786), Alan Trevennor (833-3286) or Alwyn 
Bradley (833-3922). 
----------------------------------------------------------------------

Digital    ******* Company  Confidential ******   Product & Technology

                          ** New Version **


ULTRIX-32 V4.0	     	     		   Ref:UKSP/jo/20-Jul-90 Rev A

  FCS:		   3-Aug-90

  Description:	   V4.0 is an enhancement release of the ULTRIX-32 
                   (UNIX-based) operating system.  V4.0 functionality 
                   includes SMP, LMF, VM changes for memory 
                   utilisation, C2 security, NCS RPC, and support for 
                   1159.  It also includes NFS performance 
                   enhancements; support for 96 RA devices; MIT Athena 
                   Network Services -- Kerberos, Hesiod, Ntp; SNMP 
                   Network Management Agent; Update Installation 
                   option; Multi-architecture DMS and RIS, and 
                   Consolidated Distribution.  Languages, Tools & 
                   Utilities will be enhanced for I18N compatibility, 
                   and performance.  For support, there will be a 
                   multi-architecture crash analysis tool, DBBR for 
                   SCSI, and the updated Kernel Messages Manual will 
                   ship with the customer documentation set.  Hardware 
                   supported in this release includes:  Calypso/Rigel, 
                   ISIS, Mipsfair, 3MAX/R3000, and Teammate II.  
                   Supported mass storage devices are: ESE20, TA90, 
                   TQ70L, HSX, XSA, and RZ56.  New communication and 
                   printer support includes LPS40 and LN0*R postscript 
                   and software features.

  Impact:	   High
  Class: 	   ULTRIX, UNIX

  Volume:          For UK Sales assume 1/3 EUR.

              FCS                                         
              QTR    Q+1     Q+2    Q+3   Q+4    Q+5      
    EUR.       76     76     118    118   118    118


  Logist/ESDC:	   WMOD206

  Support Flow:	   CSC -> ULTRIX SSD ->
  		   	     => Corporate CSSE

  Service:	   Standard Offerings Expected
  PTG PFE:	   Alwyn Bradley, Alan Trevennor
  PTG Mgr:	   Mike Smith 
  Starter Kits:	   Planned
  Part Numbers:	   Q*-VEY**-**.   SPD # 26.40.XX

--------------- L o c a l    I n f o r m a t i o n ----------------

  CSSE:		   Linda Trowbridge

  Comments:	   ULTRIX V4.0 is the largest, most functionally rich 
                   ULTRIX release developed to date.  More CPUs, 
                   peripherals, standards, and features are supported 
                   in Version 4.0 than ever before.  It is the result 
                   of 24 months of planning, engineering and testing. 
                   
  		   When V4.0 was announced back in April,  the Press 
                   was unanimous in applauding the software efforts 
                   that Digital (OSG) undertook and the results of our 
                   efforts. The announcement, coupled with the new 
                   RISC product line announcement, caused the leading 
                   analysts to change their Digital stock ratings from 
                   hold to buy. This is something to be proud of; 
                   their change in position was based primarily on new 
                   ULTRIX and RISC products.  Version 4.0 coupled with 
                   our RISC H/W platform will keep Digital as a leader 
                   in the open systems marketplace for the near 
                   future.  The culmination of our efforts is already 
                   paying off; sales are up considerably and more ULTRIX 
		   orders have been booked than ever before.

45.16Olsen on VMS on RISCCOMICS::TREVENNORA child of initTue Aug 14 1990 19:0067
    
    
    
    
    
    
    
    
    
    
    
    
    
    
    From the front page of the latest DEC Computing - mostly accurate.
    
    Alan T.
    
    
      
             Olsen confirms plans to put VMS on Risc


DEC president Ken Olsen laid out plans for the next generation of 
VAX's in an interview given at the DECworld show in Boston last week.  
He claims a Risc version of the VMS machines and improved operating 
system and compiler technology will enable DEC to maintain its annual 
price/performance growth rates for years to come.

Olsen said that even the VAX 9000 "is very close to being superseded".  
DEC is clearly looking at a VAX architecture based on its own Risc 
technology, and at an open - if not portable - version of VMS.  The 
machine, dubbed the E-VAX by analysts, will be based on multiprocessing 
Risc CPUs and will appear to the user just like existing VAX machines, 
but will offer similar price/performance to a Unix workstation.

But Olsen's commitment to Unix is, as ever, ambiguous and he now seems 
to be thinking in terms of opening up VMS rather than gunning 
wholeheartedly for Unix.  "The industry focuses too much on operating 
system kernels," he said.  "The kernel, in one sense, you could say is 
unimportant.  If you follow the OSF standards for any operating system 
you should have transportable software."

Olsen claims DEC has rethought its commitment to Unix as a commercial 
operating system, and that while it is "beautiful" for its traditional 
scientific purposes, it will never be as robust as VMS.  "CPUs run 
faster with Unix, but we believe VMS will catch up next year."

Olsen is also sceptical of the trend towards ever larger desktop 
workstations, and remains respectful of the traditional terminal.  "I 
think mainframes and servers will have a lot fewer mips because 
bandwidth problems become so expensive to solve," he commented.  "Power 
is also in disk speed, number of mounted disks and bandwidth as well 
as mips.  I think most desktop computers will be simple dumb 
terminals, maybe with windows."

With client/server architectures being DEC's theme for the show, Olsen 
said:  "The minicomputer is dead.  We have servers now.  Actually, 
they do the same thing.  We've always had servers."  But he admitted 
that the VAX 4000 is the first DEC machine specifically designed as a 
server.

Olsen also confirmed that DEC would offer PCSA software for the 
DECstations this autumn.

    
45.17KERNEL::MOUNTFORDFri Dec 14 1990 11:1271
    More goodies from Alan.........
    
    
    
    From:	COMICS::TREVENNOR "Ultrix Interest Mailing list:  13-Dec-1990 1914" 13-DEC-1990 19:18:39.84
To:	@DIST:ULTRIXERS
CC:	TREVENNOR
Subj:	




      osf   osf   unix     posixposix  xopenosf   osf  osf        osf
      osf   osf   svid     posixposix  osfwopen   osf  unix      unix
      osf   osf   unix        svid     osf  osf   osf   unix    unix
      osf   osf   svid        unix     osfosf     osf    posixposix
      osfosfosf   unixunix    svid     osf  osf   osf   svid    svid
      xopenunix   xopenosf    unix     osf  unix  osf  osf        osf
      xopenunix   xopenosf    unix     osf  unix  osf  osf        osf



UK Ultrix Support Group: 
Mailshot #25

1 Item in this mailshot.


+---+      		+-------------------------------------------+
| 1 |      		| Dom Lacava (Customer Services VP) on UNIX |
+---+      		+-------------------------------------------+



 Digital - UNIX "earthquake" shakes industry
	{Livewire, 27-Nov-90}
   At the UniForum exhibition held recently in London, Digital's commitment to
 UNIX was underscored in Dom LaCava's keynote address:

                      UNIX "earthquake" shakes industry 

   Computer companies will need to undertake radical restructuring in order to
 satisfy the market demand for open computing. In a keynote speech delivered at
 UniForum's Open System's '90 conference in London, Dom Lacava, vice president,
 UNIX-based Systems and Software, explained the ways in which vendors such as
 Digital are adapting to the changing demands of the industry. He stated that
 in order to offer truly open systems, computer vendors are having to undergo
 significant change, which will undoubtedly affect corporate culture.
   "Today UNIX is driving one of the most important changes in computing," 
 commented Dom. "It's rumbling through the computing industry like an
 earthquake. Users want UNIX and more importantly, open systems. Vendors can
 see it as the tremendous opportunity it is, or they can deny that it's
 happening. Part of seizing the opportunity is adapting the businesses to the
 changing demands."
   According to Dom, the immediate impact of open systems on vendors is the 
 need to create an organizational structure which will keep pace with the 
 rapidly changing market. For Digital this meant combining all the individual
 parts of the company which were working on RISC and UNIX into one worldwide
 organization, investing heavily in technical training and support, and
 building UNIX resource centers in key cities to help customers develop and
 port applications.
   "Internationally, nationally and within our industry, the old ways are being 
 swept away faster than we ever imagined possible," he added. "Today, the 
 computer industry is a completely new game. Many of the rules which applied 
 only a few short years ago no longer apply, and every vendor is struggling 
 with the changes this has brought about. The vendors who will thrive are the
 ones who, like Digital, can recognize the need to change the way they work.
 They will apply the resources it takes to capitalize on the opportunities
 offered by the promise, and move to open systems."
 ---

45.18KERNEL::MOUNTFORDFri May 03 1991 10:25130
From:	COMICS::TREVENNOR "Ultrix Interest Mailing list:  02-May-1991 1255"  2-MAY-1991 13:00:19.63
To:	@DIST:ULTRIXERS
CC:	TREVENNOR
Subj:	




      osf   osf   unix     posixposix  xopenosf   osf  osf        osf
      osf   osf   svid     posixposix  osfwopen   osf  unix      unix
      osf   osf   unix        svid     osf  osf   osf   unix    unix
      osf   osf   svid        unix     osfosf     osf    posixposix
      osfosfosf   unixunix    svid     osf  osf   osf   svid    svid
      xopenunix   xopenosf    unix     osf  unix  osf  osf        osf
      xopenunix   xopenosf    unix     osf  unix  osf  osf        osf



UK Ultrix Support Group: 
Mailshot #26

1 Item in this mailshot.


+---+      		+-----------------------+
| 1 |      		| Training opportunity  |
+---+      		+-----------------------+


    Please find attached the course description for 'Technical Introduction 
    to UNIX and Open Systems Environments' (EY-G242), as announced in UCA 
    171 (March).
    
    There are at least 10 slots available for the course which starts on 
    May 7th in Reading.
    
    The course will not be run again in the South UK before Q2FY92.
    
    The course aims to give a basic technical understanding and positioning 
    of Digital's capabilities in the Open Systems environment. 
    
    It is appropriate to Sales, EIS and Customer Services, so please pass 
    this on to your colleagues.



                         COURSE DESCRIPTION
	                 ******************

COURSE TITLE: 	Technical Introduction to the UNIX and Open Systems 
                Environment
                
COURSE OBJECTIVES:
    
   The main focus will be on the practical business of responding to a
   UNIX ITT and the knowledge necessary to position Digital's Unix
   strategy, strengths and offerings at customer presentations. 
   
   To provide the "high level" technical awareness necessary to enable 
   Unix ITTs and proposals to be understood and processed. Candidates 
   completing the course should be able to handle a significant 
   percentage of the UNIX based work and planning locally. The aim of the 
   course is NOT to provide an in depth technical training in the subject 
   areas.  


   The course will contain presentations by Specialists and Consultants
   with everyday experience of working in the UNIX environment.


PREREQUISITE

   The candidates will be EIS Technical and Account specialists tasked 
   with supporting Sales on Digital's Unix strategy, positioning and 
   offerings.  Unix awareness would be an advantage but not essential. 
   Sales specialists may also wish to attend. 
   


COURSE DESCRIPTION

   UNIX (Operating system and Networking)
     - Customers view of the UNIX world 
          - their typical positions and misconceptions,
          - how to position the Digital difference.
     - Ultrix Version 4 training.
     - Unix Networking & Communications. 
     - ITT experience
     - Case studies
          - example configurations.
          - SUK Common Unix Environment using the version 4 
            features of Bind and Hesiod.
       
   OFFICE
     - Uniplex.
   
   PC INTEGRATION 
     - What is an integrated PC?
     - The Main Contenders in the market.
     - LAN Manager - What is it and why is it important.
     - Digital's Solutions.
     - Graphical User Interfaces.
     - Where to go for in-depth information and knowledge.
          
   CASE
     - Cohesion (the UNIX fit).
     - UNIX CASE products (to include FUSE and VUIT).
     
   DATABASE
     - Overview of Database architecture and terms.
     - ULTRIX/SQL V1.0 and V2.0.
     - What you buy from Ingres.
     - Distributed Access.

   RISC PRODUCTS
     - Products - Workstations, Servers, Graphics Options.

   WINDOWS and GRAPHICS
     - X Windows, MOTIF, DECwindows, Competition
     - Graphics - GKS and PHIGS

   OPEN SYSTEMS
     - OSF1, Positioning and Major Differences
     - Generic Open Systems and Standards
     - Digital Offerings

   UNIX ITT EXPERIENCE and CASE STUDIES

   INTRUDER OVERVIEW and COMPETITION

45.19Colourbook crashes at UC of Wales.KERNEL::JAMESAlan James CSC BasingstokeThu Jul 18 1991 11:3463
	Below is info on how to recognise the Colourbook software problem on
	the University College of Wales 5800. The present version of Colourbook
    	is field test software and is known to cause crashes.

REM> t

Entering TRANS mode [protocol off] (^A to exit)...

aberdb 09:23:21 # Ov                 !!!!! ALREADY LOGGED IN

aberdb 09:23:40 #  cd /usr/adm/crash         !!!! GO TO CRASH DIRECTORY

aberdb 09:26:07 # ls -l vm*                  !!!!!   LOOK AT CRASH FILES
-rw-r--r--  1 root      2940123 Jul 10 16:21 vmcore.83.Z
-rw-r--r--  1 root      2887025 Jul 13 15:08 vmcore.84.Z
-rw-r--r--  1 root     134180864 Jul 17 21:00 vmcore.85   !!!!  85 IS LATEST
-rw-r--r--  1 root      1548443 Jul  6 14:27 vmunix.81.Z
-rw-r--r--  1 root      1548443 Jul 10 16:19 vmunix.83.Z
-rw-r--r--  1 root      1548443 Jul 13 15:06 vmunix.84.Z
-rw-r--r--  1 root      3227444 Jul 17 20:58 vmunix.85    !!!!

aberdb 09:26:21 # dbx -k vmunix.85 vmcore.85    !!!! GO TO DBX
dbx version 2.0
Type 'help' for help.
reading symbolic information ...
[using memory image in vmcore.85]
(dbx) t                             !!!!!    GET STACK TRACE
>  0 boot(paniced = 0, arghowto = 0) ["../../machine/mips/machdep.c":544, 0x8011be04]
   1 subr_prf.panic(s = 0x80182e7f = "trap") ["../../sys/subr_prf.c":1165, 0x8009d810]
   2 kn5800_trap_error(ep = 0x801190e4, code = 2148639496, sr = 2148635012, cause = 805306376, signo
 = 0xffffdc1c) ["../../machine/mips/kn5800.c":1655, 0x80118ea0]
   3 trap.trap(ep = 0xffffdc40, code = 8, sr = 63508, cause = 805306376) ["../../machine/mips/trap.c
":447, 0x80124e14]
   4 VEC_trap() ["../../machine/mips/locore.s":781, 0x8011a464]
   5 .block580 ["../../netiso/net_uif.c":900, 0x80105e80]
   6 net_read(dev = -2144354320, uio = 0xffffdec8) ["../../netiso/net_uif.c":900, 0x80105e80]
   7 isoread(dev = 1, uio = 0xffffdec8) ["../../netiso/iso_uif.c":209, 0x800fb0e4]
     ^^^^^^^
	!!!!! ISOREAD ONLY USED BY COLOURBOOK SOFTWARE AT PRESENT.

   8 spec_rwgp(gp = 0x802d5d0c, uiop = 0xffffdec8, rw = UIO_READ, ioflag = 0, cred = 0xc96fd800) [".
./../fs/specfs/spec_vnodeops.c":289, 0x80081208]
   9 rwgp(gp = 0x802d5cd0, uio = 0x8013c030, rw = UIO_READ, ioflag = 0, cred = 0xc96fd800) ["../../f
s/gfs/gfs_gnodeops.c":267, 0x8003f1d4]
  10 gno_rw(fp = 0x802e245c, rw = -2146610140, uio = 0xf801) ["../../fs/gfs/gfs_gnodeops.c":204, 0x8
003efe0]
  11 rwuio(uio = 0xffffdec8, rw = UIO_READ) ["../../sys/sys_generic.c":290, 0x8009e974]
  12 read() ["../../sys/sys_generic.c":108, 0x8009e4c4]
More (n if no)? 
  13 syscall(ep = 0xffffdf5c, code = 3, sr = 1113916, cause = 32) ["../../machine/mips/trap.c":1070,
 0x801261f8]
  14 VEC_syscall() ["../../machine/mips/locore.s":836, 0x8011a4ec]
  15 entry.start() [0x4154d4]
(dbx) quit

aberdb 09:29:29 #