T.R | Title | User | Personal Name | Date | Lines |
---|
483.1 | ATMWorks 351 (Meteor) Evaluation Plan | EVMS::BOULAY | | Mon Mar 10 1997 09:52 | 221 |
| QTV Project Evaluation for OpenVMS ATMworks 351 (Meteor) Port Driver
I. General Information
Name of project: OpenVMS ATMworks 351 (Meteor) Port Driver
Author of evaluation: Kassandra Boulay
Date of this evaluation: 2/04/97
Engineering contact: Rod Payne
Purpose of Project: Follow-on product to the ATMworks 350 port driver
II. Development Status
Which Life of a Project (LOP) product development stage is the project
currently in? Put an X preceding the development stage.
Project Phase Status (%complete/not planned/n.a.)
------------- ------------------------------------
____ Problem Statement complete
____ Investigation complete
____ Draft Plan & Design complete
____ Detailed Project Planning complete
____ Detailed Design/Functional Spec not planned
_x__ Implementation 70% complete
____ Validation Testing
____ Assess Doneness
Which project documents are currently available? ("D"=Draft, "F"=Final)
State if the doc isn't applicable, or the info is in another plan.
Document Location/Comments
-------- -----------------
_F__ Investigation Report        piton.zko.dec.com/~lan/ir-meteor.html
____ Product Requirements        n/a
_F__ Project Plan                piton.zko.dec.com/~lan/pp-meteor.html
____ Functional Specification n/a
____ Design Specification n/a
_F__ Development Plan Page 4 of project plan
_F__ Defect Containment Plan Page 6 of project plan
_F__ Unit Test Plans Part 5 of project plan
____ Test/Validation Plan
____ Integration Test Plan
____ Final Qualification Plan
____ Doneness Criteria
____ Others (specify)
What types of development effort does this project involve?
(estimate the % of the total project effort that each type comprises)
95% Code design & development within OSSG
5% OSSG porting of code from an external source
____ OSSG porting of code from an internal source
____ Subcontracted development or porting by external vendor
____ OSSG modification & integration of a third party product
____ OSSG resale of a third party product
____ Proactive maintenance
____ Other (describe)
III: Defect Containment Status
Activity Estimated/Planned Completed to date
-------- ----------------- -----------------
Defect Identification Goals
%code inspected: 3/14/97
%code unit testing: 4/07/97
%code integration tested 4/08/97 (checked in)
%code(features?) validated 4/08/97
Defect data:
#Defects found in Design reviews/inspections 0
#Defects found in code reviews/inspections
#defects found in Unit Test
#defects found in Integration Test
#defects found in Validation (includes Field Test)
Total number of defects found pre-release:
How were estimates derived?
1 defect/100 lines of code; 2500 lines of ATM driver code, NIPG code
not included (external); 25 defects estimated
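The arithmetic behind that estimate, written out as a small DCL sketch
(hypothetical; the 1 defect per 100 lines figure is the rule of thumb
quoted above, and the line count covers only the new ATM driver code):

$! Worked example of the pre-release defect estimate (illustrative only).
$ lines_of_code = 2500                  ! new ATM driver code, NIPG excluded
$ defects_per_100_lines = 1             ! rule-of-thumb defect density
$ estimate = (lines_of_code * defects_per_100_lines) / 100
$ WRITE SYS$OUTPUT "Estimated pre-release defects: ''estimate'"
$! Prints: Estimated pre-release defects: 25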
How were actual values measured?
Describe any other methods used to remove defects prior to integration
(i.e. design reviews, etc)
IV. Integration & Testing Status
What percentage of code for this project has been integrated with other
system components?
None. The code is self-contained.
How was/will readiness for integration with other system components
be determined?
Passes driver and other LAN tests
List & describe all tests currently available for testing this project.
Stress, XXDriver, LANT
Are any existing tests run on a regular basis against the project's
code base? By whom?
Not by the development group. QTV runs regression tests on the
entire Raven code base for each baselevel announced. Part of the
regression test suite, the comm_driver tests (soon to be LANT),
cover testing for LAN common and driver code.
When will the code integration for this project be completed?
4/08/97
V. Configurations
Indicate which platforms & operating systems this project will
support by placing VERSION NUMBER(s) in the appropriate boxes.
----------------------------------------------------------------
|        |    INTEL    ||               DIGITAL                 |
|        |-------------||--------------------------------------|
|        | NT  |  W95  || Alpha NT |OVMS Alpha|OVMS VAX| UNIX  |
|--------|-----|-------||----------|----------|--------|-------|
| Client |     |       ||          |          |        |       |
|        |     |       ||          |          |        |       |
|--------|-----|-------||----------|----------|--------|-------|
| Server |     |       ||          |          |        |       |
|        |     |       ||          | V7.1-1H1 |        |       |
----------------------------------------------------------------
Describe any concerns, issues, restrictions or testing considerations that
apply to specific
hardware platforms: PCI-based Alpha systems
firmware revisions: none
network transports: none
mixed architecture clusters: Latent support for clusters
mixed version clusters:      Latent support for clusters
software products &/or versions: V7.1-1H1 or higher
VI. QTV Requirements for the Project
Check all areas where QTV's involvement could significantly impact the
quality and timely delivery of this project, and estimate dates when
resources should be assigned.
Activity Date Resource Needed?
-------- ---------------------
____ Defect Containment Consulting
____ Size Estimation
____ Moderating Inspections
____ Participation in Code Inspections
____ Use of SLIM control (consulting or implementation)
____ Problem Management
____ Metrics &/or Doneness Criteria
_x__ Pre-integration Defect Identification TBD
_x__ Post-Integration Defect Identification 4/08/97
_x__ Final Qualification TBD
____ In-Use Testing only
Describe all testing activities that you believe are required of QTV
for this project (e.g. test development, installation testing,
configuration matrix testing, etc.).
Installation testing, Data integrity testing (i.e. CTM, stress,
cluster testing, etc.)
What skills are required of QTV to successfully perform the testing
described above?
Basic understanding of LAN products (i.e. installation, setup, & use)
as well as LAN test tools
Describe all known hardware requirements of this project that have not
already been specified above.
2 Meteor adapters are the minimum needed for testing and use.
VII. Dependencies & Risks
Identify & describe any dependencies this product has on any other
project development efforts (DIGITAL or 3rd party), and state who is
managing each dependency.
None, it is self-contained.
Explain all known risks, issues and concerns with the quality of this
product or its integration for delivery by OSSG to DIGITAL customers.
Explain what is being done to mitigate the risk and/or what
contingency plans are in place should the risk unfold. (If nothing,
then state that.)
There may be a problem with hardware availability of the Meteor
adapter for testing. The LAN group currently has two and is
having a hard time obtaining more. QTV will need at least two
Meteor adapters to complete the validation testing. Nothing has
been planned as of now to mitigate this risk.
Are there concerns about impact on reliability or interoperability
with existing software, hardware &/or firmware?
None.
List specific areas which need special attention.
Attention should be focused on the performance of the driver.
List other risks such as: insufficient time, too few people, limited
exposure and/or use and hardware concerns.
Hardware availability is the only risk right now.
|
483.2 | HSZ70 Evaluation Plan | STAR::S_MARCHESANO | | Tue Mar 11 1997 09:12 | 267 |
|
QTV Project Evaluation for HSZ70
Purpose: To assess the state of this project, determining:
- Quality risks & focus areas
- estimate of QTV resources & effort
- Inputs to the Validation Plan
- to document a common understanding between QTV and the
development organization of the current state in terms
of quality, and the intended final state of the project
Deliverables: - Answers to all questions
- Copies sent to manager & release project leader
- Include as an appendix to the Project Test/Validation Plan
Process: This template is to be completed by the assigned QTV engineer
responsible for the evaluation. Information should be obtained
through face to face dialogue with the development project
leader of the project being evaluated. The purpose of this is
to allow each party to ask questions and raise concerns so
that a common understanding of the current state and desired
state of the project is achieved, and a clear understanding
of areas where QTV's efforts would most benefit the project.
Note: Please make sure all of the questions are answered. If they
are not applicable indicate that, but make sure they have been asked
and answered.
I. General Information
Name of project: HSZ70
Author of evaluation: BOB YOUNG
Date of this evaluation: 1/27/97
Engineering contact: BERT MARTENS
Purpose of Project: The HSZ70 is the UltraSCSI version of the HSZ
StorageWorks family. The HSZ70 will offer improved
performance and 6 UltraSCSI wide device ports. The
HSZ70 will include an UltraSCSI wide differential
host port.
II. Development Status
Which Life of a Project (LOP) product development stage is the project
currently in? Put an X preceding the development stage.
Project Phase Status (%complete/not planned/n.a.)
------------- ------------------------------------
____ Problem Statement
____ Investigation
____ Draft Plan & Design
____ Detailed Project Planning
____ Detailed Design/Functional Spec
____ Implementation
__X_ Validation Testing
____ Assess Doneness
Which project documents are currently available? ("D"=Draft, "F"=Final)
State if the doc isn't applicable, or the info is in another plan.
Document Location/Comments
-------- -----------------
____ Investigation Report
____ Product Requirements
____ Project Plan
____ Functional Specification
____ Design Specification
____ Development Plan
____ Defect Containment Plan
____ Unit Test Plans
____ Test/Validation Plan
____ Integration Test Plan
__X_ Final Qualification Plan
____ Doneness Criteria
____ Others (specify)
What types of development effort does this project involve?
(estimate the % of the total project effort that each type comprises)
N/A
____ Code design & development within OSSG
____ OSSG porting of code from an external source
____ OSSG porting of code from an internal source
____ Subcontracted development or porting by external vendor
____ OSSG modification & integration of a third party product
____ OSSG resale of a third party product
____ Proactive maintenance
____ Other (describe)
III: Defect Containment Status
Activity Estimated/Planned Completed to date
-------- ----------------- -----------------
Defect Identification Goals
%code inspected:
%code unit testing: N/A
%code integration tested
%code(features?) validated
Defect data:
#Defects found in Design reviews/inspections n/a
#Defects found in code reviews/inspections n/a
#defects found in Unit Test n/a
#defects found in Integration Test n/a
#defects found in Validation (includes Field Test) 2 (see note)
Total number of defects found pre-release
** NOTE: I searched the VMS QAR database for Gryphon and Zeta
to see if we had assigned any specific QARs to problems
on hszXX components and did not find any. I also talked
with some engineers who could not recall any.
I am making a guess here as to the number of problems
QTV should expect to find. Normally a device like this is
looked at as an "In-Use" test, but QTV has been asked
by engineering to target some specific functionality
testing. Based on this I estimate 2 problems to be found.
How were estimates derived?
Research in QAR databases and experience with other
similar storage units.
How were actual values measured?
Describe any other methods used to remove defects prior to integration
(i.e. design reviews, etc) Engineering Debug.
IV. Integration & Testing Status
What percentage of code for this project has been integrated with other
system components?
None
How was/will readiness for integration with other system components
be determined? This was determined by the hardware engineering group.
Multiple IFT sites.
List & describe all tests currently available for testing this project.
All disk I/O tests used by QTV.
Are any existing tests run on a regular basis against the project's
code base? By whom?
Yes. Testing is performed by all IFT sites.
When will the code integration for this project be completed?
N/A
V. Configurations
Indicate which platforms & operating systems this project will
support by placing VERSION NUMBER(s) in the appropriate boxes.
----------------------------------------------------------------
|        |    INTEL    ||               DIGITAL                 |
|        |-------------||--------------------------------------|
|        | NT  |  W95  || Alpha NT |OVMS Alpha|OVMS VAX| UNIX  |
|--------|-----|-------||----------|----------|--------|-------|
| Client |     |       ||          | v6.2-1h3 |v6.2-1h3| v3.2g |
|        |     |       ||          | v7.1     | v7.1   |  &up  |
|--------|-----|-------||----------|----------|--------|-------|
| Server |     |       ||          | same     | same   | same  |
|        |     |       ||          |          |        |       |
----------------------------------------------------------------
Describe any concerns, issues, restrictions or testing considerations that
apply to specific
hardware platforms: As of now the HSZ70 is not supported with the Tlaser
ITIOP.
firmware revisions: All SCSI adapters will receive new firmware code.
network transports: -
mixed architecture clusters:
mixed version clusters:
software products &/or versions: V7.1 testing. If V6.2 testing is required
then this decision will have to be made
by Sue Marchesano or Sue Lindeberg.
VI. QTV Requirements for the Project
Check all areas where QTV's involvement could significantly impact the
quality and timely delivery of this project, and estimate dates when
resources should be assigned.
Activity Date Resource Needed?
-------- ---------------------
__X_ Defect Containment Consulting
__X_ Size Estimation
____ Moderating Inspections
____ Participation in Code Inspections
____ Use of SLIM control (consulting or implementation)
__X_ Problem Management
__X_ Metrics &/or Doneness Criteria
__X_ Pre-integration Defect Identification
__X__ Post-Integration Defect Identification
__X__ Final Qualification
____ In-Use Testing only
Describe all testing activities that you believe are required of QTV
for this project.
- Installation
- validation testing of Hsz70 hardware
- validation testing of Hsz70 firmware
- Target testing
- Matrix testing
- qual testing
- assisting in problem resolution
What skills are required of QTV to successfully perform the testing
described above?
SCSI knowledge of testing, configuring,
and setting up HSZs.
Describe all known hardware requirements of this project that have not
already been specified above.
UltraSCSI drives for the HSZ70.
VII. Dependencies & Risks
Identify & describe any dependencies this product has on any other
project development efforts (DIGITAL or 3rd party), and state who is
managing each dependency.
This project will gate on UltraSCSI drivers developed
for the UltraSCSI controllers and on the stability of the HSZ70
and Q-logic firmware.
Explain all known risks, issues and concerns with the quality of this
product or its integration for delivery by OSSG to DIGITAL customers.
Explain what is being done to mitigate the risk and/or what
contingency plans are in place should the risk unfold. (If nothing,
then state that.)
Stability of Q-logic firmware.
Are there concerns about impact on reliability or interoperability
with existing software, hardware &/or firmware?
Each SCSI controller will be receiving firmware upgrades.
List specific areas which need special attention.
Functionality testing requested of QTV per the matrix list in
Bert Martens' qual plan.
Specific HSZ70 testing that requires specific drive
configuring within the HSZ70 (i.e., mirror sets, shadow sets, stripe
sets).
List other risks such as: insufficient time, too few people, limited
exposure and/or use and hardware concerns.
Project management to determine if QTV will test the HSZ70 specifically
with V6.2-1H3. Our plan will be with V7.1. If V6.2-1H3 is requested, then
additional manpower will be required to focus specifically on both.
|
483.3 | UltraSCSI Evaluation | STAR::S_MARCHESANO | | Tue Mar 11 1997 09:13 | 249 |
|
QTV Project Evaluation for UltraSCSI
Purpose: To assess the state of this project, determining:
- Quality risks & focus areas
- estimate of QTV resources & effort
- Inputs to the Validation Plan
- to document a common understanding between QTV and the
development organization of the current state in terms
of quality, and the intended final state of the project
Deliverables: - Answers to all questions
- Copies sent to manager & release project leader
- Include as an appendix to the Project Test/Validation Plan
Process: This template is to be completed by the assigned QTV engineer
responsible for the evaluation. Information should be obtained
through face to face dialogue with the development project
leader of the project being evaluated. The purpose of this is
to allow each party to ask questions and raise concerns so
that a common understanding of the current state and desired
state of the project is achieved, and a clear understanding
of areas where QTV's efforts would most benefit the project.
Note: Please make sure all of the questions are answered. If they
are not applicable indicate that, but make sure they have been asked
and answered.
I. General Information
Name of project: ULTRA SCSI/HSZ70
Author of evaluation: BOB YOUNG
Date of this evaluation: 1/27/97
Engineering contact: Bob Silver
Purpose of Project: Implement UltraSCSI support in the Gryphon stream.
II. Development Status
Which Life of a Project (LOP) product development stage is the project
currently in? Put an X preceding the development stage.
Project Phase Status (%complete/not planned/n.a.)
------------- ------------------------------------
____ Problem Statement
____ Investigation
____ Draft Plan & Design
____ Detailed Project Planning
____ Detailed Design/Functional Spec
____ Implementation
__X_ Validation Testing
____ Assess Doneness
Which project documents are currently available? ("D"=Draft, "F"=Final)
State if the doc isn't applicable, or the info is in another plan.
Document Location/Comments
-------- -----------------
____ Investigation Report
____ Product Requirements
____ Project Plan
____ Functional Specification
____ Design Specification No plans available.
____ Development Plan
____ Defect Containment Plan
____ Unit Test Plans
____ Test/Validation Plan
____ Integration Test Plan
___ Final Qualification Plan
____ Doneness Criteria
____ Others (specify)
What types of development effort does this project involve?
(estimate the % of the total project effort that each type comprises)
_70% Code design & development within OSSG
____ OSSG porting of code from an external source
_30% OSSG porting of code from an internal source
____ Subcontracted development or porting by external vendor
____ OSSG modification & integration of a third party product
____ OSSG resale of a third party product
____ Proactive maintenance
____ Other (describe)
III: Defect Containment Status
Activity Estimated/Planned Completed to date
-------- ----------------- -----------------
Defect Identification Goals
100% code inspected:
90% code unit testing:
%code integration tested
%code(features?) validated
Defect data:
#Defects found in Design reviews/inspections
n/a
#Defects found in code reviews/inspections
1
#defects found in Unit Test
1
#defects found in Integration Test
n/a
#defects found in Validation (includes Field Test)
none
Total number of defects found pre-release
2
How were estimates derived? Given by development on implementing fixes.
How were actual values measured? The debug compiler found a bug, and code
inspection also proved valuable.
Describe any other methods used to remove defects prior to integration
(i.e. design reviews, etc) none
IV. Integration & Testing Status
What percentage of code for this project has been integrated with other
system components?
1% or less
How was/will readiness for integration with other system components
be determined? This was determined by the hardware engineering group.
4 sites testing.
List & describe all tests currently available for testing this project.
SCSI I/O load tests already available in QTV.
Are any existing tests run on a regular basis against the project's
code base? By whom?
Bob Young, Bill Clogher, Chuck Poat.
When will the code integration for this project be completed?
Done
V. Configurations
Indicate which platforms & operating systems this project will
support by placing VERSION NUMBER(s) in the appropriate boxes.
----------------------------------------------------------------
|        |    INTEL    ||               DIGITAL                 |
|        |-------------||--------------------------------------|
|        | NT  |  W95  || Alpha NT |OVMS Alpha|OVMS VAX| UNIX  |
|--------|-----|-------||----------|----------|--------|-------|
| Client |     |       ||          | v6.2-1h3 |v6.2-1h3|       |
|        |     |       ||          | v7.1     | v7.1   |       |
|--------|-----|-------||----------|----------|--------|-------|
| Server |     |       ||          | same     | same   | same  |
|        |     |       ||          |          |        |       |
----------------------------------------------------------------
Describe any concerns, issues, restrictions or testing considerations that
apply to specific
hardware platforms:
firmware revisions:
network transports:
mixed architecture clusters:
mixed version clusters:
software products &/or versions:
- V6.2-1H3 needs a modified PKQdriver
with 5.01 firmware.
- V6.2 does not support target mode.
- V6.2 does not support high SCSI IDs.
- V7.1 needs a modified PKQdriver.
VI. QTV Requirements for the Project
Check all areas where QTV's involvement could significantly impact the
quality and timely delivery of this project, and estimate dates when
resources should be assigned.
Activity Date Resource Needed?
-------- ---------------------
__X_ Defect Containment Consulting
__X_ Size Estimation
____ Moderating Inspections
____ Participation in Code Inspections
____ Use of SLIM control (consulting or implementation)
__X_ Problem Management
__X_ Metrics &/or Doneness Criteria
__X_ Pre-integration Defect Identification
__X__ Post-Integration Defect Identification
__X__ Final Qualification
____ In-Use Testing only
Describe all testing activities that you believe are required of QTV
for this project.
- Installation
- validation testing of Ultra Scsi hardware
- validation testing of Ultra Scsi firmware
- validation testing of Ultra Scsi drivers
- Target testing
- Matrix testing
- qual testing
- assisting in problem resolution
What skills are required of QTV to successfully perform the testing
described above?
SCSI knowledge of testing and configurations.
Describe all known hardware requirements of this project that have not
already been specified above.
Need UltraSCSI drives for the HSZ70 and KZPBA B-prime Q-logic
modules.
VII. Dependencies & Risks
Identify & describe any dependencies this product has on any other
project development efforts (DIGITAL or 3rd party), and state who is
managing each dependency.
Stable firmware for Q-logic
Explain all known risks, issues and concerns with the quality of this
product or its integration for delivery by OSSG to DIGITAL customers.
Explain what is being done to mitigate the risk and/or what
contingency plans are in place should the risk unfold. (If nothing,
then state that.)
Development is concerned about Q-logic firmware.
Meetings are constantly going on with Q-logic engineering.
Are there concerns about impact on reliability or interoperability
with existing software, hardware &/or firmware?
See above.
List specific areas which need special attention.
Multiple host testing with shared bus and Q-logic focus
with Hsz70.
List other risks such as: insufficient time, too few people, limited
exposure and/or use and hardware concerns.
Project management needs to determine if QTV will test
V6.2-1H3 in addition to V7.1. If this is the case, QTV will need
to allocate additional people.
|
483.4 | Hailstorm Contents, etc. | STAR::S_MARCHESANO | | Tue Mar 11 1997 09:18 | 305 |
| From: STAR::SLINDEBERG "24-Feb-1997 2229 -0500" 24-FEB-1997 22:35:02.26
To: @HAILSTORM
CC: SLINDEBERG
Subj: Notes from 2/21 Hailstorm planning meeting
Notes for Hailstorm (V7.1-1H1) planning meeting held:
Friday, February 21
9:00-10:00
Participants:
Jeff Huber, Jim Janetos, Shay Johnson, Martin King, Susan Lindeberg
Susan Marchesano, Chuck Poat, Pat St.Laurent, Joe Schuster, Bob Segal
Bob Silver, Dick Stockdale
=============================================================================
Based on the discussion at the meeting, below is the list of possible projects
and their priorities depending on whether Hailstorm is a Q4 or Q1 deliverable.
It was agreed we will proceed to plan Hailstorm as a Q1 deliverable.
Project Q4 priority Q1 priority
-----------------------------------------------------------------------------
UltraSCSI P1 N/A
EOP, single host
UltraSCSI P2 P0
SSB/FRS, multi host
Takara EV56 P1 P1
Digital Server MIB P1 P1
MX5 EV56 P2 P0* (latent)
Network adapters:
- DE500-BA boot support P1 P1 (P0 if -AA goes EOL)
- ATMworks 351 P1 P1
- Racore PCI Token Ring P2 P1
- DE450 P1 P1
Rawhide EV5/EV56-1/EV56-2 P0 P0
SMP mixed CPU TIMA kit
Memory Channel Q3 TIMA kit P0** P0**
Memory Channel V2.0 P2 ?
UltraSCSI
---------
o Includes P2SE+, P1SE+, KZPBA-xx, HSZ70 and infrastructure
o Single host support only, if necessary, for the Early Order Program (EOP).
Would provide (very)pre-SSB baselevel. If possible, we will pursue
satisfying EOP requirements with either the PKQdriver that has been
made available for testing on V6.2 and/or with Gryphon plus the correct
qlogic firmware.
o SCSI cluster support at SSB
Takara
------
o EV56 single board computer, follow on to K2/Lego
o Still requesting May, 1997 FRS. Considering "SHIP" if this really can beat
Hailstorm schedule.
o They (whatever TOEM is called now) want to do seed early in April. I'd
like Bill Hanley involved in any special arrangement/agreements we need to
have in place for this if we're going to support this.
Digital Server MIB (DSM)
------------------------
o ServerWorks infrastructure
o Discrepancy in understanding of which platforms will be supported. Not
all AlphaServers are to be supported, as it turns out. Only three will
be supported.
AI: (Jeff Huber) Inform Susan L. and Bill Hanley as to which platforms
are to be supported.
STATUS: Done. The four platforms are Rawhide, TurboLaser, Noritake
and Corelle. See the attached mail message.
MX5
---
o EV56 personal workstation
o FRS Q2FY1998. The most we can plan for is latent support. However, this
opens the possibility that severe problems will be found after we ship.
* Contention over whether this is a P1 in Q1 or a P0. P1 says we'll try our
hardest to keep it in the release, but will ship without if it isn't
ready (enough, since we're talking latent here). P0 says we would hold
up the release for it. I don't think we would realistically hold up the
release for latent support, unless there were other factors affecting the
decision. This should be sorted out over the coming weeks.
o We don't know field test plans yet.
DE500-BA boot support
---------------------
o We have runtime support in Gryphon already
o Martin is especially concerned about the -AA going EOL and whether
Hailstorm will be ready in time for -BA support to compensate
ATMworks 351 (Meteor)
---------------------
o The 350 is expected to go EOL as well. This involves lower volumes, so
is not as much a concern about being ready at the same time.
Racore PCI Token Ring
---------------------
o Bug fix needed for existing token ring takes care of the problem with this
too. So not anticipating further changes for this. I assume we need to
make sure this code gets into the release (?).
DE450
-----
o Replacement chips. We are not expecting any SW changes to be needed, but
we are keeping this on our list for now.
Rawhide EV5/EV56-1/EV56-2 SMP mixed CPU TIMA kit
------------------------------------------------
o This is (being?) done now. We anticipate having to include Rawhide routines
due to the various adapters in Hailstorm, so this will be included too.
[Actually, we expect to have to include support for most of the existing
platforms due to the adapters in this release]
Memory Channel Q3 TIMA kit
---------------------------
** Some contention on whether to include this or not. We'll need to sort this
out in the coming weeks or so. The question to resolve is, if we have no
other reason to include memory channel, should we include this anyway?
o No bug fixes were required after all for V1.5 support
Memory Channel V2.0
-------------------
o We don't know yet if SW changes will be needed for V2.0 support
o The earliest we'll see cards is March and hub, in July. Given these
dates, inclusion in Hailstorm, if changes are needed, is unlikely. We'll
keep watching what happens here.
Tincup
------
o EV56 Rawhide inside Mikasa sheet metal; FRS July
o Since this is supposed to be DSR on 6.2-1H3 and 7.1, I've been asked
to keep it off the list of possible projects as notices go out to other
organizations.
Tlaser/622s
-----------
o Since this is supposed to be DSR on 6.2-1H3 and 7.1, I've been asked
to keep it off the list of possible projects as notices go out to other
organizations.
Projects not under consideration for Hailstorm:
-----------------------------------------------
(Watch out Raven!) For the record...
o CIPCA
- nothing in the works at the moment
o Symbios UltraSCSI adapters
- Program direction and schedule is unclear
o SCSI device multiport failover
- Schedule has slipped to what will likely be the Raven timeframe.
o CMD array controller (Anaconda)
- We may decide to not bother to support at all
- If we do proceed, it will be post Hailstorm timeframe.
o Ethernet/ISA adapter (DE305)
- Probably Raven timeframe
o DE500-CA
- Probably Raven timeframe
o ATMworks 370
- Probably Raven timeframe
o Universal Serial Bus (USB)
- Post Raven
o RNS2300 Quad-Fast Ethernet
- Possibly Raven timeframe
ISSUES/QUESTIONS:
-----------------
o Project plans for projects
Most don't have any yet. (exceptions include one for ATMworks 351 and
it looks like we have a pointer below to something for DSM)
Many of the plans out of the hardware groups are still sketchy.
o Testing and Qual plans for projects
From the HW groups, QTV. What is planned? What are the testing
requirements? What is needed from QTV? What does QTV need?
o Should this be a maintenance release?
Given all the adapters, there will be a lot of drivers. In addition,
nearly all the servers will need to be included. Certainly, if this
were a maintenance release, it would do away with the risks associated
with the handcrafting we now have to do. However, we'd be including
all kinds of changes then, and it would change the testing and Qual
scope. We'd be in the same boat as functional releases of
balancing the SW need for stability with the last minute changes that
have to be accepted for HW. Further, Fred K. is proposing to various
OSSG managers a different approach to shipping new HW support that as I
understand it includes maintenance. This obviously could have
ramifications for Hailstorm, if accepted in some form.
AI: (SUSAN LINDEBERG) Get into that meeting.
STATUS: Done. I'll be attending.
o Field test administrator
If this is needed, Susan Marchesano will also act in that role.
Currently, the requirements are unclear.
o DECevent
Hardware engineering is responsible for working this out with the
DECevent group. The project leaders (for the CPU or whatever) here
are responsible for nudging the HW groups, making sure they take care
of this. And the project leaders in Hailstorm can expect some nudging
from me.
We expect something will need to be done for Takara, MX5 and Memory
Channel. Bob Silver believes nothing new is needed for UltraSCSI stuff.
o SCSI device naming
A serious problem has been found in Gryphon. Will we need to/want
to include whatever fixes are made for this in the HW release as
well (assuming we keep to a limited HW release)?
AI: (BOB SILVER) Look into what's involved and possible ramifications
for Hailstorm.
o FW
Roger Allen is the keeper of the CD schedule now. Sounds like Bob Segal
and I need to get on his distribution. CD version to watch will
probably be V4.0.
A present issue in the FW space is that Reuters is used to using VAX
and doing unattended installs. They want to continue their same
practices. This may be addressed with FW and/or SW changes. If SW,
it would be something included in SYS$EXAMPLES.
o Release Project Leader
So far we have none. Note, we *do* have a builder...Kim Rivers.
Don plans to make an offer to someone for the project leader role.
We may have someone soon, but this person will be new and will need
everyone's support to get up to speed.
o Using MultiHelp instead of QAR
Chris Shaver is working out funding issues. Ideally that and training
will be taken care of soon if we're going to go with it. Folks at
this meeting were receptive to trying out MultiHelp. We may need
a good import (from QAR) capability and if there are problems, a
good export (to QAR) capability.
Speaking of QARs I have not yet opened up a QAR database for
Hailstorm. If this gets to be an issue while MultiHelp and release
project leader issues are still unresolved, please let me know.
o Project team and distribution
I'll talk with folks off line to get names of folks in our organization
and in the hardware groups to be included on the team or in the
distribution.
o Hailstorm stream
Is open. But nothing being accepted yet and no builds yet. We should
have a project leader in place first. I am currently receiving notices
of checkins into the 7.1 remedial stream to observe checkins that may
be of note for Hailstorm.
o Future meetings
Stuff will be worked off line for the next few weeks before detailed
scheduling of the release is done.
!
!distribution:
!
nm%jhuber ! Jeff Huber
nm%janetos ! Jim Janetos
nm%kleinsorge ! Fred Kleinsorge
nm%sjohnson ! Shay Johnson
nm%hanley ! Bill Hanley
nm%king ! Martin King
nm%stockdale ! Dick Stockdale
nm%jstickney ! Jay Stickney
nm%payne ! Rod Payne
nm%s_marchesano ! Susan Marchesano
nm%poat ! Chuck Poat
nm%st_laurent ! Pat St.Laurent
nm%rsilver ! Bob Silver
nm%coughlan ! Tom Coughlan
nm%schuster ! Joe Schuster
nm%jareth::bsegal ! Bob Segal
nm%slindeberg ! Susan Lindeberg
From: STAR::JACOBI "Paul A. Jacobi - OpenVMS Development" 24-FEB-1997 16:04:19.47
To: slindeberg,hanley,rraymond
CC:
Subj: FWD: Platforms that support the Server Systems MIB
The Server System MIB will support Rawhide, TurboLaser, Noritake and Corelle
platforms. Complete details are available in the functional and design specs.
-Paul
From: STAR::JACOBI "Paul A. Jacobi - OpenVMS Development" 20-FEB-1997
16:53:26.80
To: @swkrnl
CC:
Subj: Updated Spec on OpenVMS Kernel Support for the Server Systems MIB
I have completed a major update of the functional and design
specifications for OpenVMS Kernel Support of the Server Systems MIB.
This spec reflects what has now been implemented in a private OpenVMS
baselevel for Rawhide systems.
WORD V7.0, PostScript, and HTML versions of the document are available
at the following URL:
http://bulova.zko.dec.com/people/jacobi/pajhsm.htm
Other documents related to the project are also available at the same
location.
-Paul
|
483.5 | Hailstorm QTV resources | STAR::S_MARCHESANO | | Tue Mar 11 1997 11:04 | 111 |
| From: MACHU::POAT 6-MAR-1997 08:35:29.96
To: SUSAN
CC: CHRIS,MICK,PETE,POAT
Subj: Quick review of the projects in Hailstorm
Susan,
This is a quick 10000 foot view of the projects in Hailstorm.
Evaluations need to be done on P1SE+, P2SE+, DE500-BA, Takara and MX5.
We also need to decide where projects will be done in Hailstorm. Because of
limited resources QTV engineers may be responsible for more than
one project in Hailstorm. This may not be as bad as it sounds because of
the inuse only projects, but we'll have to be careful and not overcommit
any one cluster.
We should come up with some type of plan to cover the inuse only projects.
This can be as simple as a checklist that is tied to a project schedule.
We need a means of tracking what is done in the release with inuse only
projects. I am going to write a boilerplate test plan for inuse platforms.
Memory Channel Glen's cluster
Glen is currently doing V1.0 testing.
Size of MC effort for Hailstorm not clear at this time.
Ultra SCSI
KZPBA/HSZ70 Bob's cluster
Evaluation completed
Validation plan being written
Testing has started.
P2SE+ Needs a home!
Evaluation needed
Validation plan may be required based on the evaluation
P1SE+ Needs a home!
Evaluation needed
Validation plan may be required based on the evaluation
Concerns:
The amount of SCSI cluster testing required for Hailstorm can not be done
in a single cluster. We need to determine where the other adapters will
be tested and who will coordinate the test efforts.
Takara Chuck's cluster
Evaluation: Needed
Waiting for project plan. Chuck will do the evaluation
I expect that Takara will be an inuse project.
Takara may be a SHIP candidate.
MX5 Chuck's cluster
Evaluation: Needed.
Waiting for project plan Chuck will do the evaluation
MX5 may require a validation plan because of the work around
for problems with the CPU chip set.
DE500-BA Needs a home!
Evaluation: Needed
Driver wasn't qualed and has latent support in Gryphon.
The boot driver is new code.
Validation plan may be required based on the evaluation
ATMworks 351 Chuck's cluster
Evaluation Completed.
Validation Plan: Inuse testing
ATM equipment will be moved to Chuck's cluster as soon as possible.
Racore PCI Token Ring Needs a home!
Evaluation: Not required
Validation Plan: Inuse testing.
No changes to SYS$IWDRIVER.EXE required for support.
Changes required for autoconfigure only.
DSRs
Tincup
I don't think we'll see Tincup.
Tlaser/622
Not sure if anything will be required of QTV. If these are available
we should plug them into a Tlaser because of the potential problems
with Tlaser speed changes.
Rawhide EV5/EV56 Chuck's cluster
The EV56 modules are available now from the systems group. I will install
them in my cluster (EBJB18) for a 2 to 4 week inuse only test with 7.1.
|
483.6 | ultra scsi validation plan | EVMS::BYOUNG | | Fri May 02 1997 15:37 | 1052 |
|
Bob Young
Quality Test and Verification
21-MAR-1997 Revision A
01-MAY-1997 Revision B (include changes from QTV review meeting)
Reviewers/Approvers:
Pete Michaud
Chuck Poat
Sue Marchesano
Tom Coughlan
Curt Spacht
Glen Glatfelter
Bob Silver
Validation Plan for Ultra_SCSI Clusters.
Goal:
The goal of this effort is to validate the Ultra SCSI bus, KZPBA,
P1SE+, and P2SE+, with the HSZ70, under the OpenVMS V7.1 Hailstorm release. The
variations of KZPBA adapters QTV will be qualifying include the CA,
DA, and DB. The focus and priority of QTV's Hailstorm testing will
be with KZPBA-DB and KZPBA-CA, and the KZPBA-DA will be done later
when it becomes available. We will focus on identifying must-fix
software defects and make a statement of quality relative to the
quality goals and expectations for these projects under the software
release listed here.
Finding defects is planned because defects are the #1 detraction
from ensuring a quality product. Defect estimations for each of
the products are detailed in Appendix B. Based on the calculations,
to date, there have been 3 defects identified in the new code. In the
process of doing some previous testing with the driver,
approximately 5 driver defects have been found that are not related
to the new code. In addition, a significant number of defects in
the firmware have also been uncovered. Based on this, QTV's goal will
be to identify another 7 defects in the driver and up to 2 additional
firmware defects.
Non-goals:
- Performance guarantees.
- Battery backup testing.
- SCSI tape or CD-ROM on shared bus.
- Although DECNET will be used, our focus will not be
on DECNET testing.
Problem Statement:
Although the PKQDRIVER has been checked into the GRYPHON release,
there have been some modifications since. A list of these modications
and new hardware are listed below.
- PKQDRIVER
- New Firmware for Q-Logic adapters. (old and new)
- New Q-Logic adapters with Ultra support.
- New Ultra SCSI HSZ70 and Ultra SCSI disks.
- New HSZ70 Firmware.
- Support for Multihost Q-Logic.
- New DOC module for BA356 Ultra wide support.
- New cables.
Quality Goals:
The following list of quality goals will be used as guidelines to
measure the behavior of the ULTRA SCSI VMScluster environment
during the validation testing with KZPBA and HSZ70.
o Correctness - Ultra SCSI: Doc modules configure properly and
SCSI cluster operations are unaffected
by the presence of Ultra SCSI
components.
KZPBA: Ensure that disks connected to KZPBA
perform as documented and that no
unusual circumstances occur when the
KZPBA is subjected to the testing
that is targeted in this plan.
HSZ70 : Ensure that devices configure, boot,
and operate properly according to
the SCSI guidelines.
o Reliability - Under the test conditions described in this plan
we will ensure that the code enhancements for
the KZPBA, or firmware in the HSZ70 do not have
an adverse effect on each other or other cluster
components in the dedicated cluster environment.
o Interoperability - While performing normal cluster transitioning
such as server failovers, node removals from
the cluster, multiple satellite boots, adding
disk bricks with auto configure, node crashes
etc., we will ensure that none of these
effects the normal exchange of information
between systems and disks in the Ultra SCSI
environment.
Verify that mixed versions running Hailstorm, V6.2,
and V6.2-1H3 can coexist in the same cluster
environment, not as shared-bus nodes but within
the same cluster group.
o Integrity - Ultra SCSI: Ensure that devices maintain unique space
within the Ultra SCSI environment. Disk
drives can be shared properly, there is
cluster wide access via direct shared
bus connectivity and MSCP server access,
and that cross node and cross controller
shadowing can be maintained.
KZPBA/HSZ70: Ensure that drives will mount and operate
in a secure manner with regards to booting,
crashing, backups, shadowing (when fixes
are available), autoconfiguring, data
transferring, etc.
o Efficiency - Ultra SCSI: Ensure that there are no visible
performance issues with disk drives
or nodes configured as servers and
satellites. Make sure that server
and disk fail over occur in a timely
fashion and no cluster hangs or
excessive timeouts occur.
KZPBA: Functions performed with the KZPBA will be
compared to those we experienced in
our testing of the KZPSA, and we will look for
any noticeable performance issues as
well.
Overall Approach:
The overall approach will be to test Ultra SCSI functionality,
execute load and stress tests, and perform target testing that is
specific to SCSI and Ultra SCSI. For target testing we will be using
the information supplied from the SCT Notes to best determine the
correct method of testing.
We have also been solicited by the HSZ70 people to target some
specific functionality as outlined in Bert Martens' HSZ70 qual
plan and also outlined in Section B of this validation plan.
Although the major thrust of our validation testing will be with
V7.1 Hailstorm, listed below are the software versions, drivers
and firmware that must also be verified through a subset of testing.
OpenVMS version      Firmware code    Driver         Adapter type
---------------      -------------    ------         ------------
V6.2                 V5.X             stock          1020   (fast-10)
V6.2-1H3             V5.X             stock          1020   (fast-10)
                                                   * 1040B' (fast-20)
V7.1                 V5.X             stock          1020   (fast-10)
                                                   * 1040B' (fast-20)
V6.2-1H3             V5.X             new driver     1020   (fast-10)
V7.1 (Hailstorm)                                     1040B' (fast-20)
Note: Included in the 1020 (fast-10) adapter testing will be P1SE,
P2SE, ITIOP, and KZPDA. These are older adapters and will
have to have a subset of testing performed to verify new
firmware.
* This configuration will be used for booting and loading new
adapters only.
Tactics:
The Validation testing will be performed in the areas of Ultra SCSI
Cluster testing, multiple network circuits (Ethernet,FDDI), system
load testing, multiple disk I/O stress testing, UETP, system
services, RMS, and peripheral device testing. Failover testing will
be performed with MSCP server and boot server nodes. These tasks will
be carried out with a variety of test tools either developed by or
acquired by the QTV group and will be performed on the Ultra SCSI
dedicated cluster. Each configuration will have testing specifically
targeted to test the new functionality of the KZPBA driver and
HSZ70 firmware that could affect the stability and reliability of
the Ultra SCSI cluster and the HAILSTORM release. The Q-LOGIC support
will be tested on a KZPBA fast-20 wide Ultra SCSI PCI controller.
Also present on this bus will be a BA356 with the Ultra SCSI DOC
personality module installed and an HSZ70 that will be configured
Dual Redundant. We are sure that the HSZ70 will undergo a number of
firmware upgrades and a certain amount of regression testing will be
needed. A detail of the Ultra SCSI Cluster configuration that QTV
will be using can be found in Appendix A.
As mentioned previously, QTV has been directed to perform some
functionality testing of the HSZ70. QTV will focus on some of the
specific functional enhancements of the HSZ70 in order to help the
engineering group gain test coverage on all areas of the HSZ70
across multiple groups. The specific functions we will cover can
be found in Section B of this plan.
Additionally, we are asked periodically to target test areas of the
operating system which have recently undergone change in order to
support new devices and features. Also, part of this process is to
ensure that SDCI was adhered to. The SDCI information will be used to
target test.
The validation guidelines for Ultra SCSI, KZPBA, and HSZ70
are listed below:
A. Configurations
- Configure 7 node MAC with DWZZB, DOC modules, and BA356.
- 3 Shared SCSI hosts will be configured with highest priority
SCSI ID numbers.
- Test with 20 meter cable length on one of the buses.
- Test with HUB on one of the Q-LOGIC busses
- KZPBA with old Fast-10 devices.
- With and without port allocation classes set.
- Test with bad port allocation classes configured to create
failures.
- Dump off system disk.
- Configure with DECNET+.
- Mixed versions of VMS within the cluster. Hailstorm on
shared SCSI bus Alpha nodes and satellite nodes, with
Alpha V6.2-1H3 and VAX V6.2 satellites.
- Although we will be using a number of mixed type fast-10
wide and narrow disks throughout our configuration,
the specific ULTRA Disk storage units used during testing
are listed below.
* RZ1BB-BA 2 gb wide Ultra SCSI disk
* RZ1CB-BA 4 gb wide Ultra SCSI disk
* RZ1DB-BA 9 gb wide Ultra SCSI disk
- HSZ70
* Quorum Disk
* Stripe set
* Mirror set
* Stripe/Mirror set
* Raid set
* Dual-Controller fail over
- Shared bus configuration with 3 nodes.
* Configuration #1 - Multi Host Ultra SCSI Bus
Single Host Ultra SCSI Bus
* Configuration #2 - Multi Host Non Ultra SCSI Bus
Single Host Non Ultra Bus and
KZPSA interaction.
* Configuration #3 - Multi Host/Multi Adapter Ultra
Bus.
* Configuration #4 - P2SE+ Satellite configuration.
(A detailed description of these configurations can
be found in Appendix A.)
- All discrete drives on KZPBA bus. This seems to
be the worst case for arbitration.
- HSZ40 and HSZ70 on same bus as KZPBA.
- Shared Node as boot server.
- Shared node as disk server.
- Listed below are the Adapters to be targeted during our
Ultra SCSI qualification testing and the configurations
they will be tested on:
                     |                TEST CONFIGURATIONS                  |
+--------------------+--------+-----------+-----------+-----------+---------+
|                    | m-host | m-host    | Sngl host | Sngl host | Multiple|
|                    | ultra  | not ultra | ultra     | not ultra | adapter |
+--------------------+--------+-----------+-----------+-----------+---------+
| KZPBA-CA (note 2)  |        |           |     X     |     X     |    X    |
+--------------------+--------+-----------+-----------+-----------+---------+
| KZPBA-DA (note 3)  |   X    |     X     |     X     |     X     |    X    |
+--------------------+--------+-----------+-----------+-----------+---------+
| KZPBA-DB           |   X    |     X     |     X     |     X     |    X    |
+--------------------+--------+-----------+-----------+-----------+---------+
| P1SE+              |        |           |     X     |     X     |         |
+--------------------+--------+-----------+-----------+-----------+---------+
| P2SE+              |        |           |     X     |     X     |         |
+--------------------+--------+-----------+-----------+-----------+---------+
Note: 1. QTV does not plan to do multiple P1SE+ and P2SE+ adapter
testing within the same system; we will, however, have multiples
in our cluster.
2. Not clustered.
3. Not available for Hailstorm schedule. Testing will be after
Hailstorm release.
                     |                TEST CONFIGURATIONS                  |
+--------------------+--------+-----------+-----------+-----------+---------+
|                    | m-host | m-host    | Sngl host | Sngl host | Multiple|
|                    | ultra  | not ultra | ultra     | not ultra | adapter |
+--------------------+--------+-----------+-----------+-----------+---------+
| KZPDA              |        |           |           |     X     |         |
+--------------------+--------+-----------+-----------+-----------+---------+
| P1SE               |        |           |           |     X     |         |
+--------------------+--------+-----------+-----------+-----------+---------+
| P2SE               |        |           |           |     X     |         |
+--------------------+--------+-----------+-----------+-----------+---------+
| ITIOP              |        |           |           |     X     |         |
+--------------------+--------+-----------+-----------+-----------+---------+
Note: 1. Because firmware changes have been made to these devices, QTV
will load new firmware to the devices and check boot paths.
2. An I/O load will also be selected as a verification test. We
do not plan to do a full set of testing with these adapters.
B. HSZ70 Functional Testing
SCHEDULE: Start- , Finish- (8 days)
While testing the HSZ70 we were asked to perform some specific
functional matrix testing as called out in the HSZ70 qual plan
sent out by Bert Martens. A list of these tests is included
here.
Configuration #2
----------------
- Testing of different Ultra SCSI drive types with DILX.
- Select bandwidth or I/O request rate within the HSZ70
controller utilizing Arbitrator Management via CLI.
- Remove Cache policy switches.
- Exclusive mapping for Host LUN's.
- Error logging enhancements.
- Large transfer optimization.
- Setting device port burst size.
- Setting host SCSI speed.
- Setting device SCSI speed.
- Dual Redundant controller failover.
C. I/O Loading and Stress Testing
SCHEDULE: Start- , Finish- (16 days)
Configuration #1, #2, #3, #4
----------------------------
- UETP testing.
- LBN testing.
- DCHECK testing.
- DIOE testing.
- CTM (HIO + FILEX)
- SITP
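As a concrete illustration of the UETP item in the list above, below is a
minimal sketch of how one UETP pass is normally kicked off on a cluster
member. The other load tools listed (LBN, DCHECK, DIOE, CTM, SITP) are
internal QTV tests with their own startup procedures, so only the
standard UETP kit is shown; phases, load, and pass count are chosen at
the prompts.

$! Minimal sketch: one UETP pass, run from the SYSTEST account.
$ SET DEFAULT SYS$TEST
$ @UETP        ! answer the prompts to select phases, load, and passes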
D. Shadow Testing
SCHEDULE: Start- , Finish- (8 days)
Configuration #1
----------------
- Shadow HSZ70 Ultra disk and BA356 ultra disk.
- Shadow across private and shared bus
- Serve Shadow sets.
- Backup/Physical/Compare
Configuration #3
----------------
- Shadow across multiple SCSI adapters
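A minimal DCL sketch of the kind of shadow set these items exercise,
assuming one member on a private bus and one on the shared bus. The
virtual unit number, member device names, and volume label below are
placeholders, not the actual QTV configuration.

$! Build and mount a two-member shadow set cluster-wide (example names).
$ MOUNT/CLUSTER DSA42: /SHADOW=($1$DKB100:, $4$DKC200:) SHADTEST
$ SHOW DEVICES/FULL DSA42:   ! watch the copy/merge state of the members
$! ... run the backup/physical/compare and serving tests against DSA42: ...
$ DISMOUNT/CLUSTER DSA42:    ! dismount cluster-wide when the pass completes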
E. Transition Testing
SCHEDULE: Start- Finish- (10 days)
* Transition testing performed on all Configurations.
---------------------------------------------------
- Remove and add a cluster member with and without a quorum
disk and with and without enough votes for the cluster to continue.
- Remove and add member to a shadow set member (also as system
disk).
- Simultaneous booting of cluster members.
- Simultaneous shutdown of cluster nodes.
- Simultaneous dump writing of cluster nodes.
- Boot a node and shutdown a node simultaneously.
- MSCP server failover.
- SYSMAN to AUTO reboot a node and the entire cluster after
shutdown.
- Alpha Boot server (KZPBA) to VAX Satellite.
SCHEDULE: Start-, Finish- (3 day)
- Power down a node in the cluster.
- Control P HALT a node connected to the shared bus SCSI.
- Initialize one node while the others are up.
- Cold cluster power-up with auto boot.
SCHEDULE: Start- , Finish- (3 day)
- HSZ70 controller failover.
- Disk server and boot server failover on shared SCSI.
- Boot from KZPAA controller and direct dumps to KZPBA disk.
- Preferred Path switching.
- Copy, Merge Shadow set.
- MOUNT/DISMOUNT shadow set.
- Boot HSZ70 cluster common system disk.
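For the SYSMAN-driven shutdown/reboot cases above, a minimal sketch
follows. The node names are examples taken from the configuration
diagrams, and the SHUTDOWN NODE qualifiers are quoted from memory, so
they should be checked against the SYSMAN utility documentation before
use.

$! Sketch: shut down and automatically reboot two cluster members at once.
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/NODE=(SISKO,FLAM10)
SYSMAN> SHUTDOWN NODE/MINUTES_TO_SHUTDOWN=0/AUTOMATIC_REBOOT
SYSMAN> EXIT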
F. Exception Testing (Faulty Towers etc.)
SCHEDULE: Start- , Finish- (3 days)
* Exception testing performed on all Configurations.
---------------------------------------------------
- Crash Connection test.
- Crash Port test.
- Crash Virtual Circuit test.
- Mount Verify test.
- Hog SCS Locks test.
- Message Credit Exhaust test.
- RSPID Exhaustion test.
- Move Clusterwide Locks test.
- Full Rebuild test
- Exhaust BDT, CDT
- Maintenance resets
G. Hot Plugging
SCHEDULE: Start- , Finish- (4 days)
* Hot Plug testing performed on all Configurations.
---------------------------------------------------
- Remove/Replace an Initiator (host). Done at host side of bus.
- Remove/Replace DWZZB and DWZZC with power applied.
- Remove/Replace shared bus SCSI disk.
- Remove/Replace HSZ40 and HSZ70 from active shared bus.(Power
applied)
- Remove/Replace DOC cable, DWZZB cable, and DWZZC cable.
Deliverables:
- Reviewed Ultra SCSI Validation plan covering the KZPBA QLOGIC (PKQDRIVER)
and HSZ70.
- Ultra SCSI Validation plan to project distribution.
- Weekly status on validation testing.
- Problem reporting to MULTIHELP QAR data base.
- Statement of quality.
- Attend HSZ70 status meeting via phone hook-up.
Schedule:
Based on the validation testing in this plan, QTV will start
testing on 4/28/97 and complete its testing on 7/14/97 (this does
not factor in target testing). There are an estimated 55 days of
testing planned and we estimate 4 hours of testing per target test.
Although we have done some boot path and I/O stress testing with
Ultra SCSI, we will need to reconfigure the cluster and target
some specific configurations as a result of the REV A review of
the validation plan. The calendar dates are dependent on the
stability of the KZPBA Q-LOGIC controller and driver, Ultra SCSI
hardware, and the HSZ70 firmware code. Any problems found during
target testing will also impact QTV's schedule. Target testing will
most likely occur throughout this schedule for areas specifically
affecting SCSI.
Staffing:
- 3 people are required to complete the work:
Bob Young
Glen Glatfelter
Chuck Poat
Hardware requirements:
- KZPBA controllers.
- P1SE+ and P2SE+ adapters.
- Wide ultra SCSI disks.
- Ultra wide cables and terminators.
- HSZ70 with all appropriate cables and connectors.
- DOC personality modules for BA356 boxes.
- Turbo Laser Alpha machine.
- P1SE and KZPDA. (Bill Clogher has agreed to loan us these)
- DWZZC
Dependencies & Risks:
Along with the specific areas of risk, the routines which are not
called out as being at risk will be exercised to ensure correct
operation (APB boot drivers, net drivers, etc.).
- Quality Console Firmware delivered in a timely fashion
- Stability of KZPBA QLOGIC and PKQDRIVER.
- Any problems that cannot be fixed in a timely manner.
- HAILSTORM schedule.
- Any HAILSTORM show stoppers.
- KZPBA hardware modules and appropriate cables and DWZZC's.
Appendix A:
Dedicated SCSI Cluster Configuration
------------------------------------
+------------------+
| DecConcentrator |
+----+----+---+----+
FDDI | | | FDDI
+-----+ |F +-----------------+
| |D |
| |D |
| |I |
| | ETHERNET |
T--+--------(-+--------(-+----------+--------(-+----------+----------+--T
| | | | | | | | | |
+--+---+ ++-+---+ ++-+---+ +--+---+ ++-+---+ +--+---+ +--+---+
|Rhide | |Sable | |Lynx | |VAX4K | |flam5 | |cobra | | vax |
| | | | | | | | | | | | |3100 |
+--+---+ +--+-+-+ +--+-+-+ +--+---+ +------+ +------+ +------+
| | | | | |
| | | | | |
| | +--------|-+--------+
| | DSSI | DSSI
|S |S |S
|C |C |C
|S |S |S
|I |I |I
T--+----------+----------+----------------------T
SCSI (3 rails)
DSSI CONFIGURATION
------------------
+---------+ +---------+ +---------+
| SABLE | | SABLE | | VAX4K |
| (SISKO) | | (FLAM10)| | (CHWOWA)|
| | | | | |
| | | | | DIA203 |
| 7 | | 5 | | 0 |
+----+----+ +----+----+ +----+----+
| | |
| | |
| | |
| | |
T------+---------------+---------------+------+----T
|
|
+---+---+
| DIA2 |
| DIA3 |
| DIA4 |
| DIA6 |
+-------+
* Actual system names and detailed disk names will be provided in
status reports.
- CONFIGURATION #1 -
Multi-Host Ultra Bus
--------------------
Single Host not Ultra
---------------------
Ultra Ultra
Doc/disk Doc/disk
| |
| |
+-----------+ +-----------+ | +-----------+ |
| | | +2--+ | +2--+
T----+ kzpba-ca +----------+ kzpba-da +1---------+ kzpba-db +1--hsz70--+
| | | | | | |
| | | | | | Doc/ultra
T----+ ncr810 +----------+ ncr810 +----------+ ncr810 | ba356
| | | | | |
| | | | | |
T----| kzpsa +----------+ kzpsa +----------+ kzpsa |
| | | | | |
| | | | | |
T----+ p1se+ +---+ | | T---+ p1se+ +---+
| | | | | | | |
+-----------+ | +-----------+ +-----------+ |
| |
| |
Not Ultra Ultra
disks disks
Note: 1. P1SE+ Ethernet connection will be utilized for cluster
SCS traffic.
2. Ultra SCSI Single Ended to Differential controllers will
utilize DWZZC-DA.
- CONFIGURATION #2 -
Multi-Host Non Ultra Bus
------------------------
and KZPSA Interaction
---------------------
Non Ultra Non Ultra
| |
| |
+-----------+ +-----------+ | +-----------+ |
| | | +2--+ | +2--+
T----+ kzpba-ca +----------+ kzpba-da +1---------+ kzpba-db +1--hsz70--+
| | | | | | |
| | | | | | hsz40
T----+ ncr810 +----------+ ncr810 +----------+ ncr810 |
| | | | | |
| | | | | |
T----| kzpsa +----------+ kzpsa +----------+ kzpsa |
| | | | | |
| | | | | |
T----+ p1se+ +---+ | | T---+ p1se+ +---+
| | | | | | | |
+-----------+ | +-----------+ +-----------+ |
| |
| |
Not Ultra Ultra
disks disks
Note: 1. P1SE+ Ethernet connection will be utilized for cluster
SCS traffic.
2. Ultra SCSI Single Ended to Differential controllers will
utilize DWZZC-DA.
- CONFIGURATION #3 -
Multi-Host/Multi-adapter Ultra Bus
----------------------------------
Ultra Ultra
Doc/disk Doc/disk
| |
| |
+-----------+ +-----------+ | +-----------+ |
| | | +2--+ | +2--+ (note 2)
T----+ kzpba-ca +----------+ kzpba-da +1---------+ kzpba-db +1--hsz70--+
| | | | | | |
| | | | | | Doc/ultra
T----+ ncr810 +----------+ ncr810 +----------+ ncr810 | ba356
| | | | | |
| | | | | |
T----| kzpsa +----------+ kzpsa +----------+ kzpsa |
| | | | | |
| | | | | |
T----+ kzpba-ca +----------| kzpba-da +1---------+ kzpba-db +1----(note 2)
| | | +2---+ | +2--+
+-----------+ +-----------+ | +-----------+ |
| |
| |
Non-Ultra Ultra
disks disks
Note: 1. Ultra SCSI Single Ended to Differential controllers will
utilize DWZZC-DA.
2. Connection will be swapped into a DWZZH-03 hub, and a
quick check will be done with a BA356 hub.
- CONFIGURATION #4 -
Ultra SCSI P2SE+ Satellite Node
-------------------------------
Non Ultra Ultra
| |
| |
+-----------+ | |
| +2--+ |
T----+ p2se+ +1-----------------+
| |
| |
T----+ ncr810 +------internal disks
| |
| |
| |
| |
+-----------+
Note: 1. P2SE+ ethernet connection will be configured into
the Ultra SCSI cluster so satellite disks will be
served to the node via the targeted P2SE+ adapter.
Ultra SCSI HUB Utilized in Configuration #3
-------------------------------------------
+--------+ +---------+
| | ID #7 | SISKO |
| +-------+KZPBA-db |
| | +---------+
| |
|DWZZH-03| +---------+
| HUB | ID #6 | FLAM10 |
| +-------+KZPBA-DB |
| | +---------+
| |
| | +---------+
| | | HSZ70 |
| tri \ +-------+ |
| link / +---+ +---------+
| | |
| | | +---------+
+--------+ | | HSZ70 |
+---+ |
+---------+
Notes:
1. SW800 consists of the following disks - DKD101
DKD102
DKD201
2. SW300 consists of the following disks - DKD302
DKD303
DKD305
DKD307
DKD401
Appendix B:
Project Evaluations
-------------------
QTV Project Evaluation for ULTRA SCSI/HSZ70
Purpose: To assess the state of this project, determining:
- Quality risks & focus areas
- Estimate of QTV resources & effort
- Inputs to the Validation Plan
- A documented common understanding between QTV and the
development organization of the current state in terms
of quality, and the intended final state of the project
I. General Information
Name of project: ULTRA SCSI/HSZ70
Author of evaluation: BOB YOUNG
Date of this evaluation: 1/27/97
Engineering contact: Bob Silver
Purpose of Project: Implement Ultra SCSI into the HAILSTORM stream.
II. Development Status
What stage of Life of a Project (LOP) product development stage
is the project currently in? Put an X preceding the development stage.
Project Phase Status (%complete/not planned/n.a.)
------------- ------------------------------------
____ Problem Statement
____ Investigation
____ Draft Plan & Design
____ Detailed Project Planning
____ Detailed Design/Functional Spec
____ Implementation
__X_ Validation Testing
____ Assess Doneness
Which project documents are currently available? ("D"=Draft, "F"=Final)
State if the doc isn't applicable, or the info is in another plan.
Document Location/Comments
-------- -----------------
____ Investigation Report
____ Product Requirements
____ Project Plan
____ Functional Specification
____ Design Specification No plans available.
____ Development Plan
____ Defect Containment Plan
____ Unit Test Plans
____ Test/Validation Plan
____ Integration Test Plan
____ Final Qualification Plan
____ Doneness Criteria
____ Others (specify)
What types of development effort does this project involve?
(estimate the % of each type that comprise the total project effort)
_70% Code design & development within OSSG
____ OSSG porting of code from an external source
_30% OSSG porting of code from an internal source
____ Subcontracted development or porting by external vendor
____ OSSG modification & integration of a third party product
____ OSSG resale of a third party product
____ Proactive maintenance
____ Other (describe)
III: Defect Containment Status
Activity Estimated/Planned Completed to date
-------- ----------------- -----------------
Defect Identification Goals
100% code inspected: 26-Jan-1996
90% code unit testing:
%code integration tested
%code(features?) validated
Defect data:
#Defects found in Design reviews/inspections
n/a
#Defects found in code reviews/inspections
1
#defects found in Unit Test
1
#defects found in Integration Test
n/a
#defects found in Validation (includes Field Test)
none
Total number of defects found pre-release
2
How were estimates derived? Provided by development based on their experience implementing fixes.
How were actual values measured? The debug compiler found a bug, and code
inspection also proved valuable.
Describe any other methods used to remove defects prior to integration
(i.e. design reviews, etc) none
IV. Integration & Testing Status
What percentage of code for this project has been integrated with other
system components?
1% or less
How was/will readiness for integration with other system components
be determined? This was determined by the hardware engineering group.
Four sites are testing.
List & describe all tests currently available for testing this project.
SCSI I/O load tests already available in QTV.
Are any existing tests run on a regular basis against the project's
code base? By whom?
Bob Young, Bill Clogher, Chuck Poat.
When will the code integration for this project be completed?
Done
V. Configurations
Indicate which platforms & operating systems this project will
support by placing VERSION NUMBER(s) in the appropriate boxes.
---------------------------------------------------
| INTEL || DIGITAL |
|-------------||------------------------------------|
| NT | W95 || Alpha NT |OVMS Alpha|OVMS VAX|UNIX |
-------|-----|-------||----------|----------|--------|-----|
|Client | | || | v6.2-1h3|v6.2-1h3| |
| | | || | v7.1 | v7.1 | |
|-------|-----|-------||----------|----------|--------|-----|
|Server | | || | same | same |same |
| | | || | | | |
-----------------------------------------------------------
Describe any concerns, issues, restrictions or testing considerations that
apply to specific
hardware platforms:
firmware revisions:
network transports:
mixed architecture clusters:
mixed version clusters:
software products &/or versions:
- V6.2-1H3 needs modified PKQdriver
with 5.01 firmware.
- V6.2 does not support target mode.
- V6.2 does not support SCSI high ID numbers.
- V7.1 needs modified PKQdriver.
VI. QTV Requirements for the Project
Check all areas that QTV's involvement could significantly impact the
quality and timely delivery of this project, and estimated dates when
resources should be assigned.
Activity Date Resource Needed?
-------- ---------------------
__X_ Defect Containment Consulting
__X_ Size Estimation
____ Moderating Inspections
____ Participation in Code Inspections
____ Use of SLIM control (consulting or implementation)
__X_ Problem Management
__X_ Metrics &/or Doneness Criteria
__X_ Pre-integration Defect Identification
__X__ Post-Integration Defect Identification
__X__ Final Qualification
____ In-Use Testing only
Describe all testing activities that you believe are required of QTV
for this project?
- Installation
- validation testing of Ultra SCSI hardware
- validation testing of Ultra SCSI firmware
- validation testing of Ultra SCSI drivers
- Target testing
- Matrix testing
- qual testing
- assisting in problem resolution
What skills are required of QTV to successfully perform the testing
described above?
Knowledge of SCSI testing and configurations.
Describe all known hardware requirements of this project that have not
already been specified above.
Need Ultra SCSI drives for the HSZ70 and KZPBA B-prime Q-logic
modules.
VII. Dependencies & Risks
Identify & describe any dependencies this product has on any other
project development efforts (DIGITAL or 3rd party), and state who is
managing each dependency.
Stable firmware for Q-logic
Explain all known risks, issues and concerns with the quality of this
product or its integration for delivery by OSSG to DIGITAL customers.
Explain what is being done to mitigate the risk and/or what
contingency plans are in place should the risk unfold. (If nothing,
then state that.)
Development is concerned about Q-logic firmware.
Meetings are ongoing with Q-logic engineering.
Are there concerns about impact on reliability or interoperability
with existing software, hardware &/or firmware?
See above.
List specific areas which need special attention.
Multiple host testing with shared bus and Q-logic focus
with Hsz70.
List other risks such as: insufficient time, too few people, limited
exposure and/or use and hardware concerns.
Project management needs to determine if QTV will test
V6.2-1H3 with V7.1. If this is the case, QTV will need
to allocate people resources.
|
483.7 | MX5 Evaluation | STAR::S_MARCHESANO | | Mon May 05 1997 14:59 | 237 |
| MX5 EVALUATION
==============
1.0 General Information
Name of project: MX5
Date of this evaluation: 25-MAR-1997
Engineering contact: KEN FOLLIEN
Evaluator: Chuck Poat
Purpose of Project:
This project will provide OpenVMS Alpha support for the MX5
personal workstation.
2.0 Hardware Description
- Is this a new piece of hardware or platform? Yes
- If the project is a DSR platform or adapter/controller enhancement
describe what the changes/enhancements are?
3.0 Type of Validation Effort and priority
- Will this project require a validation or is it an inuse project?
The MX5 project will require a validation effort.
- Project Priority for this release.
MX5 is a P1 for the Hailstorm release.
4.0 Development Status
What stage of Life of a Project (LOP) product development stage
is the project currently in? Put an X preceding the development stage.
Project Phase Status (%complete/not planned/n.a.)
------------- ------------------------------------
____ Problem Statement
____ Investigation
____ Draft Plan & Design
____ Detailed Project Planning
____ Detailed Design/Functional Spec
__X_ Implementation
____ Validation Testing
____ Assess Doneness
Which project documents are currently available? ("D"=Draft, "F"=Final)
State if the doc isn't applicable, or the info is in another plan.
Document Location/Comments
-------- -----------------
____ Investigation Report
____ Product Requirements
__F_ Project Plan star::docd$:[evax.miata.specs]pp_miata_x01.ps
____ Functional Specification
____ Design Specification
____ Development Plan
____ Defect Containment Plan
____ Unit Test Plans
____ Test/Validation Plan
____ Integration Test Plan
____ Final Qualification Plan
____ Doneness Criteria
____ Others (specify)
5.0 OpenVMS Changes Required for Support
List any facilities affected by this project and indicate if the
code is modified or new.
New/
Module Facility Description LOC Modified
error_routines_1e05.c cpu1e05 Miata Error handler N
io_support_1e05.c cpu1e05 Miata I/O support N
intr_disp_1e05.c cpu1e05 Miata Interrupt support N
init_1e05.bli cpu1e05 Init sys$cpu_routines_1e05 N
mchk1e05def.sdl cpu1e05 Machine check Def file N
io_support_1e.c sysloa Common I/O support N
pyxis_support.c sysloa PYXIS host support N
boot_support_pyxis.c bootdriver I/O support routines N
sys$icbm_1e.c iogen Autoconfig routines N
lan_boot.bli apb LAN boot mode M
pyxis.sdl lib PYXIS Chip Def File N
hwrpbdef.sdl lib HWRPB Def file M
dcdef.sdl statlet Adapter Type Def File M
>NOTE: Code is still preliminary and a count of the LOC would not be
accurate at this time.
6.0 Defect Containment Status
Briefly describe the defect containment activity by answering the
following questions. If there isn't any defect containment activity
planned or implemented please indicate that also. To estimate the project
defects use the following formula:
projected number of defects = # uncommented source code lines
                              -------------------------------
                                           100
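As an illustration only (not part of this evaluation form), a minimal Python sketch of the projection arithmetic above; the line count used is hypothetical, since the MX5 count is not yet final:

    def projected_defects(uncommented_loc, lines_per_defect=100):
        # One defect is assumed per 100 uncommented source lines, as above.
        return uncommented_loc / lines_per_defect

    print(projected_defects(1200))    # hypothetical 1200-line count -> 12.0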
NOTE: Defect containment information will be completed once the code for this
project is finalized.
Defect Identification Goals: % Completed to date
---------------------------- -------------------
%code inspected:
%code unit testing:
%code integration tested
%code(features?) validated
Defect Data
Estimated number of defects for this project:
# Defects found by
------------------
Design reviews/inspections
Code reviews/inspections
Unit Test
Integration Test
# of defects found to date
7.0 Configurations
Briefly describe or list the supported configurations/options for this
project.
- How will this project be supported?
The MX5 will be supported as a Client in a OpenVMS cluster
and as a standalone personal workstation.
There are no current plans for the MX5 to be supported as a server.
Mixed Architecture Cluster Impact:
- Could/will this project impact a Mixed Architecture VMScluster
features, performance, reliability or interoperability?
- Are there any firmware dependencies for this project?
Supported Options list:
Note this is a preliminary list and it is based on what UNIX is currently
supporting. I would expect this list to change before FRS.
Memory
32Mb MSP01-HB
64Mb MSP01-HC
128Mb MSP01-HD
Cache Memory
2Mb cache card MSC01-AA
Storage Adapters
PBXKP-BA
KZPAA-AA Fast Narrow SCSI
KZPBA-CA Ultra SCSI
TZK20-LK 2.3 Gb SCSI tape drive
TLZ09-LK 4.0/8.0 SCSI DAT drive
Network Adapters
PBXDA-AA
DE435-AA PCI Ethernet
DE450-CA PCI Ethernet (TW,AUI,TP)
DE500-XA,AA,BA,FA PCI 10/100 Fast Ethernet
DEFPA-AA PCI FDDI
Racore Token Ring
8.0 Risks
List any risks that may hinder this project during this release.
Hardware:
There are currently 2 known problems with the PYXIS chipset
which will require workarounds in software and console.
TBIA bug will require a software workaround for a
scatter/gather bug in the PYXIS chipset. There are no plans to
correct this bug in the chipset.
8K Page Boundary Bug. Console must provide a "Supported Device"
table which contains valid devices that can be configured in front
of the bridge on MX5 (primary slots).
Runtime driver support for the Ethernet and PHY chip mix
will require support for the following combinations:
Tulip Phy Chip
21142 DP83840
21142 DP83840A
21143 DP83840
21143 DP83840A
Since the Tulip is on the motherboard, QTV will require 2 platforms:
one with the 21142 Tulip and one with the 21143 Tulip.
Software:
The code required for this project is still preliminary. The
Hailstorm release cycle is short and it will be difficult for the
MX5 project to maintain this schedule.
Resources:
This project is at risk during Hailstorm because of the limited
number of available QTV Engineers, availability of Ultra SCSI hardware
and the number of Hailstorm projects.
9.0 Dependencies
List any dependencies that may have a negative impact on this project
during this release.
Hardware:
QTV will need Ultra SCSI adapters and disks for the MX5 validation.
Software:
Resources:
|
483.8 | Racore PCI Token Ring Evaluation | STAR::S_MARCHESANO | | Mon May 05 1997 15:00 | 204 |
| RACORE EVALUATION
=================
1.0 General Information
Name of project: RACORE PCI Token Ring
Date of this evaluation: 4-24-1997
Engineering contact: Dick Stockdale
Evaluator: Chuck Poat
Project Priority for this release: P1
Purpose of Project:
Ensure there are no adverse effects when the RACORE PCI Token Ring adapter
is used in place of the current Thomas Conrad TC4048 adapter.
2.0 Hardware Description
- Is this a new piece of hardware or platform? Yes
The RACORE adapter is identical to the Thomas Conrad TC4048 except
that a new PCI ID identifies the adapter as a RACORE rather than
TC4048 adapter.
- If the project is a DSR platform or adapter/controller enhancement
describe what the changes/enhancements are? NO
3.0 Type of Validation Effort and priority
- Will this project require a validation or is it an inuse project?
The RACORE project will be an INUSE PROJECT.
- If this is an inuse and does not require a validation plan, briefly
list the areas that will be target tested.
+ Mixed Token Ring configuration Thomas Conrad TC4048 and RACORE
adapters
+ RACORE module recognized by OpenVMS
+ Run UCX
+ run sys$lan_vcitest
+ DECnet Phase IV and DECnet Plus if supported
4.0 Development Status
What stage of Life of a Project (LOP) product development stage
is the project currently in? Put an X preceding the development stage.
Project Phase Status (%complete/not planned/n.a.)
------------- ------------------------------------
____ Problem Statement Complete
____ Investigation Complete
____ Draft Plan & Design NA
____ Detailed Project Planning Complete
____ Detailed Design/Functional Spec Complete
____ Implementation Complete
_X__ Validation Testing 0%
____ Assess Doneness 0%
Which project documents are currently available? ("D"=Draft, "F"=Final)
State if the doc isn't applicable, or the info is in another plan.
Document Location/Comments
-------- -----------------
_F__ Investigation Report STAR::DOCD$:[EVMS.PROJECTS]IR-RACORE.TXT
_F__ Product Requirements STAR::DOCD$:[EVMS.PROJECTS]IR-RACORE.TXT
_F__ Project Plan STAR::DOCD$:[EVMS.PROJECTS]IR-RACORE.TXT
_F__ Functional Specification STAR::DOCD$:[EVMS.PROJECTS]IR-RACORE.TXT
_F__ Design Specification STAR::DOCD$:[EVMS.PROJECTS]IR-RACORE.TXT
_NA_ Development Plan
_NA_ Defect Containment Plan
_NA_ Unit Test Plans
_NA_ Test/Validation Plan
_NA_ Integration Test Plan
_NA_ Final Qualification Plan
_F__ Doneness Criteria STAR::DOCD$:[EVMS.PROJECTS]IR-RACORE.TXT
____ Others (specify)
5.0 OpenVMS Changes Required for Support
List any facilities affected by this project and indicate if the
code is modified or new.
New/
Module Facility Description LOC Modified
No changes to any OpenVMS facilities required for support.
6.0 Defect Containment Status
Briefly describe the defect containment activity by answering the
following questions. If there isn't any defect containment activity
planned or implemented please indicate that also. To estimate the
project defects use the following formula:
projected number of defects = # uncommented source code lines
                              -------------------------------
                                           100
Defect Identification Goals: % Completed to date
---------------------------- -------------------
%code inspected: NA
%code unit testing: NA
%code integration tested NA
%code(features?) validated NA
Defect Data
Estimated number of defects for this project:
# Defects found by
------------------
Design reviews/inspections NA
Code reviews/inspections NA
Unit Test NA
Integration Test NA
# of defects found to date 1
This bug fix is the result of a problem reported by UNIX on a Rawhide.
7.0 Configurations
Briefly describe or list the supported configurations or options for this
project.
- How will this project be supported?
Mixed Architecture Cluster Impact:
- Could/will this project impact a Mixed Architecture VMScluster
features, performance, reliability or interoperability?
Are there any firmware dependencies for this project?
Supported Options/Configurations list:
8.0 Risks
List any risks for this project during this release.
Software:
Hardware: There is only a single RACORE module available for both
development and validation efforts. This will impact the
amount of time that the hardware is available to QTV.
Resources: Hailstorm is understaffed for the amount of work required
if we try to complete all the P0 and P1 projects currently planned for
this release.
9.0 Dependencies
List any dependencies that may have a negative impact on this project
during this release.
Software:
Hardware:
Resources:
10.0 Inputs to be considered for the Statement of Quality.
Use the following list of quality factors as a guide when writing
the statement of quality for this project.
efficiency - How well does the product respond? Is the
performance consistent with other (similar) aspects
of the system? If there are any explicit performance
requirements, are they met?
reliability - How reliable is the product when operated under
load or stress conditions? Does the product perform
consistently over an extended duration? Does the
product perform as expected under negative conditions?
(no data corruption, no innocent bystanders affected)
correctness - Does the product meet its requirements? Are
all supported functions present and working as expected
based on the product description and on the user
documentation?
integrity - Is access to data provided when appropriate? Is
access to data prevented when appropriate? Are there
any explicit security requirements on the product?
Are they met?
usability - How easy to use is the product? Does the ease of
use match expectations (based on experience with
similar products)? How does the product match up to
any explicit ease-of-use requirements?
interoperability - How does the product interact with other
production systems? Do all explicit interoperability
aspects of the product work as expected? How does the
product perform in a Mixed Architecture Cluster?
|
483.9 | DE450 Evaluation | STAR::S_MARCHESANO | | Mon May 05 1997 15:00 | 204 |
| DE450 EVALUATION
================
1.0 General Information
Name of project: DE450
Date of this evaluation: 4-24-1997
Engineering contact: Jay Stickney
Evaluator: Chuck Poat
Project Priority for this release: P1
Purpose of Project:
The current DE450 module uses components from a different vendor. There
are no functional changes to this module. The inuse project will verify
there are no adverse affects to network drivers or operating system with
this version of DE450.
2.0 Hardware Description
- Is this a new piece of hardware or platform? NO
- If the project is a DSR platform or adapter/controller enhancement
describe what the changes/enhancements are? No enhancement
3.0 Type of Validation Effort and priority
- Will this project require a validation or is it an inuse project?
Inuse only
- If this is an inuse and does not require a validation plan, briefly
list the areas that will be target tested.
The following areas will be targeted during Hailstorm release.
DECnet Phase IV
DECnet PLUS
Mop boots
ISL boots
4.0 Development Status
What stage of Life of a Project (LOP) product development stage
is the project currently in? Put an X preceding the development stage.
Project Phase Status (%complete/not planned/n.a.)
------------- ------------------------------------
____ Problem Statement NA
____ Investigation NA
____ Draft Plan & Design NA
____ Detailed Project Planning NA
____ Detailed Design/Functional Spec NA
____ Implementation NA
____ Validation Testing NA
____ Assess Doneness NA
Which project documents are currently available? ("D"=Draft, "F"=Final)
State if the doc isn't applicable, or the info is in another plan.
Document Location/Comments
-------- -----------------
____ Investigation Report NA
____ Product Requirements NA
____ Project Plan NA
____ Functional Specification NA
____ Design Specification NA
____ Development Plan NA
____ Defect Containment Plan NA
____ Unit Test Plans NA
____ Test/Validation Plan NA
____ Integration Test Plan NA
____ Final Qualification Plan NA
____ Doneness Criteria NA
____ Others (specify)
5.0 OpenVMS Changes Required for Support
List any facilities affected by this project and indicate if the
code is modified or new.
New/
Module Facility Description LOC Modified
There were no changes made to any VMS facilities for this project
6.0 Defect Containment Status
Briefly describe the defect containment activity by answering the
following questions. If there isn't any defect containment activity
planned or implemented please indicate that also. To estimate the
project defects use the following formula:
projected number of defects = # uncommented source code lines
                              -------------------------------
                                           100
Defect Identification Goals: % Completed to date
---------------------------- -------------------
%code inspected: NA
%code unit testing: NA
%code integration tested NA
%code(features?) validated NA
Defect Data
Estimated number of defects for this project:
# Defects found by
------------------
Design reviews/inspections NA
Code reviews/inspections NA
Unit Test 0
Integration Test NA
# of defects found to date 0
7.0 Configurations
Briefly describe or list the supported configurations or options for this
project.
- How will this project be supported?
There are no changes in current DE450 support.
Mixed Architecture Cluster Impact:
- Could/will this project impact a Mixed Architecture VMScluster
features, performance, reliability or interoperability?
There should be no risks to mixed architecture clusters with
this version of the DE450
Are there any firmware dependencies for this project?
No
Supported Options/Configurations list: No Changes
8.0 Risks
List any risks for this project during this release.
Software: None
Hardware: None
Resources: Hailstorm is understaffed for the amount of work required
if we try to complete all the P0 and P1 projects currently planned for
this release.
9.0 Dependencies
List any dependencies that may have a negative impact on this project
during this release.
Software: None
Hardware: None
Resources: None
10.0 Inputs to be considered for the Statement of Quality.
Use the following list of quality factors as a guide when writing
the statement of quality for this project.
efficiency - How well does the product respond? Is the
performance consistent with other (similar) aspects
of the system? If there are any explicit performance
requirements, are they met?
reliability - How reliable is the product when operated under
load or stress conditions? Does the product perform
consistently over an extended duration? Does the
product perform as expected under negative conditions?
(no data corruption, no innocent bystanders affected)
correctness - Does the product meet its requirements? Are
all supported functions present and working as expected
based on the product description and on the user
documentation?
integrity - Is access to data provided when appropriate? Is
access to data prevented when appropriate? Are there
any explicit security requirements on the product?
Are they met?
usability - How easy to use is the product? Does the ease of
use match expectations (based on experience with
similar products)? How does the product match up to
any explicit ease-of-use requirements?
interoperability - How does the product interact with other
production systems? Do all explicit interoperability
aspects of the product work as expected? How does the
product perform in a Mixed Architecture Cluster?
|
483.10 | DE500-BA Evaluation | STAR::S_MARCHESANO | | Wed May 14 1997 12:04 | 217 |
| From: EVMS::GLATFELTER "Glenn C. 381-1340 ZKO3-4/4Z05/S23 OpenVMS QTV 14-May-1997 1042" 14-MAY-1997 10:45:01.04
To: STAR::S_MARCHESANO
CC:
Subj: DE500-BA Eval
1.0 General Information
Name of project: DE500-BA ETHERNET ADAPTER
Date of this evaluation: 04-Apr-1997
Engineering contact: Jay Stickney, EVMS::JSTICKNEY
Evaluator: Glenn Glatfelter, QTV
Purpose of Project:
The purpose of this project is to test the DE500-BA as a replacement
for the DE500-AA (which is near EOL) with an in-use validation and
target-test boot support for 10mbit and 100mbit operating options.
2.0 Hardware Description
- Is this a new piece of hardware or platform? Yes
* Supported on OpenVMS Alpha Only
* DE500-BA PCI Ethernet NIC (21143PB Chip)
* 10mbit or 100mbit Fast Ethernet
* Supports NWAY (IEEE 802.3u Autonegotiation) with control registers
internal to the 21143 chip, requiring a different implementation of
the NWAY state machine in the driver when compared to the DE500-AA.
* 100Base-TX media connections (RJ-45) with maximum distance of
100 meters.
* New SYS$EWDRIVER_DE500BA.EXE driver for V7.1 (DE500BA.MAR)
* Added Boot Support to TPBTDRIVER.MAR (TULIP Bootdriver)
10mbits half-duplex and full duplex
100mbits half-duplex and full duplex
NWAY
{Console environment EW*0_MODE read from 21143 CFDA register}
{test NWAY with cable pulling, changes in link partner}
* Modifications to CPU Routines for APB
* SYS$EWBTDRIVER and APB modifications
{test multiple PCI-based platforms and NICs supported by this
driver: DE500-AA, DE435}
- If the project is a DSR platform or adapter/controller enhancement
describe what the changes/enhancements are?
N/A
3.0 Type of Validation Effort and priority
- Will this project require a validation or is it an in-use project?
In-use with target-testing of boot support.
In-use testing specifies the DE500-BA as the NI network interface
for cluster communications; in the case of a network failover it
should be the default NI device.
Boot support testing:
10mbit half and full duplex
100mbit half and full duplex
NWAY
boot APB to infoserver and install V7.1-1H1 kit
boot and install console firmware
- Project Priority for this release?
P1
4.0 Development Status
What stage of Life of a Project (LOP) product development stage
is the project currently in? Put an X preceding the development stage.
Project Phase Status (%complete/not planned/n.a.)
------------- ------------------------------------
____ Problem Statement 100%
____ Investigation 100%
____ Draft Plan & Design
____ Detailed Project Planning 100%
____ Detailed Design/Functional Spec 100%
_X__ Implementation 100%
____ Validation Testing 0%
____ Assess Doneness 80%
Which project documents are currently available? ("D"=Draft, "F"=Final)
State if the doc isn't applicable, or the info is in another plan.
Document Location/Comments
-------- -----------------
_F__ Investigation Report DE500BA_LOP, Jay Stickney
____ Product Requirements
_F__ Project Plan DE500BA_LOP, Jay Stickney
_F__ Functional Specification Available from Brent Hiller, Network
Adapter Business Group
_F__ Design Specification Available from Brent Hiller, Network
Adapter Business Group
____ Development Plan DE500BA_LOP, Jay Stickney
_F__ Defect Containment Plan DE500BA_LOP, Jay Stickney
____ Unit Test Plans
____ Test/Validation Plan
____ Integration Test Plan
____ Final Qualification Plan
_F__ Doneness Criteria DE500BA_LOP, Jay Stickney
____ Others (specify)
5.0 OpenVMS Changes Required for Support
List any facilities affected by this project and indicate if the
code is modified or new.
New/
Module Facility Description LOC Modified
SYS$EWDRIVER_DE500BA.EXE LAN DE500-BA Exe New V7.1
DE500BA.MAR LAN DE500-BA Source Modified
SYS$EWBTDRIVER.EXE BOOTDRIVER BOOTDRIVER Exe Modified
TPBTDRIVER.MAR BOOTDRIVER BOOTDRIVER Source Modified
APB.EXE CPU0* APB APB CPU Routines Modified
6.0 Defect Containment Status
Briefly describe the defect containment activity by answering the
following questions. If there isn't any defect containment activity
planned or implemented please indicate that also. To estimate the
project defects use the following formula:
DE500BA.MAR = 150
TPBTDRIVER.MAR = 350
CPU0*.* = 10
---------------------
Total = 510
projected number of defects = # uncommented source code lines
                              -------------------------------
                                           100

                              510
projected number of defects = --- = 5
                              100
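For illustration only, the same arithmetic as a small Python sketch, using the per-module uncommented-line counts quoted above:

    # Illustrative sketch of the DE500-BA projection worked out above.
    uncommented_loc = {
        "DE500BA.MAR":    150,
        "TPBTDRIVER.MAR": 350,
        "CPU0*.*":         10,
    }
    total = sum(uncommented_loc.values())     # 510
    print(total, total // 100)                # 510 lines -> 5 projected defects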
Defect Identification Goals: % Completed to date
---------------------------- -------------------
%code inspected: 0%
%code unit testing:
%code integration tested
%code(features?) validated
Defect Data
Estimated number of defects for this project:
# Defects found by
------------------
Design reviews/inspections
Code reviews/inspections
Unit Test
Integration Test
# of defects found to date
7.0 Configurations
Briefly describe or list the supported configurations or options for this
project.
- How will this project be supported?
Alpha PCI based platforms
Mixed Architecture Cluster Impact:
- Could/will this project impact a Mixed Architecture VMScluster
features, performance, reliability or interoperability?
By using an EtherWorks Switch 8T/TX a Mixed Architecture
VMScluster can be configured since 8 ports are available
that support 10mbit, 100mbit, NWAY, half or full duplex.
Are there any firmware dependencies for this project?
A new console will be required and can be found at:
may21::wrk:[mopload]t49_*update.sys
(* = SB for Sable, SL for LYNX, etc)
Supported Options list:
8.0 Risks
List any risks that may hinder this project during this release.
Software:
Hardware:
Resources:
9.0 Dependencies
List any dependencies that may have a negative impact on this project
during this release.
Software:
Hardware: Mixed Architecture Cluster tests can only be done if
the EtherWorks 8T/TX is loaned to QTV by Network Eng.
(Jay Stickney).
Resources:
|
483.11 | K2 Testing in Hailstorm | STAR::S_MARCHESANO | | Fri May 16 1997 14:40 | 58 |
| From: EVMS::POAT 16-MAY-1997 13:23:17.73
To: SUSAN
CC: BOB,GLENN,POAT
Subj: K2 tests for hailstorm
K2 was a Gryphon project, but because of problems with K2 in Gryphon and
common code with TAKARA in Hailstorm, QTV will perform the target tests
and validation tasks listed below. These tasks will be used as a means to
verify the functionality and stability of the K2 platform.
QAR Target Tests
GRYPHON QARs
QAR # St Sev Pub Cat Maintainer Component T Entered-by Date in Answer
----- -- --- --- --- ------------ ---------- - ------------ ----------- ------
02913 CL S Yes FN ADLER K2 M ADLER 8-NOV-1996 11-NOV
<66> CANNOT BOOT OR MOUNT CD IN RRD45 DRIVE ON K2/LEGO
QAR # St Sev Pub Cat Maintainer Component T Entered-by Date in Answer
----- -- --- --- --- ------------ ---------- - ------------ ----------- ------
02987 CL H Yes FN ADLER K2 M ADLER 4-DEC-1996 23-DEC
<67> K2/LEGO CRASHES WHEN BOOTING INTO A CLUSTER
RAVEN QARs
QAR # St Sev Pub Cat Maintainer Component T Entered-by Date in Answer
----- -- --- --- --- ------------ ---------- - ------------ ----------- ------
00681 OP H Yes UI YHONG K2 I QAR_INTERNAL 22-JAN-1997 30-JAN
FAIL RECOGNIZING EXTERNAL DEVICE VIA SCSI CONTROLLER/PCI
K2 Hailstorm validation Tasks
Standalone K2 Testing
- Build a local system disk from CD when bootable CD is available.
- ISL system disk builds with NI and FDDI
- DECnet Plus
- Configure K2 system with available options from the supported
options list. Test adapters in front of and behind the bridge.
- QTV Regression tests
- CPU Hoops
- I/O and CPU/memory test with local devices
TEG TOOLS
CTM
SITP
Faulty Towers Testing with CTM
K2 Client Testing
- Add K2 to existing cluster
- Use both NI and FDDI to boot into cluster
- Boot/crash dump testing over NI and FDDI
Configure NI and FDDI adapters in front and behind bridge
- I/O load testing
MSCP/TMSCP serve disks and tapes to cluster members
- Tape and disk backups/restores
|
483.12 | DSM Evaluation | STAR::S_MARCHESANO | | Mon May 19 1997 09:59 | 270 |
| From: STAR::OVERMAN "Mimi Overman DTN:381-1380 ZK03-4" 16-MAY-1997 15:55:30.75
To: S_MARCHESANO
CC: OVERMAN
Subj: DSM evaluation
1.0 General Information
Name of project: OpenVMS DSM MIB Server for Hailstorm
Date of this evaluation: 4/23/97
Engineering contact: Tony Payne Mgr. (STAR::TPAYNE)
Eric Abis MIB
Paul Jacobi ($GETSYI)
Evaluator: Glenn Glatfelter (Mimi Overman)
Purpose of Project:
The purpose of this project is to provide an in-use validation for the
DSM MIB server in an OpenVMS V7.1-1H1 mixed cluster environment. Usage
will include installation on supported platforms and testing using
available network browsers such as Netview and/or ServerWorks.
2.0 Hardware Description
The Digital Server Extended Management Information Database (DSM MIB)
provides a means to remotely manage hardware using a DSM MIB Subagent
that interfaces to the SNMP Agent over a UCX V4.1 (DEC TCP/IP)
network.
The DSM MIB Server for OpenVMS Alpha is a port from UNIX and will
support 151 MIB Objects out of a total of 191 supported by UNIX.
A DSM SUBAGENT runs as a detached process and interfaces to the
SNMP AGENT using socket connections. An SNMP request for a DSM object
is serviced by the DSM SUBAGENT, which maintains and reports object
information. The DSM SUBAGENT responds using the new $GETSYI
routine.
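To picture the flow just described, a deliberately simplified Python sketch follows; it is NOT the eSNMP interface the real C subagent is built against, and the port number, object names, and values are invented for illustration:

    # Toy illustration of the subagent flow described above: a detached
    # process listens on a local socket, receives a request naming a DSM
    # object, looks up a value (the real code calls the new $GETSYI item
    # codes), and replies.  All names and numbers below are invented.
    import socket

    FAKE_MIB = {"svrSystemName": "NODE01", "svrPowerSupplyCount": "2"}

    def serve(port=7777):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                name = conn.recv(256).decode().strip()
                conn.sendall(FAKE_MIB.get(name, "noSuchObject").encode())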
- Use ServerWorks and Netview to test supported DSM MIB Objects
- UCX V4.1 will be modified to include the DSM MIB Server.
- Run DSM MIB Server Regression Tests (if available).
- Is this a new piece of hardware or platform?
No, this is a software application, but it needs to be tested on
TurboLaser, Corelle, Rawhide, and Noritake.
- If the project is a DSR platform or adapter/controller enhancement
describe what the changes/enhancements are?
NA
3.0 Type of Validation Effort and priority
- Will this project require a validation or is it an in-use project?
In-Use
- Project Priority for this release?
P1
4.0 Development Status
What stage of Life of a Project (LOP) product development stage
is the project currently in? Put an X preceding the development stage.
Project Phase Status (%complete/not planned/n.a.)
------------- ------------------------------------
____ Problem Statement
____ Investigation
____ Draft Plan & Design
____ Detailed Project Planning
____ Detailed Design/Functional Spec
_X__ Implementation 80% Complete
____ Validation Testing
____ Assess Doneness
Which project documents are currently available? ("D"=Draft, "F"=Final)
State if the doc isn't applicable, or the info is in another plan.
Document Location/Comments
-------- -----------------
(*) documents located at \\star\pccommon\dsm
____ Investigation Report
____ Product Requirements
_F__ Project Plan (*)
OpenVMS Digital Server MIB Project Plan
Revision/Update V1.3
Sept ??, 1996
V1.3 Update by Tony Payne March 28, 1997
_F__ Functional Specification (*)
OpenVMS Digital Server Extension MIB Subagent
Function Specification Revision/Update V1.1
04/03/97
_F___ Design Specification (*)
OpenVMS Digital Server MIB Design Specification
V1.0 Sept 27, 1996
____ Development Plan
____ Defect Containment Plan
____ Unit Test Plans
____ Test/Validation Plan
____ Integration Test Plan
____ Final Qualification Plan
____ Doneness Criteria
_F__ Others (specify)
(*) FRU Management
July 20,1993
5.0 OpenVMS Changes Required for Support
List any facilities affected by this project and indicate if the
code is modified or new.
New/
Module Facility Description LOC Modified
SVRSYSTEM_METHOD.C MIB HAILSTORM_RESD:[MIB] PORTED
SVRSYSTEM_METHOD_FRU.C MIB PORTED
SVRSYSTEM_MIB.C MIB PORTED
SVRSYSTEM_TBL.C MIB PORTED
SVRSYS_FRU.C MIB PORTED
SVRSYS_FRU_OSS.C MIB PORTED
SVRSYS_FRU_PARSE.C MIB PORTED
ESNMP.H MIB PORTED
FRU_TABLE.H MIB PORTED
SVRBASESYSTEM_TBL.H MIB PORTED
SVRSYSTEM.H MIB PORTED
SVRSYSTEM_TBL.H MIB PORTED
SVRSYS_FRU.H MIB PORTED
$GETSYI New
6.0 Defect Containment Status
Briefly describe the defect containment activity by answering the
following questions. If there isn't any defect containment activity
planned or implemented, please indicate that also. To estimate the
project defects use the following formula:
                                                   5 defects
projected number of defects = #_uncommented_LOC X ------------
                                                      1000
Code Defect Factors (defects per 1000 uncommented lines)
New Code    = 20  ($GETSYI)
Ported Code =  5  (MIB)
MIB Code Info (excluding $getsyi)
Total Lines of code: 11107
Total Lines of Uncommented Code: 6037
projected number of defects = 6037 * 5 / 1000 = 30
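Purely as an illustration, the factor-weighted projection above as a short Python sketch; the 6037-line MIB count comes from the table at the end of this note, and the $GETSYI line count is not given here, so 0 is used as a placeholder:

    # Illustrative sketch of the weighted projection described above.
    factors     = {"ported": 5,    "new": 20}   # defects per 1000 uncommented lines
    uncommented = {"ported": 6037, "new": 0}    # MIB (ported); $GETSYI count unknown
    projected   = sum(uncommented[k] * factors[k] / 1000.0 for k in factors)
    print(round(projected))                     # -> 30 for the ported MIB code alone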
Defect Identification Goals: % Completed to date
---------------------------- -------------------
%code inspected: 0%
%code unit tested: 0%
%code integration tested 0%
%code(features?) validated 0%
Defect Data - Estimated number of defects for this project:
# Defects found by
------------------
Design reviews/inspections 0
Code reviews/inspections 0
Unit Test 0
Integration Test 0
# of defects found to date 0
7.0 Configurations
Briefly describe or list the supported configurations or options for this
project.
TURBOLASER
CORELLE
NORITAKE
RAWHIDE
- Could/will this project impact a Mixed Architecture VMScluster
features, performance, reliability or interoperability?
No
Are there any firmware dependencies for this project?
All platforms should be at the latest firmware release.
Supported Options list:
UCX V4.1/SNMP
ServerWorks (Supplied on Server CD)
Netview
151 DSM MIB Objects
8.0 Risks
List any risks that may hinder this project during this release.
Software:
Hardware: New console code for TurboLaser not ready yet.
Resources: May or may not make it into HS release due to completion
of MIB code.
9.0 Dependencies
List any dependencies that may have a negative impact on this project
during this release.
Software:
ServerWorks
Netview
DSM MIB Regression Tests? (Paul Jacobi)
UCX V4.1
Hardware:
TLASER, CORELLE, NORITAKE, RAWHIDE Platforms
Resources:
Mimi Overman will be the DRI to conduct the in-use testing
and platforms will be made available.
MIB Facility Lines of Code
--------------------------
FACILITY MIB
Module Ext Total LOC LOC w/o Comments
---------------- --- --------- ----------------
SVRSYSTEM_METHOD C 1094 570
SVRSYSTEM_METHOD_FRU C 2783 1406
SVRSYSTEM_MIB C 323 159
SVRSYSTEM_TBL C 614 469
SVRSYS_FRU C 1576 474
SVRSYS_FRU_OSS C 108 22
SVRSYS_FRU_PARSE C 1112 627
--------- ----------------
SubTotal 7610 3727
ESNMP H 391 197
FRU_TABLE H 329 261
SVRBASESYSTEM_TBL H 1167 835
SVRSYSTEM H 290 121
SVRSYSTEM_TBL H 1167 835
SVRSYS_FRU H 160 61
--------- ----------------
SubTotal 3504 2310
--------- ----------------
Total 11114 6037
|
483.13 | HAILSTORM Validation Schedule | STAR::S_MARCHESANO | | Thu May 29 1997 15:04 | 46 |
| Below is the QTV Validation schedule for Hailstorm. If you have any
questions, please contact Susan Marchesano.
QTV HAILSTORM Validation Schedule
1 Pre-EFT Validation Testing 31d 5/9/97 6/20/97
2 Limited P1SE+/P2SE+ Validation 19d 5/9/97 6/4/97 BYoung
3 Ultra SCSI Val. (config #1) 19d 5/9/97 6/4/97 BYoung
4 TAKARA Initial Checkout 5d 5/13/97 5/19/97
5 TAKARA Standalone Validation 7d 5/20/97 5/28/97 4 Glatfelter
6 MX5 Initial Checkout 6d 5/9/97 5/16/97 Poat
7 MX5 Standalone Val. 25d 5/19/97 6/20/97 6 Poat
8 K2/Lego Validation 1d 5/13/97 5/13/97
9 DSM Validation 25d 5/13/97 6/16/97 Overman
10 Racore Token Ring Val. 15d 5/13/97 6/2/97
11 BL3 Regression Testing 5d 5/19/97 5/23/97 Elser
12 EFT Matrix Planning 10d 5/19/97 5/30/97 Jancsy
13 EFT Qual Cycle 22d 5/16/97 6/16/97
14 EFT Code Freeze 0d 5/16/97 5/16/97
15 EFT Build 5d 5/16/97 5/22/97 14
16 EFT Baselevel Available 0d 5/23/97 5/23/97 15
17 EFT Regression Testing 5d 5/23/97 5/29/97 16
18 Freeze EFT Qual Config's 0d 5/23/97 5/23/97 16
19 EFT Pre-qual 6d 5/23/97 5/30/97 18 "BYoung,Glatfelter,Poat,Jancsy"
20 EFT Qual 11d 6/2/97 6/16/97 19 "BYoung,Glatfelter,Poat,Elser,Jancsy"
21 Post EFT Validation 32d 6/17/97 7/30/97 20
22 Receive DOC Mod's & Ultra Disks 0d 6/17/97 6/17/97
23 P1SE+/P2SE+ Validation 24d 6/17/97 7/18/97 22 "Glatfelter,Poat"
24 Ultra SCSI Val. Config #2 13d 6/17/97 7/3/97 BYoung
25 Ultra SCSI Val. Config #3 13d 7/4/97 7/22/97 24 BYoung
26 TAKARA Cluster val. 9d 6/17/97 6/27/97 Glatfelter
27 ATMworks 351 ? 1d 7/28/97 7/28/97 "Glatfelter,Poat"
28 MX5 Cluster Val. 26d 6/23/97 7/28/97 Poat
29 DE500 Validation 1d 7/30/97 7/30/97
30 Target Test Post-EFT Checkins 19d 6/24/97 7/18/97 "BYoung,Glatfelter,Poat"
31 BL? Regression Testing 5d 7/9/97 7/15/97 Elser
32 SSB Matrix Test Planning 10d 7/9/97 7/22/97 Jancsy
33 SSB Qual Cycle 20d 7/18/97 8/14/97
34 SSB Code Freeze 0d 7/18/97 7/18/97
35 SSB Kit Build 3d 7/18/97 7/22/97 34
36 Freeze SSB Qual Config's 0d 7/22/97 7/22/97 35
37 SSB Pre-Qual 5d 7/23/97 7/29/97 36 "BYoung,Glatfelter,Poat,Jancsy"
38 SSB Kit Final Qual 11d 7/30/97 8/13/97 37 "BYoung,Glatfelter,Poat,Jancsy,Elser"
39 Target Test late SSB checkins 20d 7/18/97 8/14/97
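For reference, a small Python sketch of how the "Nd" durations above map to calendar dates, under the assumption (which matches, e.g., task 1 and task 39) that durations count working days inclusive of the start date:

    # Sketch only.  Assumption: "Nd" durations count working days (Mon-Fri),
    # inclusive of the start date; this matches e.g. task 1 (31d, 5/9/97-6/20/97)
    # and task 39 (20d, 7/18/97-8/14/97).
    import datetime

    def end_date(start, workdays):
        day, counted = start, 0
        while True:
            if day.weekday() < 5:             # Monday..Friday
                counted += 1
                if counted == workdays:
                    return day
            day += datetime.timedelta(days=1)

    print(end_date(datetime.date(1997, 5, 9), 31))    # 1997-06-20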
|
483.14 | MX5 Validation Plan | EVMS::POAT | | Tue Jun 03 1997 11:47 | 724 |
|
Chuck Poat
OpenVMS Quality Test & Validation
22-May-1997
Reviewers:
Susan Marchesano
Glenn Glatfelter
Bob Young
Pete Michaud
Curt Spacht
Ken Follien
Revisions: 1.0 2-Jun-1997
X.1 Initial Version
1.0 Updated with inputs from MX5 Validation Plan Review
MX5 Validation Plan for OpenVMS V7.1-1H1
1.0 Goal:
The goal of this effort is to validate the quality of the MX5
system components, identify must fix software defects, and make a
statement of quality relative to the quality goals for the MX5
project and the V7.1-1H1 release.
It is estimated that there are 5 defects introduced by this
project, and the goal of this plan is to identify 3 of these
defects. This number was determined by tracking and projecting the
project size along with the number of defects to be identified.
Refer to the STAR::ALPHA_VERIFICATION notesfile, note 354.5 for the
Defect Projection process.
Projected number of defects = 5
5238 (# uncommented source code lines)
-------------------------------------- = Projected Defects
                 1000
To date, 2 must fix defects have been identified during the MX5
effort.
2.0 Problem Statement:
Successful completion of this project will provide OpenVMS support
for the MX5 Personal Workstation Platform.
The MX5 is a Personal Workstation based on an EV56 CPU. There are
no plans at this time to support the MX5 as a server.
Page 2
Support for the MX5 in V7.1-1H1 required modifications to existing
routines and new routines. These routines include:
New Code
error_routines_1e05.c cpu1e05 Miata Error handler
io_support_1e05.c cpu1e05 Miata I/O support
intr_disp_1e05.c cpu1e05 Miata Interrupt support
init_1e05.bli cpu1e05 Init sys$cpu_routines_1e05
mchk1e05def.sdl cpu1e05 Machine check Def file
io_support_1e.c sysloa Common I/O support
pyxis_support.c sysloa PYXIS host support
boot_support_pyxis.c bootdriver I/O support routines
sys$icbm_1e.c iogen Autoconfig routines
pyxis.sdl lib PYXIS Chip Def File
Modified Code
lan_boot.bli apb LAN boot mode
hwrpbdef.sdl lib HWRPB Def file
dcdef.sdl statlet Adapter Type Def File
2.1 Quality Goals:
The following list of quality goals will be used as guidelines by
QTV to judge the quality of the MX5 platform during the Hailstorm
validation effort.
Reliability - How reliable is the MX5 when operated under
load or stress conditions? Will the MX5 perform
consistently over an extended duration?
Correctness - Are all supported functions present and working as
expected based on the product description and on the
user documentation?
Interoperability - Are there any interoperability problems between
the MX5 CPU and its supported options?
Integrity - Do the MX5 and the options listed in section 5.0
introduce any data integrity problems during normal
operation or during fault recovery?
Usability - How easy to use is the MX5? Does the ease of
use match expectations (based on experience with
similar products)?
Efficiency - Is the
performance consistent with other (similar) aspects
of the system? If there are any explicit performance
requirements, are they met?
Page 3
2.2 Areas of Risks:
- Lan Driver Support
The MX5 will support 2 Ethernet chip configurations: 21142 and 21143.
Ethernet Chip Physical Chip
21142 DP83840 or DP83840A
21143 DP83840 or DP83840A
The boot and runtime drivers need to be verified for all four
combinations during the validation.
- Adequate MX5 testing with external cache either 2MB or 4MB.
QTV's current MX5 proto does not have external cache installed.
Based on prior platform testing, CPU cache configurations have
been problematic.
- Adequate testing with 500Mhz MX5 CPUs. MX5 will support 500Mhz
in Hailstorm. QTV's current proto is the 433Mhz CPU configuration.
- MAX memory testing with 384MB. Current configuration has 64MB.
- The MX5 schedule and test requirements are based on 2 platforms,
a 21142 and a 21143 based platform. QTV's proto is a 21142; delayed
delivery of the second platform may extend validation efforts past
the Hailstorm SSB date.
2.3 Known Problems With Workarounds In Place
TBIA Bug
The TBIA bug is a scatter/gather mapping problem with the PYXIS
chipset. A software workaround is in place for this problem.
The workaround will be verified during the validation using
Platform Configurations #1 and #2 and I/O stress loads.
8K Page Boundary Bug
The console workaround for the 8K Page Boundary Bug is a "SUPPORTED
DEVICE Table" in console. This structure will be used by console to
determine valid devices that can be configured in front of the
bridge in the primary PCI slots.
The workaround will be target tested by configuring supported and
unsupported devices in front of and behind the bridge using the
test configurations shown on Page 7.
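As a rough sketch only of the check described above (the device names and table contents here are invented; the real table lives in the MX5 console):

    # Hypothetical sketch of the console-side "Supported Device" check: a
    # device may be configured in a primary (in-front-of-the-bridge) PCI
    # slot only if it appears in the table.  Entries below are invented.
    SUPPORTED_PRIMARY_DEVICES = {"KZPBA-CA", "KZPSA-BB", "DEFPA-AA"}

    def can_configure_primary(device_id):
        return device_id in SUPPORTED_PRIMARY_DEVICES

    print(can_configure_primary("KZPBA-CA"))    # True
    print(can_configure_primary("XYZ-123"))     # False with this invented table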
2.4 Non-Goals:
The OpenVMS QTV group will not be responsible for the following
areas during this validation effort:
- Layered product support testing
- Benchmarking and OpenVMS Alpha performance measurements
- Configuration testing of all MX5 options
- Valid device matrix testing for MX5 Console support
- Graphics Testing
- ISA device testing
- MAX I/O adapter configuration testing
Page 4
3.0 Tactics:
The MX5 validation will be done in a building block fashion where
each configuration tested builds a stable foundation or environment
for the next test configuration. The makeup of the
configurations will depend on hardware availability and the
requirements of the MX5 program. The planned configurations and
validation tasks for each configuration are detailed below.
Detailed MX5 system configurations, console revision levels, adapter
revision levels and cluster configurations used during the validation
will be tracked in the ALPHA_VERIFICATION notes file.
DECnet Phase IV and DECnet Plus will be used during the validation.
- DECnet Phase IV will be used with sections 3.1 and 3.2.1
- DECnet Plus will be used with sections 3.2.2 and 3.3.1
- LANCP and DECnet Phase IV will be used for the boot testing
described in section 3.3.2
3.1 Initial MX5 Testing
The initial MX5 testing will be done using the I/O and boot
configuration shown on page 7. The basic test functions that will
be validated are listed below.
3.1.1 AdHoc test basic console functionality
3.1.2 System disk builds from InfoServer
Tasks:
a. ISL boot with 21142 network adapter to KZPSA-AA disk PCI slot #4
ISL boot with 21143 network adapter to KZPBA-CA disk PCI slot #5
3.1.3 UETP Run
Tasks:
a. Overnight run with UETP Device, Load, and DECnet Tests
3.1.4 QTV Regression tests
Tasks:
a. Regression tests
- SO_Run
- KTHREADS
- Sys_tester
- Systest
b. CPU_Hoops Level 2
3.1.5 Boot/crash Tests
Tasks:
a. System boots in front of and behind the bridge:
Boot KZPAA-AA system disk in PCI slot #3
Boot KZPBA-CA system disk in PCI slots #2 and #5
b. Crash dumps in front of and behind the bridge:
KZPAA-AA System disk
Write dump to system disk PCI slot #3
DOSD to KZPAA-AA disk slot #1
DOSD to KZPBA-CA Ultra SCSI disk slots #2
Page 5
KZPBA-CA System disk
Write dump to system disk slot #5
DOSD to KZPAA-AA SCSI disk slot #3
DOSD to KZPBA-CA Ultra SCSI disk slot #2
3.2 Standalone System Testing CPU/Memory and I/O
The Standalone System test configuration uses I/O and Boot
configuration shown on Page 7. The focus of the standalone testing
will be with a KZPBA-CA system disk. A cursory system level test
will be done with a KZPAA-AA system disk.
3.2.1 KZPBA-CA system Disk
Configuration:
o Shadowed system disk across PCI bus #2 and #5
o Page and swap file on PCI bus #3
o Shadowed data disks across PCI bus #3 & #1 and
PCI bus #2 & #5.
o TLZ09 configured on PCI bus #3
Tasks:
a. System stress testing. CPU/Memory and I/O
- CTM
- SITP
- TEG Tools
- XQPXR
b. Image backups disk to disk across PCI buses
c. Image backups/restores disk to tape across PCI buses
d. Image backups/restores tape to disk across PCI buses
e. Verify shadow set members by doing a backup/physical/compare
after extended runs
f. Exception tests
Faulty towers with CTM background load
SCSI bus reset program during periods of I/O loading
3.2.2 KZPAA-AA system disk
Configuration:
o Shadowed system disk across PCI bus #3 and #4
o Page and swap file on PCI bus #2
o Shadowed data disks across PCI bus #3 & #4 and
PCI bus #2 & #1.
o TLZ09 configured on PCI bus #4
Tasks:
a. System stress testing: I/O CPU and Memory
o CTM
b. Verify shadow set members by doing a backup/physical/compare
after extended runs.
c. Exception tests
Faulty towers with CTM background load
SCSI bus reset program during periods of I/O loading
3.3 MX5 Cluster Testing
The MX5 will be configured as a client ONLY during the cluster
phase of the evaluation. Configurations with a local system disk
and an MSCP served system disk with local page and swap will be tested.
Page 6
3.3.1 Cluster member local system disk
Test configuration using I/O and Boot configuration on Page 7.
Configuration:
o Shadowed system disk on PCI bus #5 and #2
o Page and swap file on PCI bus #3
o MSCP served shadow sets across PCI bus #3 & #1
o MSCP served data disks PCI buses #2, #3, and #4
o TMSCP served TLZ09 configured on PCI bus #3
o cluster inter-connect 21142 and 21143
Tasks:
a. System stress testing CPU/Memory and I/O
CTM
SITP
TEG Tools
XQPXR
b. Verify shadow set members by doing a backup/physical/compare
after extended runs
c. Exception tests
Faulty towers with CTM background load
d. ATM test suites
3.3.2 Cluster member served system disk with local Page and Swap
Test configuration using Cluster system configuration
on Page 7
Configuration:
o Local Page and swap on PCI bus #2
o MSCP served shadow sets across PCI bus #3 & #1 and
either PCI bus #2 or #5.
o MSCP served data disks PCI buses #2, #3 and either
PCI slots #1 or #5
o TMSCP served TLZ09 configured on PCI bus #3
o cluster inter-connect DE500-xx and DEFPA configured in
PCI slots #1 and #5
o cluster inter-connects with native 21142, and 21143.
Tasks:
a. System stress testing: I/O CPU and Memory
CTM
SITP
TEG Tools
XQPXR
b. Verify shadow set members by doing a backup/physical/compare
after overnight runs
c. Exception tests
Faulty towers with CTM background load
SCSI bus reset while running I/O loads
d. ISL boot with DEFPA in PCI slots #5 and #1 and disk on
PCI bus #3
e. ISL boot with DE500-xx in PCI slots #5 and #1 and disk on
PCI bus #2
f. Remote image backups/restores with TLZ09 while under cluster
I/O load
g. System boots during periods of cluster I/O loading
with DEFPA, DE500-xx, 21142, and 21143.
h. Crash dumps during periods of cluster I/O loading
with DEFPA, DE500-xx, 21142, and 21143.
Page 7
4.0 MX5 Test Configurations
The MX5 PCI bus layout is shown below along with the two system
configurations that will be used for the validation effort.
Configuration 1 will be used for standalone I/O and boot
configuration testing. Configuration 2 will be used for system
testing in a cluster environment.
MX5 Bus Layout
                                        shared
Slot  SBU Riser          width          with ISA
----  -----------------  -----          --------
1     secondary, 32-bit  half           no
2     secondary, 32-bit  half           yes
3     secondary, 32-bit  half           yes
4     primary, 64-bit    full           yes
5     primary, 64-bit    full           no
Configuration Number 1: I/O and Boot Configuration
MX5 PCI Slot
1 KZPBA-CA or KZPAA-AA or ATM Adapter
2 KZPBA-CA
3 KZPAA-AA
4 Graphics Option or KZPBA-CA or ATM Adapter
5 Graphics Option or KZPBA-CA or ATM Adapter
Configuration Number 2: Cluster System Configuration
MX5 PCI Slot
1 Network Adapter or KZPAA-AA
2 KZPBA-CA
3 KZPAA-AA
4 Graphics Option
5 Network Adapter or KZPBA-CA
5.0 MX5 Validation Options
The options listed below are the options that will be used by QTV
during the validation.
Disk Adapters
KZPBA-CA
KZPAA-AA
Network Connections
DE500-AA
DE500-BA
DEFPA-AA
DE450-CA
Racore Token Ring
6.0 Deliverables:
- MX5 validation plan
- Execute tests, modification of scripts, analysis of test results
- Find/Report defects as early as possible in the release (Qar's)
in the Hailstorm QAr Database
- Ensure the MX5 project meets the Release Criteria for the release
- MX5 Statement of Quality
Page 8
7.0 Schedule:
This schedule reflects the QTV validation effort for MX5 during the
Hailstorm release. Dates are based on current project estimates and
will be updated as necessary.
Scheduled Scheduled Actual Actual
TASKS Start End Start End
Initial MX5 Testing 9-May 16-May
Standalone System Testing 19-May 20-June
EFT Pre_Qual 23-May 30-May
EFT Qual 2-June 16-June
Clustered Local System disk 23-June 8-July
Clustered Served System disk 9-July 22-July
SSB Pre_Qual 22-July 28-July
SSB Qual 29-July 13-Aug
8.0 Staffing:
- 1 full-time QTV engineer required to complete the work.
9.0 Hardware requirements:
- 1 MX5 platform with a 21142
- 1 MX5 platform with a 21143
- DP83840 and DP83840A physical Ethernet chip
- MAX memory configuration for at least 1 platform
- 4MB or 2MB external cache
- 500 Mhz CPU
- additional Ultra SCSI disk drives and enclosures
10.0 Dependencies:
- The validation schedule is based on the assumption that
QTV will have 2 MX5 platforms during the Hailstorm release
- Timely resolution of hardware and software bugs.
- Matrix testing of supported adapters in the "MX5 Primary Slots"
by the MX5 Engineering team.
11.0 Risks:
- Reduced validation time in Hailstorm because of MX5 hardware
instability.
- Reduced validation time in Hailstorm because of the late
availability of the following MX5 hardware:
Max memory configuration
Cache Memory 2MB or 4MB
500Mhz CPU
21143 based MX5 platform
- Limited availability of Ultra SCSI hardware.
- Delays in hardware upgrades on QTV proto
Page 9
12.0 MX5 Evaluation
Name of project: MX5
Date of this evaluation: 25-MAR-1997
Engineering contact: KEN FOLLIEN
Evaluator: Chuck Poat
Purpose of Project:
This project will provide OpenVMS Alpha support for the MX5
personal workstation.
2.0 Hardware Description
- Is this a new piece of hardware or platform? Yes
- If the project is a DSR platform or adapter/controller enhancement,
describe the changes/enhancements.
3.0 Type of Validation Effort and priority
- Will this project require a validation or is it an in-use project?
The MX5 project will require a validation effort.
- Project Priority for this release.
MX5 is a P1 for the Hailstorm release.
4.0 Development Status
Which Life of a Project (LOP) product development stage
is the project currently in? Put an X preceding the development stage.
Project Phase Status (%complete/not planned/n.a.)
------------- ------------------------------------
____ Problem Statement
____ Investigation
____ Draft Plan & Design
____ Detailed Project Planning
____ Detailed Design/Functional Spec
__X_ Implementation
____ Validation Testing
____ Assess Doneness
Which project documents are currently available? ("D"=Draft, "F"=Final)
State if the doc isn't applicable, or the info is in another plan.
Document Location/Comments
-------- -----------------
____ Investigation Report
____ Product Requirements
__F_ Project Plan star::docd$:[evax.miata.specs]pp_miata_x01.ps
____ Functional Specification
____ Design Specification
____ Development Plan
____ Defect Containment Plan
____ Unit Test Plans
____ Test/Validation Plan
____ Integration Test Plan
____ Final Qualification Plan
____ Doneness Criteria
____ Others (specify)
5.0 OpenVMS Changes Required for Support
List any facilities affected by this project and indicate if the
code is modified or new.
                                                                       New/
Module                 Facility    Description                  LOC   Modified
---------------------  ----------  ---------------------------  ----  --------
error_routines_1e05.c  cpu1e05     Miata Error handler                N
io_support_1e05.c      cpu1e05     Miata I/O support                  N
intr_disp_1e05.c       cpu1e05     Miata Interrupt support            N
init_1e05.bli          cpu1e05     Init sys$cpu_routines_1e05         N
mchk1e05def.sdl        cpu1e05     Machine check Def file             N
io_support_1e.c        sysloa      Common I/O support                 N
pyxis_support.c        sysloa      PYXIS host support                 N
boot_support_pyxis.c   bootdriver  I/O support routines               N
sys$icbm_1e.c          iogen       Autoconfig routines                N
lan_boot.bli           apb         LAN boot mode                      M
pyxis.sdl              lib         PYXIS Chip Def File                N
hwrpbdef.sdl           lib         HWRPB Def file                     M
dcdef.sdl              starlet     Adapter Type Def File              M
NOTE: Code is still preliminary; a count of the LOC would not be
accurate at this time.
6.0 Defect Containment Status
Briefly describe the defect containment activity by answering the
following questions. If no defect containment activity is planned or
implemented, please indicate that as well. To estimate the project's
defects, use the following formula:
projected number of defects = # uncommented source code lines
-------------------------------
100
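For example (hypothetical figures only): a project with roughly 3,000
uncommented source lines would project 3000 / 100 = 30 defects.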
NOTE: Defect containment information will be completed once the code for this
project is finalized.
Defect Identification Goals: % Completed to date
---------------------------- -------------------
%code inspected:
%code unit testing:
%code integration tested
%code(features?) validated
Defect Data
Estimated number of defects for this project:
# Defects found by
------------------
Design reviews/inspections
Code reviews/inspections
Unit Test
Integration Test
# of defects found to date
7.0 Configurations
Briefly describe or list the supported configurations/options for this
project.
- How will this project be supported?
The MX5 will be supported as a client in an OpenVMS Cluster
and as a standalone personal workstation.
There are no current plans to support the MX5 as a server.
Mixed Architecture Cluster Impact:
- Could/will this project impact Mixed Architecture VMScluster
features, performance, reliability, or interoperability?
- Are there any firmware dependencies for this project?
Supported Options list:
Note: this is a preliminary list based on what UNIX currently supports;
it is expected to change before FRS.
Memory
32 MB   MSP01-HB
64 MB   MSP01-HC
128 MB  MSP01-HD
Cache Memory
2 MB cache card  MSC01-AA
Storage Adapters
PBXKP-BA
KZPAA-AA Fast Narrow SCSI
KZPBA-CA Ultra SCSI
TZK20-LK  2.3 GB SCSI tape drive
TLZ09-LK  4.0/8.0 GB SCSI DAT drive
Network Adapters
PBXDA-AA
DE435-AA PCI Ethernet
DE450-CA PCI Ethernet (TW,AUI,TP)
DE500-XA,AA,BA,FA PCI 10/100 Fast Ethernet
DEFPA-AA PCI FDDI
Racore Token Ring
8.0 Risks
List any risks that may hinder this project during this release.
Hardware:
There are currently two known problems with the PYXIS chipset
which will require workarounds in software and console.
The TBIA bug will require a software workaround for a
scatter/gather bug in the PYXIS chipset. There are no plans to
correct this bug in the chipset.
The 8K page boundary bug requires the console to provide a
"Supported Device" table listing the valid devices that can be
configured in front of the bridge on the MX5 (primary slots).
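As a rough illustration only: the real table is defined by the MX5
console firmware, and the names, structure, and IDs below are all
hypothetical. A minimal sketch of such a check might look like this:

    /* Illustrative sketch only -- not the actual console code.
     * Names, IDs, and layout are hypothetical.
     */
    #include <stddef.h>
    #include <stdint.h>

    struct supported_device {
        uint16_t vendor_id;   /* PCI vendor ID */
        uint16_t device_id;   /* PCI device ID */
    };

    /* Hypothetical list of devices allowed in front of the bridge
     * (MX5 primary slots); these entries are examples only.
     */
    static const struct supported_device primary_slot_devices[] = {
        { 0x1011, 0x0019 },   /* example: 21142/21143 Ethernet */
        { 0x1011, 0x000F },   /* example: DEFPA FDDI           */
    };

    /* Return 1 if the device may be configured in a primary slot. */
    static int device_supported(uint16_t vendor, uint16_t device)
    {
        size_t i;
        for (i = 0; i < sizeof(primary_slot_devices) /
                        sizeof(primary_slot_devices[0]); i++) {
            if (primary_slot_devices[i].vendor_id == vendor &&
                primary_slot_devices[i].device_id == device)
                return 1;
        }
        return 0;
    }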
The runtime driver for the Ethernet (Tulip) and PHY chip mix
must support the following combinations:
Tulip Phy Chip
21142 DP83840
21142 DP83840A
21143 DP83840
21143 DP83840A
Since the Tulip is on the motherboard, QTV will require two platforms:
one with the 21142 Tulip and one with the 21143 Tulip.
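To illustrate what covering the four combinations could mean for the
driver, here is a minimal sketch; the enum values and handler names are
hypothetical and this is not the shipped driver code, only one possible
way to dispatch on the Tulip/PHY pairing:

    /* Illustrative sketch only -- names are hypothetical. */
    typedef enum { TULIP_21142, TULIP_21143 } tulip_type;
    typedef enum { PHY_DP83840, PHY_DP83840A } phy_type;

    typedef void (*media_init_fn)(void);

    static void init_21142_dp83840(void)  { /* ... */ }
    static void init_21142_dp83840a(void) { /* ... */ }
    static void init_21143_dp83840(void)  { /* ... */ }
    static void init_21143_dp83840a(void) { /* ... */ }

    /* One handler per supported Tulip/PHY combination. */
    static media_init_fn media_init[2][2] = {
        /*            DP83840               DP83840A             */
        /* 21142 */ { init_21142_dp83840,   init_21142_dp83840a },
        /* 21143 */ { init_21143_dp83840,   init_21143_dp83840a },
    };

    /* Select and run the initialization for the detected chips. */
    void init_media(tulip_type tulip, phy_type phy)
    {
        media_init[tulip][phy]();
    }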
Software:
The code required for this project is still preliminary. The
Hailstorm release cycle is short and it will be difficult for the
MX5 project to maintain this schedule.
Resources:
This project is at risk during Hailstorm because of the limited
number of available QTV engineers, the limited availability of Ultra
SCSI hardware, and the number of Hailstorm projects.
9.0 Dependencies
List any dependencies that may have a negative impact on this project
during this release.
Hardware:
QTV will need Ultra SCSI adapters and disks for the MX5 validation.
Software:
Resources: