T.R | Title | User | Personal Name | Date | Lines |
---|---|---|---|---|---|
1849.1 | Some quick answers. | CADSYS::COOPER | Topher Cooper | Thu Mar 03 1994 13:49 | 11 |
| Due to the storm, I'm coming in from home over a noisy line, so I'll
keep this brief, and provide more info when I come into work.
Exact tests can be developed if none of your expectations are much
above 5. Some modern thought, based on simulations, says that you can
tolerate some cells with 1 <= np < 5, but the advice is "fuzzy" as to
just when you can manage this. A technique called a randomization test
may solve your problem, though it makes interpretation of just what
your "p-value" means somewhat trickier.
Topher
|
1849.2 | Need more info. | CADSYS::COOPER | Topher Cooper | Mon Mar 07 1994 15:39 | 13 |
| To know what to advise, I really need to know more about your problem.
Let me take a wild guess and you tell me where I am wrong.
You have some product which you are manufacturing, and you have, as
part of the specification, that the rate of defects (i.e., the
proportion of the products which are defective) should be no more than
"p". You test "n" of the products of which "d" are defective, and you
would like to be able to make a statement to the effect that you are
99% sure that the defect rate is no more than p.
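If that really is the shape of the problem, the exact calculation is
straightforward; here is a sketch in Python/SciPy, where n and d are
just placeholders, not your numbers:

    # Exact (Clopper-Pearson) one-sided upper confidence bound on a
    # binomial defect rate.  n and d are hypothetical placeholders.
    from scipy.stats import beta

    n = 10000       # units tested
    d = 3           # defects observed
    conf = 0.99     # one-sided confidence level

    # The exact upper bound is the conf-quantile of Beta(d+1, n-d):
    # the rate p for which P(d or fewer defects in n) = 1 - conf.
    upper = beta.ppf(conf, d + 1, n - d)
    print("99% upper bound on the defect rate:", upper)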
I'm sure that that is wrong, but tell me why it is wrong.
Topher
|
1849.3 | | CARROL::COX | Ed Cox: II Cor 10:3-5 | Thu Apr 14 1994 17:52 | 19 |
| The problem is related to process development in an environment in which the
acceptable defect rate targets keep getting smaller. This makes the sample
sizes required to establish a given defect rate very large. (Some managerial
types seem to have difficulty seeing why you cannot prove a 10ppm defect
rate with a sample size of 1000 opportunities for defects!) In a recent
experiment (that someone else did) they supported a rather significant
process decision on a comparison of 1 defect out of 14,256 versus 4 defects
out of 14,256. I had some other data that included more combinations of
test variables, which gave a more complete picture of the situation and which
had observed defect counts from the mid single digits up to several hundred,
based on different sample sizes ranging from 1/2 down to about 1/6 of the
14,256 sample size of the other data. Pure intuition tells me which
data set is going to be more reliable, but I would like to be able to attach
real confidence limits to the conclusions. The 1 and 4 defects from the
other experiment have left me scratching my head for a legitimate method
of doing this.
Does that help explain what I am looking for?
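(For concreteness, the kind of number I'd like to attach to the 1-vs-4
comparison is something like Fisher's exact test on the 2x2 table,
sketched here in Python/SciPy purely as illustration:)

    # Fisher's exact test on the 1-vs-4 comparison described above.
    from scipy.stats import fisher_exact

    n = 14256
    table = [[1, n - 1],    # condition A: defects, good units
             [4, n - 4]]    # condition B: defects, good units

    odds, p = fisher_exact(table, alternative="two-sided")
    # A large p-value here would say that 1 vs 4 defects is quite
    # consistent with chance alone.
    print("two-sided exact p-value:", p)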
- Ed
|
1849.4 | | RTL::GILBERT | | Mon Apr 18 1994 16:45 | 7 |
| Instead of measuring a binary value (i.e., defect vs no defect), are
there other, more continuous metrics available?
Supposing that such a metric were used, a value outside a prescribed range
would constitute a 'defect'. You can fit an error curve to the measured
values of a moderately-sized sample, and from that estimate the probability
of a 'defect'.
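A sketch of that idea in Python/SciPy, where the measurements and spec
limits are made up for illustration:

    # Fit a normal "error curve" to a moderately sized sample of a
    # continuous metric, then estimate the probability of a value
    # falling outside the spec window.  All numbers are hypothetical.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    values = rng.normal(loc=10.0, scale=0.5, size=200)   # stand-in data
    lo, hi = 8.5, 11.5                                   # spec limits

    mu, sigma = norm.fit(values)

    # Estimated defect probability = tail mass outside the spec window.
    p_defect = norm.cdf(lo, mu, sigma) + norm.sf(hi, mu, sigma)
    print("estimated probability of a 'defect':", p_defect)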
|
1849.5 | | CARROL::COX | Ed Cox: II Cor 10:3-5 | Wed Apr 20 1994 13:41 | 8 |
| Believe me, looking for a continuous variable is always my first preference,
but so far no one I know of has found a substitute for counting electrical
opens and shorts on a module. I have done some modeling of the major factors
which affect the opens and shorts on fine-pitch SMT devices, and the model
correlates well with actual data. This helps tremendously, but there are some
factors which I just cannot figure out how to model, and the only way to
measure their effects is to count defects (yuk).
- Ed
|