Title: Soapbox. Just Soapbox.
Notice: No more new notes
Moderator: WAHOO::LEVESQUE ONS

Created: Thu Nov 17 1994
Last Modified: Fri Jun 06 1997
Last Successful Update: Fri Jun 06 1997
Number of topics: 862
Total number of notes: 339684
549.0. "Lie, why do they lie...?" by 43GMC::KEITH (Dr. Deuce) Fri Sep 22 1995 13:17
Copied from the CATO Institute WWW site.
The one I like is the "1 in 9" lie.
Counting the Errors of Modern Journalism
Brian Doherty
Despite all the rhetoric from Thomas Jefferson down to the latest
self-important musings of journalists about journalism's being the
first, best hope for a healthy polity, your newspaper is lying to
you. While assuring you that it provides precise information about
public policy issues, in many cases it is only pushing speculation
and rumor in the guise of fact. Most of the time you have no
independent way to confirm its claims, so how can you tell when
a newspaper is lying?
Here's a hint: watch out for the numbers. Newspapers are filled
with contextless reports of the latest things government officials
have said or decided. But newspapers do like to throw in a
number now and then to add verisimilitude to the tales they tell.
Knowledge of the media's inability to get it straight, especially
when dealing with numbers and statistics, has become widespread
enough to inspire a widely reviewed book--Tainted Truth: The
Manipulation of Fact in America by Cynthia Crossen. It has also
given rise to a new magazine, the quarterly Forbes MediaCritic,
the latest addition to the Forbes family of publications.
While ideologues of all persuasions like to blame media
inaccuracies on political biases, the causes of journalism's troubles
are, unfortunately, inherent in the way daily newspapers, those first
drafts of history, are written: hurriedly and by generalists who,
even if they are unfailingly scrupulous (which can't always be
assumed), are often ignorant of the topics on which they write
and depend blindly on what others tell them--and what others
tell them is very often biased. Unfortunately, those first drafts of
history are all most laypersons read.
The Problems with Numbers
Our intellectual culture is drunk on numbers, addicted to them: we
need them in every situation, we feel utterly dependent on them.
As sociologist Richard Gelles aptly put it in a July 25, 1994,
Newsweek story on the media's problems with numbers, "Reporters
don't ask, `How do you know it?' They're on deadline. They just
want the figures so they can go back to their word processors."
The culture of the poll dominates: the foolish notion that not
only every fact but every thought, whim, and emotion of the
populace can be stated in scientifically valid and valuable numbers.
The lust for numbers can, at its best, lead people to do hard
research and dig up interesting and useful information. More often,
however, it leads to dignifying guesses with misleadingly precise
numbers. For example, it wasn't enough to know that people were
dying in Somalia; as Michael Maren reports in the Fall 1994
Forbes MediaCritic, reporters felt it necessary to latch onto some
relief workers' guesses and repeat them over and over, only
occasionally letting slip honest acknowledgments that no one really
knew how many people were actually dying and that no one was
taking the trouble to attempt accurate counts.
The obsession with numbers leads to particularly egregious errors in
reports on economic figures and aggregates and the federal budget.
Those errors include calling spending that doesn't equal what had
been planned a "cut" in spending, even if more is being spent
than the year before; relying on static economic analysis, especially
when calculating the effects of tax increases and their concomitant
revenues (because they assume that people do not change their
behavior when their taxes are raised, members of Congress and
reporters make grievously wrong predictions about expected
revenues); and relying uncritically on numerical tools such as the
Consumer Price Index.
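The static-scoring error lends itself to a toy calculation. Everything below is invented for illustration (the tax base, rates, and behavioral response are not drawn from any real forecast); the point is only the shape of the mistake:

```python
# Toy comparison of static vs. behavioral revenue scoring.
# All figures are hypothetical round numbers.

base = 1000.0       # taxable income base, in billions
old_rate = 0.30
new_rate = 0.40
elasticity = 0.5    # assumed behavioral response: base shrinkage per unit rate hike

# Static scoring: assume the tax base does not respond to the rate change.
static_revenue = base * new_rate

# Behavioral estimate: let the base shrink as the rate rises.
shrunk_base = base * (1 - elasticity * (new_rate - old_rate))
behavioral_revenue = shrunk_base * new_rate

print(static_revenue)       # 400.0 (billions)
print(behavioral_revenue)   # noticeably less -- the static figure overshoots
```

A Congress that budgets against the static figure will "unexpectedly" come up short by the difference.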
Especially during the 1992 election, "quintile" analysis of the
effects of the Reagan-Bush years on income and tax-burden
equality abounded, with hardly any explanation of the
complications of such analyses. Those complications include the
fact that when people in lower income quintiles become richer,
they often move into a higher quintile rather than buoy the
average of the lower one. Yet income added to the highest
quintile can do nothing but increase that quintile's average income.
That creates a misleading picture of the rich getting much richer
while the poor stagnate.
Quintile analysis is also static, but income mobility is common in
America, so it's not always the same people who languish in
lower quintiles or whoop it up at the top. And quintile analysis
often relies on households, not individuals-- the top quintile can
have more than 20 percent of Americans, the bottom less than
20 percent. But all of those complications are overlooked in the
media's craving for numbers to toss around.
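The quintile mechanics can be seen in a small simulation. The ten incomes below are invented; what matters is that when a bottom-quintile earner prospers enough to change quintiles, the gain is credited to the top quintile's average, not the bottom's:

```python
# Hypothetical incomes, two people per quintile.
def quintile_means(incomes):
    xs = sorted(incomes)
    k = len(xs) // 5
    return [sum(xs[i * k:(i + 1) * k]) / k for i in range(5)]

people = [10, 12, 20, 22, 30, 32, 40, 42, 90, 100]
before = quintile_means(people)   # bottom mean 11.0, top mean 95.0

# One bottom-quintile earner's income jumps from 12 to 95 -- a huge
# personal gain that moves them into the top quintile.
people[1] = 95
after = quintile_means(people)

print(before[4], after[4])  # 95.0 97.5 -- the gain raises the top mean
print(before[0], after[0])  # 11.0 15.0 -- the bottom mean moves only by
                            # reshuffling who counts as "bottom"
```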
The media even ignore the fact that "counts" of macroeconomic
variables can change retroactively--1993 data on 1992 quantities
can be different from 1994 data. As an example, in 1993 the
Bureau of Labor Statistics listed Arkansas as the state with the
highest percentage rise (3 percent) in nonfarm employment from
July 1991 to July 1992. Candidate Clinton touted that percentage
in campaign ads. But by March 1994 the facts had changed.
Although Arkansas was then thought to have had a 3.7 percent
rise in employment during the 1991-92 period, it ranked behind
Montana's 4.22 percent and Idaho's 4.21.
Macroeconomic aggregates, such as gross national product, on
which the media often rely for numerical ballast, are often riddled
with conceptual problems, such as that of counting as additions to
our national product any cash transactions, including the classic
example of neighbors' paying each other to mow each other's
lawns, and ignoring any noncash transaction that adds to economic
well-being. Other economic numbers bandied about by the media,
such as unemployment rates, job growth, and the "cost" of various
tax increases or cuts, are often derived from random samplings,
self-reported information, and guesswork. Economics is a study of
human action, not of numbers; the press's overdependence on
frequently dubious aggregates helps disguise the problem and
muddles readers' understanding of what economics--and
prosperity--is really about.
Where Do the Numbers Come From?
There are many ways to mislead while allegedly presenting
accurate counts or measures to the public. The most sinister is to
simply make up numbers or make completely bald-faced guesses.
That happens more often than you might think. The demand for
information has far outstripped the supply. Coming up with reliable
numbers to support all the things that journalists want to say and
the public wants to know is often prohibitively expensive, in
money or effort, or both. But the misuse and misunderstanding of
numbers lead to erroneous reporting.
The total number of breast cancer victims has become a matter
of much concern since the National Cancer Institute and the
American Cancer Society frightened the world with the declaration
that American women face a one-in-eight chance of contracting
breast cancer. That scary figure, however, applies only to women
who have already managed to live to age 95; one out of eight of
them will most likely contract breast cancer. According to the
NCI's own figures, a 25-year-old woman runs only a 1-in-19,608
risk.
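The two figures are compatible because lifetime risk compounds small annual risks over seven decades. The annual hazard rates in this sketch are invented for illustration only (they are not NCI data), but they show how a negligible risk at 25 can coexist with a roughly one-in-eight risk by 95:

```python
# Hypothetical annual hazard of contracting the disease, by decade of
# age -- invented numbers chosen only to illustrate compounding.
annual_risk = {20: 1e-5, 30: 1e-4, 40: 1e-3, 50: 2e-3,
               60: 3e-3, 70: 3e-3, 80: 3e-3, 90: 3e-3}

def cumulative_risk(up_to_age):
    """Probability of contracting the disease between age 20 and up_to_age."""
    p_free = 1.0  # probability of remaining disease-free so far
    for age in range(20, up_to_age):
        p_free *= 1 - annual_risk[(age // 10) * 10]
    return 1 - p_free

print(cumulative_risk(25))  # tiny -- on the order of 1 in 20,000
print(cumulative_risk(95))  # large -- roughly 1 in 8
```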
Those very precise figures are themselves based on a phony notion:
that we know how many people have breast or any other cancer.
As two journalists concerned about cancer admitted in the Nation
(September 26, 1994), "Not only is there no central national
agency to report cancer cases to . . . but there is no uniform
way that cases are reported, no one specialist responsible for
reporting the case." So any discussion of cancer rates in the
United States is based on guesswork, and one can only hope that
the guesswork is based on some attempt to be true to the facts as
they are known.
In the case of other health threats, such as AIDS, we know that
isn't the case. In The Myth of Heterosexual AIDS, journalist
Michael Fumento documented the discrepancy between the rhetoric
about the plaguelike threat of AIDS to the nongay and
non-drug-using populace and official statistics on the actual
prevalence of the syndrome, which indicated that no more than
0.02 percent of people who tested HIV positive were not in those
risk groups. (And even such heterosexual AIDS cases as are
recorded run into a self-reporting problem: many people may not
want to admit to anyone that they have had gay sex or used
drugs.) As Fumento explained, projections of the future growth of
the AIDS epidemic (even ones that were not hysterical pure
guesses tossed out by interest groups) were often based on straight
extrapolations of earlier doubling times for the epidemic (which
inevitably--for any disease--lead to the absurd result of everyone
on the planet and then some dying of the disease) or cobbled
together from guess piled on guess. Even when the Centers for
Disease Control would lower earlier estimates on the basis of new
information, or make clearly unofficial speculations about higher
numbers, journalists would continue to report the higher and more
alarming numbers.
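The absurdity of straight doubling-time extrapolation is easy to demonstrate. The starting caseload and world population below are arbitrary round numbers:

```python
# Naive extrapolation: cases doubled yearly early in an epidemic, so
# project that doubling forward indefinitely. Starting figures are
# arbitrary; any disease fed through this logic "infects" everyone.

cases = 100_000            # hypothetical current caseload
world_population = 5.6e9   # rough mid-1990s world population

years = 0
while cases < world_population:
    cases *= 2
    years += 1

print(years)  # 16 -- the whole planet is "infected" within two decades
```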
In the case of figures about AIDS in Africa, even the most basic
numbers are not to be trusted. Journalist Celia Farber documented
in Spin magazine how African health officials inflate the number
of deaths from the complications of AIDS, both because AIDS
cases attract foreign aid money, whereas traditional African disease
and death do not, and because there is no accurate method of
counting.
One relief worker told Farber that counts of children orphaned by
AIDS in an African village "were virtually meaningless, I made
them up myself . . . then, to my amazement, they were published
as official figures in the WHO [World Health Organization] . . .
book on African AIDS. . . . The figure has more than doubled,
based on I don't know what evidence, since these people have
never been here. . . . If people die of malaria, it is called AIDS,
if they die of herpes it is called AIDS. I've even seen people die
in accidents and it's been attributed to AIDS. The AIDS figures
out of Africa are pure lies."
In his autobiography, novelist Anthony Burgess gives further insight
into the generation of "official" figures. He tells of creating
completely fraudulent records of the classes he supposedly taught
fellow soldiers while stationed in Gibraltar during World War II.
His bogus "statistics were sent to the War Office. These,
presumably, got into official records which nobody read." For the
sake of accuracy, we can only hope so. But if a journalist got
hold of those numbers, he'd be apt to repeat them.
Similarly farcical figures are taken completely seriously by
journalists. For example, activist Mitch Snyder's assertion that the
United States suffered the presence of 3 million homeless people
became common wisdom for the bulk of the 1980s. Snyder's figure
was made up; he simply assumed that 1 percent of Americans
were homeless to get an initial number of 2.2 million in 1980,
then arbitrarily decided that since he knew the problem was
getting worse, the number would hit 3 million by 1983. He
claimed to be working from extrapolations based on reports from
fellow homeless activists around the country, but there was no
counting, no surveying, no extrapolation behind his assertion. And
yet most major American newspapers reported the number; it
became part of our received cultural wisdom.
In her recent book, Who Stole Feminism? How Women Have
Betrayed Women, Christina Hoff Sommers actually tried to track
to their sources numbers spread by feminist activists. One of the
much-reported stories she debunked was that 150,000 women a
year die of anorexia, which an outraged Gloria Steinem reported
in her popular book Revolution from Within. Steinem cited
another popular feminist tome by Naomi Wolf as her source; Wolf
cited a book about anorexia written by a women's studies
academic, which cited the American Anorexia and Bulimia Center.
Sommers actually checked with that group and discovered that all
they'd said was that many women are anorexic. Oops.
Another feminist canard is that domestic violence is responsible
for more birth defects than all other causes combined. Time and
many newspapers had ascribed that finding to a March of Dimes
report. Sommers tracked the assertion back through three sources,
beginning with the Time reporter, and discovered that it was the
result of a misunderstanding of something that had been said in
the introduction of a speaker at a 1989 conference--no such
March of Dimes report existed. Still, the errors of Time and the
Boston Globe and the Dallas Morning News are in more clip files
and data banks than is Sommers's debunking. The march of that
particular error will doubtless continue.
A third famous feminist factoid is that Super Bowl Sunday sees a
40 percent rise in cases of wife beating. That claim, said to be
supported by a university study, was made in an activist press
conference. (The story was also spread by a group ironically
named Fairness and Accuracy in Reporting.) Similar claims began
coming from other sources. Ken Ringle of the Washington Post
took the time to double-check them and found that the university
study's authors denied that their study said any such thing and
that the other sources that claimed to have independent
confirmation of the "fact" refused to disclose their data. When a
concerned activist makes up a number, few bother to be skeptical,
and credulous reporting tends to drown out the few debunkers.
Unfortunately, erroneous numbers in journalism are not always the
result of sincere attempts to quantify the relevant data. If you
can't imagine someone's making the effort to really count
something, and if you can imagine any reason for the source's
having an ulterior motive, best take the number with a large
grain of salt. This is not a call for ad hominem attacks; it is
merely a warning about when to look especially askance at
numbers. Even when one is following what seems on its face to
be defensible standards of sample and extrapolation, ludicrous
results can ensue. For example, Robert Rector of the Heritage
Foundation wrote that 22,000 Americans below the poverty line
had hot tubs, and many conservative publications uncritically
trumpeted the figure. But Rector's figure was "extrapolated" from
one case in a survey sample. It's disingenuous to claim that
because one poor family in a sample of 10,000 has a hot tub,
22,000 poor families have hot tubs.
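A worked interval shows just how disingenuous single-case extrapolation is. Taking the article's arithmetic at face value (one hot-tub owner in a sample of 10,000, scaled to a point estimate of 22,000, which implies a population of about 220 million), a 95 percent confidence interval for a rate estimated from a single rare event is enormous:

```python
import math

n = 10_000                 # sample size
population = 220_000_000   # implied by scaling 1/10,000 up to 22,000

# 95% interval for a rare-event rate with exactly one observed case,
# using the Poisson approximation to the binomial.
lower_rate = -math.log(0.975) / n   # smallest rate with P(>=1 case) = 2.5%

# Largest rate with P(<=1 case) = 2.5%: solve exp(-lam)*(1+lam) = 0.025
# for lam by bisection (the function is decreasing in lam).
lo, hi = 0.0, 20.0
for _ in range(60):
    mid = (lo + hi) / 2
    if math.exp(-mid) * (1 + mid) > 0.025:
        lo = mid
    else:
        hi = mid
upper_rate = lo / n

print(round(lower_rate * population))  # a few hundred
print(round(upper_rate * population))  # over a hundred thousand
```

The single observation is consistent with anywhere from roughly 560 to roughly 120,000 hot-tub owners; quoting 22,000 as a finding conveys precision the data cannot support.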
Another example of numbers being attached to the uncounted, and
probably uncountable, is the debate over species extinctions.
Economist Julian Simon has explained that the conventionally
accepted figures on the number of species disappearing yearly are
based on no counts and no extrapolations from past knowledge;
they are based on guesses about the current rate of extinction,
and that rate is arbitrarily increased to produce the frightening
number of 40,000 per year. Norman Myers, one of the leading
promulgators of that figure, admits that "we have no way of
knowing the actual current rate of extinction in tropical forest,
nor can we even make an accurate guess." Yet he is willing to
make guesses about future rates.
Another much-touted scare figure, on workplace violence, was
recently debunked in the pages of the Wall Street Journal.
Reporter Erik Larson found that reports and statistics on the
prevalence of workplace violence were shoddy or misleading in
various respects. One report, which concluded that workers have a
one-in-four chance of being attacked or threatened at work, was
based on the replies of only 600 workers, who represented only 29
percent of the people whom the survey had tried to reach, which
made the groups largely self-selected within the original sample.
Statisticians frown, with reason, on self-selected samples, which are
very likely to be biased.
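A little algebra shows how far self-selection can push a headline number. The response probabilities below are purely hypothetical; the point is that a plausible tilt in who answers can turn a 10 percent truth into a one-in-four finding:

```python
# Hypothetical: 10% of workers have truly been attacked or threatened,
# but affected workers are three times as likely to return the survey.

true_rate = 0.10
respond_if_affected = 0.60
respond_if_not = 0.20    # overall response works out to 24%,
                         # close to the survey's reported 29%

affected_responders = true_rate * respond_if_affected       # share of all workers
unaffected_responders = (1 - true_rate) * respond_if_not
observed_rate = affected_responders / (affected_responders + unaffected_responders)

print(round(observed_rate, 2))  # 0.25 -- a "one in four" result from a 10% truth
```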
Larson also found that a Bureau of Labor Statistics report, which
said that homicide is the second most frequent cause of death in
the workplace, far from referring to coworkers or disgruntled ex-
coworkers blasting away at their comrades, showed that three-
quarters of the deaths occurred during robberies, and that many
others involved police or security guards, whose jobs obviously are
dangerous. But the media, and an industry of self-serving
workplace violence consultants, inspired by half-understood studies
and vivid memories of crazed postal workers, created an aura of
offices as the Wild, Wild West that caught the imagination of
many. In this case, data were not so much bogus or warped as
wildly misinterpreted.
Checking the Checkers
It might seem paradoxical to condemn journalists for incessantly
parroting errors when it is journalists themselves who occasionally
expose errors. After all, who else would? The problem is, they
don't do it nearly enough, and no one else ever does. Even
though Larson's story appeared in the October 13, 1994, Wall
Street Journal, it's a given that many other writers and TV
reporters will have missed it and sometime in the future will
again parrot false suppositions about the danger of mortal violence
in the workplace.
The culture of journalism is based on the principle of the citation
or quote: if someone else said it, or wrote it, it's okay to repeat
it. Almost any editor or writer would scoff at that brash
formulation. After all, journalists pride themselves on their
withering skepticism, their credo of "if your mother says she loves
you, check it out." But the reader would be terribly naive to
believe that journalists, under the crush of daily deadlines, under
the pressure of maintaining long-term relationships with sources,
and occasionally under the spell of ideology, always meet that
standard. In the future, you can count on it, someone will go
back to some story about workplace violence, or the homeless, or
wife beating, written before the debunking was done, and come to
an incorrect conclusion. Dogged checking of sources is rare indeed.
I recently was intrigued by a figure in our self-styled paper of
record, the New York Times. In an October 25 article about the
miserable state of Iraq after years of international embargo, the
author, Youssef M. Ibrahim, stated that, according to UNICEF, "in
the last year there has been a 9 percent rise in malnutrition
among Iraqi infants."
That figure struck me as somewhat absurd, a foolhardy attempt to
assert precise knowledge in a situation where obtaining it would be
extremely difficult, if not impossible. I tried to track the figure
back to its source through the UNICEF bureaucracy. (There is a
practical reason why many journalists end up accepting things at
face value: the tracking of figures, especially through international
bureaucracies, can be harrying and time-consuming indeed.) I was
rewarded; although my initial supposition--that any alleged count
was probably of dubious value--is probably true, I discovered that
the "paper of record" couldn't even read the UNICEF report right.
What UNICEF had actually said, with even more absurd precision,
was that the total rate of--not the increase in--malnutrition
among infants under one year old was 9.2 percent--a figure that
seems shockingly low for an essentially Third World country
suffering under an international embargo. It turned out that the
survey was not done by UNICEF, as the Times had reported, but
by UNICEF in collaboration with the government of Iraq--as
almost anything done in Iraq probably must be. Precise figures
from lands with tyrannical governments should never be trusted.
And it should be remembered that in any hierarchy, even if the
person at the top doesn't have the literal power of life and death
over those on the bottom, there's a general tendency to tell those
higher up only what they want to hear.
Given the preceding examples, you'd think that constant checking
and rechecking of the sources of claims would be the rule in
journalism. Unfortunately, it is not. Nor, apparently, is it in
science. In Betrayers of the Truth, William Broad and Nicholas
Wade reported on fraud and deceit--and acceptance of the
same--in the scientific establishment. They found that, like
journalism's conceit about checking on whether your mother loves
you, science's conceit of being built on an elaborate system of
cross checking and confirming the results of others is mostly a
myth. Hardly anyone ever checks what other people claim to have
found or done.
All too often readers assume that everyone is doing his work
scrupulously and well, but unfortunately, that's not always the case,
as Broad and Wade, Sommers, Fumento, Larson, Farber, and
others have shown. Readers should be much more skeptical than
they are.
Almost every time I read a newspaper story about a topic of
which I have personal knowledge, or about an event that I've
witnessed, I find errors--sometimes in minor details, sometimes in
key ones. Almost everyone I've asked about this says the same.
But our knowledge of journalistic error in a few specific cases
doesn't translate into a strong general skepticism.
Total skepticism is probably impossible. But greater awareness of
the sorts of errors journalists tend to make can only help. Watch
out for macroeconomic aggregates; try to figure out where huge
counts are coming from and how they are being made; try to
check the methodology and phrasing of polls; check on the
self-interest of the groups that promulgate scary numbers; and
remember that scary stories make great copy and should be
mistrusted all the more for that reason.
If journalism were merely entertainment, this wouldn't be so
important. But despite how bad they are at it, journalists' conceit
about their key role in public policy is, unfortunately, true. Bad
information can only lead to bad policy. The first step in an
intelligent approach to public policy is to get the facts as straight
as we can, even when we don't have precise numbers.
T.R | Title | User | Personal Name | Date | Lines |
549.1 | Goes better here than on famous quotes. | DPDMAI::GUINEO::MOORE | HEY! All you mimes be quiet! | Fri Sep 22 1995 14:01 | 4 |
|
"There are three types of lies: lies, damned lies, and statistics."
--- Benjamin Disraeli
|
549.2 | Statistics = modern numerology... | GAAS::BRAUCHER | Frustrated Incorporated | Fri Sep 22 1995 14:14 | 7 |
|
And we in the Box are not immune to this disease, nor is this a
failing of only the left or right. I think it IS, howsomever,
a very American failing, the fascination with numbers. Watch
for curveballs all over TV next election.
bb
|
549.3 | Writing on the right. | MIMS::WILBUR_D | | Fri Sep 22 1995 14:38 | 18 |
|
What does CATO mean in CATO Institute?

I find the information hard to swallow, since the entire article is
grossly slanted to the right. Fits right into the notes file.

Let's see: they lie if it's about Gay, Feminist, Black, or Foreign
aid, or about Reagan-omics.
|
549.4 | | MKOTS3::JMARTIN | I press on toward the goal | Fri Sep 22 1995 14:46 | 1 |
| In this forum, the term was founded by grampy Binder!!!!
|
549.6 | | BOXORN::HAYS | Some things are worth dying for | Fri Sep 22 1995 15:29 | 28 |
| RE: 549.0 by 43GMC::KEITH "Dr. Deuce"
> Lie, why do they lie...?
> Another example of numbers being attached to the uncounted, and
> probably uncountable, is the debate over species extinctions.
> Economist Julian Simon has explained that the conventionally
> accepted figures on the number of species disappearing yearly are
> based on no counts and no extrapolations from past knowledge;
> they are based on guesses about the current rate of extinction,
> and that rate is arbitrarily increased to produce the frightening
> number of 40,000 per year. Norman Myers, one of the leading
> promulgators of that figure, admits that "we have no way of
> knowing the actual current rate of extinction in tropical forest,
> nor can we even make an accurate guess." Yet he is willing to
> make guesses about future rates.
This is a wonderful example of how to lie.
First, quoting an economist as an expert in the field of species
extinction? Why not a plumber? Or a hairdresser? What a hoot.
Second, why did the CATO Institute fail to put even a short discussion as
to _why_ it's not possible to accurately estimate the current rate of
species extinction?
Phil
|
549.7 | | 11874::DKILLORAN | Danimal | Wed Sep 27 1995 14:42 | 13 |
|
> First, quoting an economist as an expert in the field of species
> extinction? Why not a plumber? Or a hairdresser? What a hoot.
maybe 'cuz he node sumptin' bout numbahs?
> Second, why did the CATO Institute fail to put even a short discussion as
> to _why_ it's not possible to accurately estimate the current rate of
> species extinction?
aaahhhh, maybe because the topic under discussion had nothing to do
with species extinction?
|
549.8 | rant | BRUMMY::WILLIAMSM | Born to grep | Wed Sep 27 1995 15:42 | 24 |
| A story appeared in a local paper a few years ago claiming that an
emergency four-hour operation had been carried out by my father on the
victim of a cruel and outrageous assault. This was a ten-minute job
and entailed two stitches and a wipe of something that stings.
When the paper was asked "how come," they said, we only print what we
are told. They even carried a picture of the young man in bed proudly
pointing to his stitches. Told by whom, they wouldn't say.
How about that great piece of journalese, "Mr X is considering legal
action"? What a dumb thing to say, almost as bad as "Mr X, not his real
name" - yeuk.
This numbers thing happens a whole bunch in the UK too; official figures
are continually rewritten, balance of trade figures being particularly
bad (hard to interpret, hard to collect - take your pick).
The extrapolation problems described in .0 are all over the place.
Right-wing papers (we don't have much of a left-wing press) are
particularly bad at coming up with stories about "beggars" making 500
pounds a day on tube stations and going back to their suburban semi
first class, or some such anti-poor claptrap.
Anyway, time to stop ranting, Michael.
|
549.9 | 1990- 1 million with aids. 1994?- 1 million | POLAR::WILSONC | A dog is a womans best man | Sun Oct 08 1995 05:58 | 7 |
| The first time I noticed something fishy about numbers was in reports
about the number of AIDS cases in Canada. An old Maclean's magazine that
I was perusing before it was chucked had an article in it about the
current state of AIDS in the nation. Gee, I thought, I just read
something from the current media. I wondered how they would compare. To
my amazement, the figures were unchanged after 4 years!!
|
549.10 | | BUSY::SLABOUNTY | A swift kick in the butt - $1 | Mon Oct 09 1995 11:54 | 3 |
|
Maybe a lot of them died or moved down here to the US.
|
549.11 | | MIMS::WILBUR_D | | Fri Oct 13 1995 16:16 | 12 |
|
.9
Maybe 1 million died and 1 million more caught the disease.
That the numbers are the same doesn't strike me as proof that they
are false. Though you certainly might have more information to show
that it's true.
|
549.12 | Can some nice person 80 col this? | 43GMC::KEITH | Dr. Deuce | Fri Dec 29 1995 12:18 | 87 |
| Testimony before the House Energy and Power Subcommittee, 18 Jan 1994 (partial transcript)
Dr. David Egilman - South Shore Health Center.
My name is David Egilman. I'm a practicing physician in Braintree, Massachusetts. I
practice internal medicine and occupational medicine. I'm also on the Faculty of Brown
University and a member of the Center for Community Responsive Care in Boston. [In
closing comments, he mentions that he was associated with NIOSH for many years.]
[...]
The worst experiments that were conducted, in my opinion, were those that resulted in
the deaths of their participants. Those were conducted at the University of Cincinnati
between 1961 and 1972. They defined the purpose of their experiments in their first
report to the funding agency, the Defense Department. "These studies are designed to
obtain new information about the metabolic effects of total body and partial body
irradiation, so as to have a better understanding of the acute and sub-acute effects of
irradiation in the human."
In another report, they explained that the humans whose effects they wanted
to know about were military personnel who might be irradiated during a war.
They went on to
describe the doses they were going to give - doses of 100 to 300 rads, eventually doses
up to 600 rads were anticipated. 600 rads is lethal to almost everyone who would have
received it under the conditions of this experiment. It would have killed everyone who
would have received it. The doses that they _did_ give to some of the individuals were
enough, by the researchers' own anticipation, to kill half of the people:
the dose was the LD 50.
Now the selection of subjects is very important. They were uneducated, average
education 4th grade. Low intelligence. They had brain dysfunction, because of their
underlying disease. They could not follow simple instructions. They were specifically
selected because they had tumors, cancers, that were resistant to therapy. They picked
patients whose cancers were not going to be treatable with the radiation. For, you see,
in the 30's and 40's this had been tried for cancer therapy. And they knew by 1960
which cancers would respond and which would not. They wanted patients with cancers
that would not respond, because then it wouldn't confuse the purpose of the experiment,
which was to find out what effects the radiation would have on soldiers. If it actually
treated the cancer, you would have some confusion between the cell necrosis, the cell
death from the treatment and the effects of the radiation.
62 of 88 patients were black. If this was a cancer study, it is the first one that
excluded affluent white people at its inception.
Now the methods: Because they were studying the effects of radiation to predict them
on soldiers, the effects were known, nausea and vomiting. Treatment for nausea and
vomiting was specifically denied these patients. This is just inhumane. Some of these
patients had stage 4 severe nausea and vomiting, that went on for days and longer.
And treatment for vomiting was available. Despite the fact that they specifically
selected those whose cancers would not be treated, the patient was told he was to
receive treatment to help his disease. Other effective treatments that were available for
some of the cancers at the time, not cures, but palliative treatments were available, for
the gastro-intestinal cancers, 5FU, which is still used today for that same tumor, were
denied the patients with that type of cancer.
And what were the results? Radiation sickness and death. The researchers
themselves, in 1973, said that 8 of the victims died as a result of the
radiation. I have reviewed the data: the individual patient records and the
summaries provided by the researchers (I've only reviewed a few of the actual
charts, as did the junior faculty committee at the University of Cincinnati,
which should get credit for having first discovered this and stopped it in
1971), and, in our opinion, more than 20 of the patients died as a result of
the experiments.
Now let me turn to plutonium injections, and make just a few comments, since you've
heard a lot already. First, plutonium is not just a substance [which] causes cancer, it is
an acute toxin. It can make you suffer, just from having it injected. Acutely, right
away. The doses injected were potentially lethal, and I've reviewed the summary of the
diagnoses. In my opinion, there is no way that physicians at that time could have
thought that those patients were terminal. 12 of 18, in my opinion, were clearly not
terminal. Maybe 3 of those are questionable; 9 of 18 were definitely not terminal. And
they were not terminal by what physicians knew was terminal then - an injured knee is
not a terminal disease.
Unfortunately I must say that the research was meaningless from a scientific standpoint.
This [referring to slide] is the ICRP radiation protection standard developed in 1972.
They knew about the experiments and referenced them. And they said that because the
experiments were so poorly done and full of errors, the data from these 18 people were
not meaningful in developing the radiation protection standard. So while these people
may be heroes [reference to earlier testimony], because this won't happen again,
unfortunately the science (it was not science) did not provide us meaningful
information.
As you heard, no medical follow-up care was planned and none was performed. The
injection of lethal plutonium into healthy individuals showed a reckless disregard for
human life by physicians, unfortunately, and others.
[testimony continues]
|
549.13 | Although some won't call me nice. | HIGHD::FLATMAN | Give2TheMegan&KennethCollegeFund | Fri Dec 29 1995 12:30 | 103 |
| <<< Note 549.12 by 43GMC::KEITH "Dr. Deuce" >>>
-< Can some nice person 80 col this? >-
|
549.14 | | POWDML::HANGGELI | Because I Can. | Sun Mar 16 1997 11:49 | 24 |
|
I just saw a commercial for the new Jim Carrey movie _Liar, Liar_,
which got me thinking: what kinds of lies are ok, and what kinds are
not? Or are they never ok?
Is it ok to tell lies designed to spare someone's feelings - yes dear,
you look wonderful today, not fat at all - or not?
Is it ok to tell lies to spare yourself trouble - let's say you've been
planning to take a long weekend for some time, but you know your
manager would complain if you took Friday off, so you call in sick - or
not?
Is it ok to tell lies that might or might not be harmless - take the
classic stereotype of the wife buying a new dress and hiding it in the
closet for a month or two, so that when she wears it and her husband
asks about it, she can say "Oh, it's not new - I've had it for months!"
- or not?
I won't even get into malicious lies.
How do you then react when, some time later, the person finds out you
lied to them?
|
549.15 | | POLAR::RICHARDSON | Patented Problem Generator | Sun Mar 16 1997 12:39 | 6 |
| Everybody is lied to every day. Telling lies is part of self-preservation
and it's instinctive.
If we went through life telling no lies, we wouldn't last very long.
IMO, FWIW, WGAS
|