The rigor of scientifically-based research with doctored data could be mortis
by Richard P. Phelps (2002)
Let’s assume for the moment that the Bush Administration’s interest in
“scientifically-based” “what works” research is sincere. (I have some
doubts.) At best, unfortunately, its plans address only one of the
three underlying sources of low-quality education research: poor
methodology. Given that, no one should expect a dramatic increase in
the overall quality of education research anytime soon, no matter how
successfully the Administration implements its programs.
The “Awful Reputation of Education Research”
Many
university-based researchers thumb their noses at their campus
education schools. They appreciate the revenue these cash cows provide,
with their captive audience of potential teachers who must go to
university to be credentialed before they can work for a living. But,
typically, they also regard education research as a wide notch below
the rest of the academy in quality and rigor. They may say
that education is a “synthetic” discipline with no theory or methods of
its own (i.e., education researchers borrow from psychology, sociology,
history, and so on). They may say even worse.
Mind you, I think that some education research is of awful
quality; but, I don’t think that is because education is a “synthetic”
or “derivative” discipline, nor because education researchers are
ignorant of good research methodology. Answer this quiz: think of the
two most popular and influential advances in social science research
methodology of the past couple of decades and then identify where “on
campus” their progenitors resided. If you are thinking the way I want
you to, you guessed “meta-analysis” and “multi-level modeling” (a.k.a.
hierarchical linear modeling). Where “on campus” did the main champions
of these new, wonderful research techniques reside? ...in education
schools. My point is that education school professors are hardly the
ignorant folk some other professors make them out to be.
So, granted, one of the three major sources of poor quality in
education research is a poor understanding of research methodology, but
only some of the time and on the part of some researchers. The other
two major sources of poor education research quality are, at best, only
tangentially addressed by the Bush Administration initiatives. They
are: (1) censorship and (2) dishonesty.
Dishonesty
There, I said it. I used the “d” word.
Few are willing to. Most education researchers must know that some
education research is dishonest. Some who know, however, may consider
the dishonesty justified. They may reason that dishonesty for a good
cause–more money or power for the “good guys”–is a necessary means to a
noble end. Others who know are simply not willing to label the
dishonesty for what it is, because to do so would be impolite. At most,
they might redo a dishonestly-conducted analysis and, in doing so,
identify their effort as a “re-analysis.” “Re-analysis” implies only
that one is conducting the research study in a different manner than
that used previously; it casts no aspersions as to motive. Tiptoeing
through doctored data with euphemisms such as “re-analysis,” however, is
at least more informative than the far more common alternative:
accepting the results of dishonest research as valid.
In most cases, education research dishonesty is simply
ignored, even by those who know it for what it is. To accuse a
professional colleague of dishonest research can be tantamount to
professional suicide, particularly if the dishonest researcher is well
connected and well regarded. Expose the fraud, and one probably ruins
any chance one might once have had for a high profile, successful
career in education research. The whistleblower’s fate is no more
appealing inside the education establishment than inside the average
corporation.
Thus, many, if not most, dishonest research results float
around as flotsam on the sea of education facts, cited endlessly in
other research works, motivating new research questions, offered as
evidence for particular policy prescriptions in public or in the
courts, and written up by journalists as the latest findings of
objective science, equivalent in quality to discoveries in cancer
research.
Unfortunately, a standard meta-analysis does the same, casting
a drift net into the flotsam sea and hauling in whatever it snags.
Meta-analysis, one of the methodological techniques held up as
something of a solution to the perceived problem of poor quality in
education research, reviews the basic, reported statistical parameters
of many published studies on a single topic and, in summarizing the
mass of data, arrives at an overarching conclusion of what “all the
research” on that topic tells us. Meta-analytic researchers record
arguably the most important information reported by each reviewed
study: the number of cases or sample size, the variables incorporated
in the analysis, the method of analysis, the size of the effect
reported, and so on. Essentially, meta-analysis reviews the statistical
“box score” reported by every relevant study.
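To make the “box score” idea concrete, here is a minimal sketch, in
Python, of the arithmetic at the heart of a simple fixed-effect
meta-analysis. The study labels, effect sizes, and sample sizes are
invented for illustration, and sample size stands in for the
inverse-variance weight a real meta-analysis would compute.

    # A toy fixed-effect meta-analysis over invented "box scores."
    # Each tuple: (study label, reported effect size d, sample size n).
    studies = [
        ("Study A", 0.42, 120),
        ("Study B", 0.35, 300),
        ("Study C", 0.58, 45),
        ("Study D", 0.29, 210),
    ]

    def pooled_effect(studies):
        # Average the reported effects, weighting by sample size
        # (a rough stand-in for inverse-variance weights).
        total_n = sum(n for _, _, n in studies)
        return sum(d * n for _, d, n in studies) / total_n

    print(f"Pooled effect: {pooled_effect(studies):.3f}")
    # The pooling machinery consumes only the reported numbers; nothing
    # here can tell whether any study's d or n was honest or fabricated.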
Meta-analysis, however, assumes that every study reviewed was
conducted and reported honestly. It assumes that the data have not been
doctored, the numbers fudged or made up, the literature review censored
or deliberately kept incomplete, and so on. A good meta-analysis will
screen the studies it reviews and toss those of poor quality
overboard. For the most part, however, the quality judgments are made
of the methodology and process, not the data. A well-conducted study
that incorporates doctored data, or that reports doctored results, is
likely to be kept on board and processed as a good quality catch.
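A small illustration of that limitation: a typical quality screen
inspects how a study says it was conducted, not whether its numbers
are genuine. The field names, thresholds, and figures below are
hypothetical.

    # A hypothetical methodology-based quality screen: it judges each
    # study by its reported design and sample size, as many screens do.
    studies = [
        {"label": "Study E", "design": "randomized", "n": 250, "effect": 0.31},
        {"label": "Study F", "design": "convenience", "n": 18, "effect": 0.90},
        # Study G reports an impeccable design, but its effect is doctored.
        {"label": "Study G", "design": "randomized", "n": 400, "effect": 1.25},
    ]

    def passes_screen(study):
        # The screen sees only the reported methodology...
        return study["design"] == "randomized" and study["n"] >= 50

    print([s["label"] for s in studies if passes_screen(s)])
    # ['Study E', 'Study G']: the weak study is tossed overboard,
    # but the well-dressed doctored one sails through.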
Unfortunately, my experience in reviewing, auditing, and checking
the details provided by studies of the education topics with which I am
most familiar is that a shockingly large proportion
of studies in the education research literature are fraudulent (i.e.,
something very basic about them, integral to their results, is
demonstrably erroneous, and probably intentionally so).
Fraud in education research is not likely to be uncovered through
meta-analysis, however, because meta-analysis does not dig far enough
into the details; nor is it likely to be uncovered by even the smartest
researchers in the world with impeccable, independent reputations.
Indeed, meta-analysis can
incorporate and legitimize the results of fraudulent research,
extending its influence more deeply into public policy.
Out of naivete or convenience, we tend to assume that research
in the “peer-reviewed,” “scientific,” “academic” literature is
conducted honestly and reported on the up-and-up; in education, at
least, such is often not the case. The first time I uncovered
fraudulent research, I was surprised. I was reading a research report
written by an education professor on a tiny subtopic of research on
standardized testing, a subtopic in which probably fewer than half a
dozen scholars are expert. It was a prototypical example of lying with
statistics--numbers were left out if they didn't serve the conclusion,
data were altered, costs were made up, benefits were ignored, numbers
were labeled with misleading descriptions, definitions were changed
surreptitiously, supportive research was cited that did not contain the
evidence claimed, and solid research that did not support the preferred
conclusion was ignored. Very few readers would have had a deep enough
or broad enough familiarity with the topic to notice these problems,
though. To this day, the results of that research have been cited
widely and accepted as fact by thousands.
At the time, I assumed that it must represent an exceptional
case. Then I read more research reports with doctored data and
surreptitiously altered definitions of terms. I realized that I wasn't
reading aberrant research; it was actually fairly common. Nor, I am
convinced, was much of it done incompetently. This was research
conducted in a deliberate way by very intelligent scholars, some
considered tops in their field. It was simply conducted dishonestly and
reported misleadingly.
There are many different ways to commit research fraud, more
than the space here allows me to list. I describe some in detail in
chapter 2 of the book Kill the Messenger: The War on Standardized
Testing (Transaction, 2003). Darrell Huff’s classic How to Lie with
Statistics, about the misleading use of data in advertising, is equally
applicable to some education research techniques.
In many instances, the perpetrators of education research fraud can
duck any accusations through “plausible deniability” claims. In other
instances, the fraud is so blatant (blatant, that is, to anyone who
takes the time to look at the details), that accusations could not
possibly be ducked. The beauty of the current system is that fraud
perpetrators are rarely accused, and even when accused, it is rarely
done in any public way. Indeed, some consider it impolite, or even
rude, to point out, or even identify, the shortcomings in another’s
work--that’s an ad hominem attack. Reticence to identify
fraudulent research probably is polite, but it also shields dishonesty
and leaves the public believing falsehoods.
Censorship
In addition to dishonesty, meta-analysis will not uncover the other
major source of poor quality in education research–censorship. As with
dishonesty, meta-analysis is more likely to exacerbate censorship’s
deleterious effects than to remedy them.
Meta-analysis assumes not only that all the research “out there” in the
journals is sincere, but that it is a representative sample of all the
honest research that can be done on a topic. Meta-analysis, more or
less, assumes that its drift net will pull in something akin to a
complete universe of the relevant research or a representative sample
of it.
In other disciplines of inquiry, where researchers are truly
independent and have no vested interest in the outcomes of their
research, those assumptions may hold. In education, where few
researchers are independent and most have a vested interest in the
outcomes of their research, these assumptions do not hold. Assume that
the research literature on any given education topic is valid and
representative, and the meta-analytic conclusions are likely to be just
as sympathetic to vested interest doctrine as most education journals
are.
Many, if not most, education journals prefer studies that
reach certain conclusions; anyone who truly believes all of them to be
neutral and objective is being fairly naive.
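A toy simulation suggests how much a censored literature can distort
the drift net’s catch. All of the numbers below are invented: honest
studies are drawn around a true effect of zero, and only results
favoring the preferred conclusion get “published.”

    # A toy simulation of censorship's effect on pooled conclusions.
    import random

    random.seed(1)
    TRUE_EFFECT = 0.0

    # 200 honest studies scattered around the true (null) effect.
    all_studies = [random.gauss(TRUE_EFFECT, 0.3) for _ in range(200)]

    # A censored literature: unfavorable results never reach print.
    published = [d for d in all_studies if d > 0.1]

    def mean(xs):
        return sum(xs) / len(xs)

    print(f"Pooled over all studies:    {mean(all_studies):+.3f}")  # near zero
    print(f"Pooled over published only: {mean(published):+.3f}")    # well above it
    # A meta-analysis that nets only the published literature recovers
    # the second number, and reports it with full statistical rigor.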
Research pecking order
In addition to the
aforementioned reasons why fraudulent research is well tolerated in
education, I would argue that there is probably one more. Uncovering
fraudulent research requires some digging. “The devil is in the
details,” as they say. Digging in data is dirty, menial labor. Really,
it is more like forensic accounting than it is like anything taught in
social science PhD programs. Few education researchers, probably, would
even know how to begin to approach the task.
Even fewer would think it worth their time in terms of career
advancement. There is little professional glory to be gained by digging
in someone else’s dirt. Glorious researchers make new discoveries and
employ the newest whiz-bang methodological tools. Glorious researchers
look ahead, advance the horizon of knowledge, and cut the edge with the
most sophisticated laser slicers. They don’t look backwards and sift
through others’ refuse with spades and hoes.
Indeed, the most glorious researchers may have no experience
whatsoever with data collection or management, as that rather
inglorious work is usually relegated to technicians and bureaucrats.
Often, they simply assume that the data provided them by others must be
valid and reliable. Many do not understand the data well enough to
discern if they are not, or if they have been doctored.
Research glory is gained only by the final analysis, by making the data talk (even if what the data say is a fib).
What will “scientific rigor” applied to education research tell us?
I hope that I am wrong, and it wouldn’t be the first time. But, I
conjecture that the Bush Administration initiatives will tell us little
new in the end. To fight the devil, one must face him where he is. In
education research, he lives mostly in the details. Meta-analysis may
not find him. The most famous and prominent researchers may not find
him either, unless they are willing to yank on their hip boots and wade
in the murky morass of detail for some extended period of time.
Probably, however, they would be neither famous nor prominent today if
they had spent any substantial amount of career time in the patient,
inglorious, and time-consuming task of sifting details for research
fraud.
It is worth noting, however, that the most prominent and most
highly-paid Wall Street analysts were fooled by the crooks who
assembled the financial box scores for them at Enron, Global Crossing,
Kmart, Qwest, Phar-Mor, and WorldCom. It was the less famous, less
well-paid, fastidious, green-eyeshaded Morlocks who discovered the
frauds, after patiently sifting through the refuse.