Several Ways To Tell The Difference Between Good & Bad Education Research

By Larry Ferlazzo — November 15, 2011

Last week, I asked a question that had been on my mind:

How can you tell the difference between good and bad education research?

Colleagues in the Teacher Leaders Network and I have previously written about the importance of having a certain amount of healthy skepticism about research in the field, and I’ve written about the importance of being data-informed instead of being data-driven.

Even then, though, we need to be careful about which data is informing us, and how it is being interpreted.

I’ve also compiled resources at The Best Resources For Understanding How To Interpret Education Research.

Today, two experienced education researchers have provided guest responses -- Matthew Di Carlo from the Albert Shanker Institute and P. L. Thomas from Furman University. I’m also publishing comments from two readers.

Response From Matthew Di Carlo:

Matthew Di Carlo is a senior fellow at the Albert Shanker Institute, and often writes for the Institute’s highly regarded blog.

I would encourage people not to think of research as “good” or “bad.” The better question, especially in policy research, is: Do the data and methods used to analyze them support the conclusions? Even the simplest analyses can be useful if interpreted properly. Conversely, the most sophisticated studies can be counterproductive if they’re used to draw inappropriate conclusions.

So, though far from an exhaustive list, here are a few questions to ask yourself when reading research papers and reports.

Is there a causal argument being made? You’ve probably heard the phrase “correlation is not causation,” and it’s probably the most important thing to keep in mind when interpreting policy research. For instance, schools that spend more might have higher test scores, but that doesn’t mean the spending is directly responsible for the higher scores. You can be most confident that effects are causal when researchers use experimental methods, including random assignment (like you’d do if you were testing a drug). Statistical techniques, such as models that attempt to “control for” the influence of other measurable factors, can provide tentative evidence of causality. Be wary of any analysis that claims, or even implies, that X causes Y, when that assertion is not directly tested. It’s often little more than speculation.
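To make the spending example concrete, here is a toy simulation in Python. Everything in it is invented for illustration: a hidden factor (community income) drives both per-pupil spending and test scores, spending has no direct effect on scores at all, and yet the two still correlate strongly.

```python
# Toy simulation (invented numbers, not real data): community income
# drives both per-pupil spending and test scores. Spending has no
# direct effect on scores here, yet the two correlate strongly.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50, 10, n)               # hidden confounder
spending = 2 * income + rng.normal(0, 5, n)  # spending tracks income
scores = 3 * income + rng.normal(0, 10, n)   # scores track income only

print(np.corrcoef(spending, scores)[0, 1])   # roughly 0.9
```

A naive reading would credit the spending for the scores; only the design of the study, not the strength of the correlation, can rule out the hidden factor.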

Is the size of the “effect” meaningful? A “statistically significant effect” only means that the association is unlikely to be zero (and it’s not necessarily causal, either). These “effects” are often so small as to be educationally meaningless, or at least not large enough to carry policy implications. Read the part of papers that discusses the size of the effect, which many authors “translate” into more accessible terms. And never rely solely on summaries or abstracts.
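Here is an equally contrived sketch of the significance-versus-size distinction, again in Python with invented numbers: given a large enough sample, a two-point bump on a test whose standard deviation is 100 points comes out "statistically significant," while the standardized effect size (Cohen's d) remains trivially small.

```python
# Toy illustration (invented numbers): a large sample makes a tiny
# difference "statistically significant" even though the effect size
# is educationally trivial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(500, 100, 50_000)  # hypothetical test scores
treated = rng.normal(502, 100, 50_000)  # a 2-point bump, SD = 100

t, p = stats.ttest_ind(treated, control)
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
d = (treated.mean() - control.mean()) / pooled_sd  # Cohen's d

print(f"p = {p:.4f}, d = {d:.3f}")  # p well under 0.05; d around 0.02
```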

Do the findings and conclusions square with prior research? No one study can confirm or deny anything. In addition, policies that work in one context do not necessarily work in others. There is almost always relevant prior evidence. If it conflicts with a paper’s findings, or if results seem to vary by context or location, those are warning signs that any conclusions are, at most, tentative. Most papers have a literature review, which is a useful starting point.

Has the research undergone a professional peer review process? Papers and reports reviewed and approved by experts in the field have undergone a “quality control” process, and you can have more confidence in their methods and conclusions. Most commonly, this includes papers published in peer-reviewed academic journals. Good research organizations, such as Mathematica and RAND, have internal review processes.

My final and perhaps most important recommendation is to keep at it, and muddle through, even if you think it’s “over your head.” Don’t be frustrated. The more papers and reports you read in a given area, the better equipped you’ll be to interpret them properly. Consuming research, just like research itself, is a difficult, cumulative process. Progress can be slow, but there is no chance of failure if you persist.

Response From P.L. Thomas:

P. L. Thomas is an Associate Professor at Furman University and taught high school English for 18 years before moving to higher education. His Ignoring Poverty in the U.S.: The Corporate Takeover of Public Education will be published this fall by Information Age Publishing.

His response is a summary he wrote of a longer piece titled A Primer On Navigating Education Claims.

For all stakeholders in U.S. public education, debates about education and education reform can be taxing, confusing, and ultimately circular, resulting in little that could be called productive.

Here, then, I want to offer some guiding questions for navigating the education debate, based on my own experience as an educator for nearly three decades (almost two decades as a high school teacher and another decade in higher education/teacher education) and my extensive work as a commentator in print and online publications.

When you confront claims about education, and the inevitable counter-claims, what should you be looking for?

• Are the claims and counter-claims framed within the perspective of the person making them?

• Are educational claims framed as “miracles”?

• Are the claims of educational quality expressed in terms of correlation or causation?

• Do the claims specify which student populations are being addressed?

• Do claims of education success by non-public schools address issues of scalability, selection, attrition, stratification/re-segregation of students, and out-of-school factors?

• Do counter-claims made about education commentaries start with fair and accurate characterizations of the positions being debated?

• What are the experiences and credentials of the person making the claim?

• Are claims supported with evidence -- citations, hyperlinks, or both?

Just as our public schools appear to be mired in conditions that never change, our public debates about education and education reform suffer from insular and unproductive cycles of monologues.

Our public schools need, and our children deserve, genuine school reform -- reform that is nuanced and complex -- and without the same nuance and complexity in an authentic dialogue about education, we are unlikely to reach the reform we need.

Responses From Readers:

Paul Bruno:

One thing I’d say to look out for: poorly-defined or poorly-selected comparison or control groups. A lot of educational interventions look impressive until you realize that in terms of comparative effectiveness they actually do quite poorly.

I’m guessing it’s a widespread problem, but I see this with a lot of research on “metacognitive strategy” and “discovery learning” interventions, both of which look good compared to nothing but fare poorly against, e.g., vocabulary/content instruction and direct instruction, respectively.
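A minimal sketch of this point, with invented numbers: the verdict on the same hypothetical intervention flips depending on whether the comparison group received nothing or received a stronger alternative.

```python
# Toy example (invented numbers): the verdict on an intervention
# depends entirely on what it is compared against.
import numpy as np

rng = np.random.default_rng(2)
no_instruction = rng.normal(500, 50, 1000)      # baseline: nothing
intervention = rng.normal(515, 50, 1000)        # e.g., a discovery-learning unit
direct_instruction = rng.normal(530, 50, 1000)  # stronger comparison condition

print(intervention.mean() - no_instruction.mean())      # ~ +15: "it works!"
print(intervention.mean() - direct_instruction.mean())  # ~ -15: it underperforms
```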

One of my most vivid memories of grad school in education was one of the most esteemed faculty members in the department disparaging controlled experiments because, basically, they’re hard to do in educational contexts. And it’s true, they’re hard to do! But research that doesn’t bother even trying should set off our alarm bells.


ssilvius:

Good research is extremely careful about definitions. It does not attempt to appear value-neutral, but rather makes its values explicit and apparent.

When qualitative, it lays out a careful narrative, highlighting exceptions and subtleties, analyzing linguistic and observational information in detail.

When quantitative, it makes the structure of the study EXTREMELY transparent. It provides rationales for choices of categories, descriptors, survey questions, test hypotheses, and all the places that values can sneak into data without being noticed. It does not simply mine the data and then confirm itself, but uses separate data or observation to develop hypotheses and then runs tests to support or reject them. It reports effect sizes, gives access to sources and raw data, and includes enough detail to be replicable. It is given a thumbs up by Shanker Blog.

Good research is typically not funded by think tanks or foundations but is peer reviewed (though this is hardly a guarantee of quality). It does not generalize its findings except when suggesting further work to do. It recognizes the inherent complexity of social research, particularly in a field like education, and does not attempt to reduce human beings and their social systems to easily manipulated bytes--sound or data.

Most of all, good education research does not attempt to provide absolute answers, but rather attempts to be interesting and relevant to people who do education. The best research is often small in scope, big in ideas. It starts or continues a discussion and challenges/advances the way we think about our praxis.

Please feel free to leave a comment sharing your reactions to this question and the ideas shared here.

Thanks to Matthew, P.L. Thomas, Paul, and ssilvius for sharing their responses!

Consider contributing a question to be answered in a future post. You can send one to me at lferlazzo@epe.org. When you send it in, let me know if I can use your real name if it’s selected, or if you’d prefer to remain anonymous and have a pseudonym in mind.

Anyone whose question is selected for this weekly column can choose one free book from a selection of twelve published by Eye On Education.

I’ll be posting the next “question of the week” on Friday.

The opinions expressed in Classroom Q&A With Larry Ferlazzo are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.