
Research “Says” - Or Does It?

By Beth Holland — August 29, 2017

Research says...

After two years of doctoral studies, that phrase makes me cringe. If I see mention of a research study in a blog article or tweet, I feel compelled to corroborate and confirm any claims. However, until recently, I neither really understood how to critically read a research study nor how to debate its findings. When an author asserts a claim, how can we - as educators and consumers of that information - decide whether to accept it as “true”?

Three Criteria for Assessing Any Study

In their seminal work, Carmines and Zeller (1979) explain that every study should be assessed according to three criteria: credibility, reliability, and validity. To determine whether a study is credible, we need to look not only at the biographies of the authors but also at any sponsors of the research. Does a particular study extol the value of a particular product? Look to see if the parent company paid for the study! Similarly, if research from an organization claims that a particular strategy or technique should be used in the classroom, look to see if large amounts of grant money might be available in that particular area. Though these discoveries do not necessarily discredit a study, they should inform your analysis of the findings.

The next criterion to consider is reliability. A reliable study implies not only that the results would be reproducible but also that the measures and survey instruments have been assessed for their reliability. For example, think about a bathroom scale. It should indicate the same weight every time you step on it. The same should hold for research studies. Did the authors report the reliability of their measures and explain how they made that determination? If they make a claim such as “40% of survey respondents indicated an improvement,” then you would want to make sure that the survey consistently and accurately measured that improvement.
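Researchers often put a number on this consistency. One common statistic is Cronbach’s alpha, which estimates how consistently a set of survey items measures the same underlying trait. Here is a minimal sketch in Python - the formula is the standard one, but the survey responses are invented purely for illustration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Estimate internal-consistency reliability (Cronbach's alpha).

    items: a (respondents x items) matrix of survey responses.
    """
    k = items.shape[1]                          # number of survey items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 people answering 4 Likert-scale items
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

An alpha near 1 suggests the items hang together as a consistent instrument; a low alpha would be a warning sign that the survey may not be measuring what its authors claim.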

Beyond credibility and reliability, are the author’s claims valid? In other words, did the study actually do what it claims to have done? For research to be valid, it needs to show not only that the study measured what it claimed to measure but also that the results can support a causal claim. In other words, did X really cause Y to occur?

Assessing the OECD Report

I think these concepts first really started to make sense to me when I applied them to the 2015 report from the Organisation for Economic Co-operation and Development (OECD). The Students, Computers and Learning: Making the Connection report asserted that technology had no measurable effect on student performance. Further, it claimed that many students with access to technology performed worse on math and reading assessments. When I first read those results, I wanted to argue with them. As an edtech advocate, I wanted to insist that if the teachers had used the technology more effectively, then the students would have performed better. However, now that I have a better understanding of how to assess research, I realize that how teachers and students actually used the technology had nothing to do with this study.

If we go back to the criteria from Carmines and Zeller (1979), then we can have a more critical discussion. First, is this research credible? The OECD is a reputable, international organization, and it did not receive funding for the study that might have influenced the results. So, yes. Next, student achievement was assessed via the Programme for International Student Assessment (PISA) - a reliable measure for assessing student learning in reading, math, and science. However, is PISA a valid way to determine the effectiveness of access to technology? Maybe?

A valid study shows causation: if some condition occurs, or some program happens, then some other outcome should take place as a result. The OECD report intimated that increased access to technology did not necessarily correlate with increased PISA scores. However, a statement about correlation is not a statement about causation.
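A toy simulation makes that distinction concrete. In the sketch below (entirely invented numbers, not OECD data), a hidden factor - call it school resources - drives both technology access and test scores. The two variables end up strongly correlated even though neither causes the other:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000  # hypothetical students

# A hidden factor (e.g., school resources) influences both variables.
resources = rng.normal(size=n)

# Technology access depends on resources...
tech_access = 0.8 * resources + rng.normal(scale=0.6, size=n)

# ...and so do test scores; tech_access plays no causal role here.
scores = 0.8 * resources + rng.normal(scale=0.6, size=n)

r = np.corrcoef(tech_access, scores)[0, 1]
print(f"correlation = {r:.2f}")  # strong correlation, zero causation
```

Looking at the correlation alone, you could not tell whether access raised scores, lowered them, or - as in this simulation - did nothing at all.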

Leviton and Lipsey (2007) would call this a “black box situation.” They explain that most complex problems - such as student achievement - cannot be explained by a simple cause-and-effect statement. Instead, there needs to be an underlying theory to explain the internal processes that may not be visible. Picture an input flowing into a black box and an output emerging on the other side.

So if technology access constitutes an input, and PISA scores serve as the output, what happened inside the black box of the classroom? To answer that question would require a completely different study.

Oftentimes, research studies are treated as “truth” based solely on a discussion of inputs and outputs. However, we need to critically examine these relationships to determine whether or not the authors can make a valid causal claim. Much as we encourage our students to be critical consumers of information, we, as educators, need to be critical consumers of educational research.

References

Carmines, E. G., & Zeller, R. A. (1979). Reliability and validity assessment. Thousand Oaks, CA: SAGE Publications. doi:10.4135/9781412985642

Leviton, L. C., & Lipsey, M. W. (2007). A big chapter about small theories: Theory as method: Small theories of treatments. New Directions for Evaluation, 2007(114), 27-62. doi:10.1002/ev.224


The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.