Three Questions to Guide Your Evaluation of Educational Research
Last month, I participated in a webinar with Sarah Hannawald, Executive Director of ATLIS (the Association of Technology Leaders in Independent Schools), to discuss how technology leaders can better understand academic research. Our conversation stemmed from a common challenge: how to reply when a parent, colleague, or administrator opens with "the research says..." During the webinar, one participant commented that she still found the prospect of critically analyzing academic studies a bit daunting. In trying to devise a strategy to make the process more manageable, I started thinking about the power of one question: who?
Who Wrote the Study?
Years ago, a manager tasked me with becoming a Human Systems Integration expert. At one point, I stumbled on the most fantastic paper - articulate, cogent, and written by a college undergraduate! Not something that I could cite.
When you first find a research study, do a quick background check on the author. Is this person a professor, a researcher, or a student? Also, who paid for the study? Perhaps the researcher works for a company that uses the research to promote a product. Particularly in educational technology, you have to ask critically whether the company published all of its results or only the positive ones.
Who Published the Research?
In my first year of doctoral studies, professors only allowed us to read peer-reviewed academic journals. That way, we knew that a panel of scholars had already reviewed the studies for credibility, reliability, and validity. However, government agencies, think tanks, and nonprofits also publish research. These studies - known as grey literature - often undergo a thorough review process and typically come from highly credentialed authors, yet they need to be read critically for two reasons.
First, remember that the interests or politics of an organization can shape the type of research it chooses to publish. Second, many organizations present not only original, empirical studies - meaning that the researchers design a study and then measure what happens - but also synthesis reports, in which the authors analyze existing studies and combine their findings to build an argument.
Take Jobs for the Future as an example. I have used several of their reports in my dissertation. Highly credentialed scholars wrote these papers, and they incorporate credible, reliable, and valid research studies. And yet, before making any judgments based on these synthesis papers, it is important to track down the original sources. For example, in my first year, I read a report that cited a statistic from a study conducted by the Gates Foundation. When I tracked down that study, I could not find the same percentage; the report's first author had calculated it based on an inference. If I had then drawn a third-hand conclusion from the synthesis report rather than the original study, it would have been a bit like playing scholarly "telephone." In other words, you have to go to the source.
Finally, we have to be cognizant of "research" presented in editorials. A few weeks ago, a New York Times editorial by a university professor about banning laptops in classrooms sparked a lot of conversation. Here was a highly credible and credentialed author, in a national newspaper, citing empirical studies. Despite these positives, however, it is important to remember that the editorial section remains an opinion forum and not "research." To be fair, this blog is an opinion forum that happens to cite more scholarly research than most. Maybe you find me credible, but like all blogs, I should be read critically.
Who is the Audience?
Until recently, I did not understand why our professors asked us to note the intended audience of a paper, but then I considered this question in light of the editorial mentioned above. The author assumed that educators - as well as the general public - might read her work, so she wrote in a way that would engage that audience. In her article, however, she cited an original research study conducted at the United States Military Academy. If you track down and read the entire study, you will find that its last sentence states, "Given the magnitude of our results, and the increasing emphasis of using technology in the classroom, additional research aimed at distinguishing between these channels is clearly warranted" (Carter, Greenberg, & Walker, 2017, p. 28).
In other words, Carter, Greenberg, and Walker do not appear to have intended for their work to be generalized to the entire education community or the world in general. In fact, they further argue that their study may not apply within contexts that promote active use of technology for instruction. Especially with editorials, blog posts, and opinion pieces that cite studies to frame an argument, it is critical to go back to the source and see if the original authors intended to make definitive statements to the public or recommendations to scholars for future study.
Scholarly research aims to present findings in an unbiased manner so that readers can draw their own conclusions through analysis and synthesis. Evaluating it can seem like an overwhelming task. However, consider that moment when a colleague or parent runs up to you and exclaims, "research says..." Whether the work appears in a peer-reviewed journal or is cited in an essay, blog, or editorial, start by determining who wrote the article, who published it, and who the intended audience might be.
Carter, S. P., Greenberg, K., & Walker, M. S. (2017). The impact of computer usage on academic performance: Evidence from a randomized trial at the United States Military Academy. Economics of Education Review, 56, 118-132. Retrieved from https://seii.mit.edu/wp-content/uploads/2016/05/SEII-Discussion-Paper-2016.02-Payne-Carter-Greenberg-and-Walker-2.pdf