Classroom Technology

Take the Long View When Evaluating 1-to-1 Computing Efforts, Researcher Says

By Michelle R. Davis — February 12, 2019 6 min read

More K-12 districts are moving toward 1-to-1 device programs, but determining whether such initiatives are having a positive impact on student learning might take a while.

A researcher who studied a well-known 1-to-1 program in the Mooresville Graded School District in North Carolina said it may take more than a handful of years to determine whether ed-tech projects are working. Educators and researchers who evaluate programs after just one or two years, as much of the existing ed-tech research does, may be tempted to scrap initiatives that show no significant impact before they have had a chance to demonstrate their effectiveness.

“We shouldn’t necessarily give up on programs that have null findings in the short term,” said Marie Hull, an assistant professor of economics at the University of North Carolina Greensboro who co-authored the 2018 paper “One-to-One Technology and Student Outcomes: Mooresville’s Digital Conversion Initiative” with social scientist Katherine Duch, a principal at One Minus Beta Analytics.

The researchers looked at Mooresville’s program over eight years of planning and implementation and compared the district’s progress with that of similar, neighboring North Carolina districts. In the report, first published online in the journal Educational Evaluation and Policy Analysis in September 2018, the researchers examined test scores along with survey data.

They examined math and reading achievement for students in grades 4 to 8, but also looked at the program’s impact on student behavior, such as time spent using digital learning devices and student absentee rates. The researchers found that in the short term, test-score improvements in math and reading were statistically insignificant compared with other districts. But in the “medium term,” four and five years after implementation, math scores in Mooresville improved in “meaningful” ways relative to neighboring districts: the equivalent of about 3.2 extra months of math learning. Reading was more of a mixed bag: in the fourth year, the 1-to-1 program had no significant impact compared with other districts, but in the fifth year it had a slightly positive effect.
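As a rough back-of-the-envelope check on that months-of-learning figure (the conversion benchmark here is an assumption drawn from commonly cited effect-size norms, not a number reported in the study): if students in these grades typically gain about 0.36 standard deviations in math over a nine-month school year, then the 0.13-standard-deviation effect Hull describes below works out to

\[ 0.13\ \text{SD} \div 0.36\ \tfrac{\text{SD}}{\text{year}} \times 9\ \tfrac{\text{months}}{\text{year}} \approx 3.2\ \text{months} \]

of additional learning.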

As more districts move toward such programs, Hull’s emphasis on taking the long view becomes even more important. A 2018-19 infrastructure study from the Consortium for School Networking found that 40 percent of K-12 technology leaders said their school systems had 1-to-1 computing programs, and 43 percent said they expected to reach that level within three years.

Education Week Contributing Writer Michelle Davis recently interviewed Hull to help educators, policymakers, and researchers better understand what it will take to evaluate ed-tech programs in truly meaningful ways.

Why is this research different?

The data I used covered eight years, including five years of post-implementation data. That’s unique. When I did my literature review, the longest study I found followed a program for three years, and most covered only one or two years. The longer horizon does make a difference.

In our findings, the short-term impacts are modest, even statistically insignificant, but when we go out four or five years, we see positive impacts that are meaningful. The evidence on reading scores is mixed. For math, scores improved in the first couple of years after the program was implemented, but not by much; the gains weren’t statistically significant. Over a medium-term horizon, four or five years post-implementation, we see an improvement in math scores of 0.13 standard deviations, and anything at or above 0.10 is noteworthy. Reading was similar to math in that the positive impacts in the first couple of years were small and not statistically significant. But over the medium horizon, there was one year with a positive effect and one year where the effect was statistically insignificant.

You examined factors beyond test scores. Why is that important?

Yes. We had some results on time use. The first was how often students reported using a computer, and that went up a lot, which was good: if you give students a computer, you want them to be using it. We also looked at time spent on homework and time spent reading for fun. The homework results were interesting, because we didn’t find any increase in time spent on homework overall. Students kept their homework time constant but spent more of it on a computer. For free reading, we found a decrease in the time students reported reading for fun. That could explain why we see impacts in math and not as much in reading, but those negative effects on free reading were small, amounting to about eight fewer minutes per week.

What lessons can other districts draw from this research?

The positive impacts were really encouraging. Even more positive is that the program was essentially cost-neutral for the district. The district did get a donation to start, but it mostly funded the program by reallocating items in its budget. Also, in Mooresville, it wasn’t just about the laptops; they used the 1-to-1 program as a means to an end. They did a lot of professional development with teachers and tried to accompany the initiative with a culture change.

Other school districts shouldn’t think they can just pass out laptops or tablets and that’s going to make a difference. You have to work with teachers ahead of time and get buy-in from them and from the community.

Other districts, notably Los Angeles, have not had the same success with 1-to-1 programs. What made Mooresville stand out?

The Mooresville case is a best-case scenario, kind of a proof of concept. A 1-to-1 program has the potential for this upside, but when you dig into the details, it wasn’t just the laptops in Mooresville. They did a lot of work beforehand and continued to support the program.

What can the research community learn from this report, particularly around educational technology?

I’d go back to this idea of short-term versus medium- or longer-term horizons. It shows that we shouldn’t necessarily give up on programs that have null findings in the short term. I get why people do: budgets are limited, and you want to invest more resources in things that are working initially. If I had written this paper based on only two years of results, I’d have said the program wasn’t very effective, and that even if it was cost-neutral budget-wise, it took up a lot of people’s time. It was really only four or five years post-implementation that I saw positive effects.

Are there other ed-tech areas researchers should be taking a closer look at?

With technology, the effects can be all over the place, because it depends on what the computer is replacing and how people are using it. What we still don’t know much about is mechanisms. I’d like to see more work surveying teachers on the practices they’re using and surveying parents on the rules around using a computer at home. That way we can uncover the key elements of what’s working and what isn’t.


A version of this news article first appeared in the Digital Education blog.