
Let’s Not Jump to Conclusions about Personalized Learning...Yet

By Contributing Blogger — November 07, 2017

This post is by Rebecca E. Wolfe, Associate Vice President, Jobs for the Future.

Over the summer, the release of the latest installment of a report on personalized learning by RAND Corporation researchers was met with a number of headlines in major education press outlets, including "Time to Adjust Expectations on Personalized Learning," "Personalized Learning Boosts Math Scores, New RAND Study Finds--But Scaling Is a Challenge," and "When a Hyped School Model Proves Difficult to Replicate." Folks in many corners of the Twitter and blogosphere started gasping about the counter-intuitive findings.

Reading these, my inner researcher started Hulking out, wanting to stomp to the top of a building and yell "Consider what's being measured before making potentially harmful and broad pronouncements about an entire reform effort!!!" Instead, I took time to process, read, and synthesize the questions we should be raising about these studies, the subsequent headlines, and, possibly, personalized learning research in general:


  1. How is personalized learning being defined?
  2. How is it being interpreted and implemented by the schools studied?
  3. Where are they in their implementation trajectory?

Others have already developed very thoughtful and careful breakdowns of these first three questions. Among them is the ever-insightful Julia Freeland-Fischer, whose article, "Why Personalized Learning is Hard to Study," written a few days after the study's release, points out that the field is fairly nascent and encompasses a wide range of personalized learning practices, models, and purposes, which makes any stark interpretation of the study (pro or con) problematic.

It's important, however, that we look more closely at the first three questions alongside a fourth: which schools are being studied? The RAND study series was commissioned by the Bill & Melinda Gates Foundation, and the schools being observed lean heavily on computer-adaptive testing and student-information systems that Gates--at the time--supported as a means (and, unfortunately, sometimes an end) to personalize learning. Additionally, these schools were early in their implementation trajectory, when the influence of the shiny digital tool is likely to be strongest and the work on the complex human and relational aspects, the weakest. In short, although the researchers define "personalized" in terms that sound similar to how Jobs for the Future's research-advocacy-and-support initiative, Students at the Center (and many others in the field), uses the term--i.e., student voice and choice, and an emphasis on the impact of personal relationships on learning--there is a disconnect between that definition and the actual implementation and execution described in the study.

As a result, RAND's study reports exactly the kinds of mixed and/or mediocre findings we'd expect to see, given that the schools largely have not embraced the full, student-centered, research-based definition of personalization (summarized here). The study found that "some more-difficult-to-implement aspects did not appear to differ from practices in schools nationally, such as student discussions with teachers on progress and goals; keeping up-to-date documentation of student strengths, weaknesses, and goals; and student choice of topics and materials." Without those aspects implemented with at least as much attention and care as the rollout of computer software training, (so-called) personalized schools showed relatively small or insignificant gains on a number of critical markers (other than math, a subject that lends itself well to learning on a computer). Right! We agree.

Ironically, this report summary largely supports our perspective, as well as the research showing that learning won't deepen or accelerate in the absence of human interactions. Attention needs to be paid to how the learning process occurs: students should be given more authentic voice and human support, with helpful digital platforms used in developmentally appropriate ways. And let's not forget one of the most disheartening and potentially harmful aspects of this. Without those human elements and student voice, the settings RAND is studying are simply reproducing the same assimilative sit-there-and-be-quiet kind of education we've forced down the throats of low-income students and students of color for decades. But now they get to do it on computers.

Thus, the really important study will be the next one, after schools have improved on or implemented the "difficult-to-implement" aspects of personalization. And in fact, we know from Andy Calkins's blog post, published after the RAND release, that that's exactly what these schools are doing. Of course, that's not what's being reported in many of the headlines; that level of nuance doesn't tend to sell newspapers or get retweeted. It is, however, a really important level of nuance to keep discussing. The last thing we want is for the takeaway to be, "Research says personalized learning isn't showing much impact other than a little bit in math; therefore, let's go back to doing what we were doing before, but more."

While the headlines lament how difficult personalized learning is to scale, we say hooray. We shouldn't be scaling more schools that substitute a student-profile database and computer quizzes for the hands-on, messy, human-centered, hard--and ultimately far more lasting and meaningful--kind of personalized learning that is actually worth scaling and sustaining.

The opinions expressed in Learning Deeply are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.