
Nudging, Priming, and Motivating in Blended Learning

By Justin Reich — April 16, 2014

Last week, I spent two days at a blended learning workshop hosted by the Dean’s Office at the Harvard Graduate School of Education and the Center for Education Policy Research (CEPR). With teams of researchers, funders, school leaders, and software developers, we proposed and pitched a series of potential research studies to advance blended learning.

Earlier this week, I wrote about one major theme of the conference: that the data from personalized learning software is useless for teachers. Today, I write about another major theme:

Educational Software as the Vector for New Psychology-Based Innovations

In a number of sectors of society, psychological research is unveiling ever more ways to get people to do things without bothering them so much. Richard Thaler and Cass Sunstein’s book Nudge provides a pretty good overview of this field: there are all kinds of subtle ways to change people’s behaviors. For instance, in states that switch organ donation registration from opt-in to opt-out, gazillions more people agree to be organ donors. Really, do whatever you want with our organs--leave them in, take them out--just don’t make us check more boxes.

There are now legions of psychologists and behavioral economists running lab-based experiments that change people’s behavior in ways with long-term implications, using cheap, quick nudges, primes, prods, and pokes. Many of these approaches were brought to bear in the 2012 presidential election, as documented in the book The Victory Lab. They are coming soon to a classroom or Learning Management System near you.

Educational psychologists have taken a great interest in these various nudges, primes, and prompts that spur people’s behavior. For instance, psychologists have found that giving people certain kinds of motivational messages, having them plan how to overcome obstacles, or even having them write down how a topic relates to their lives can all increase motivation and engagement.

A growing body of research suggests that some of these very small interventions, some taking only a few minutes, can have substantial effects on things like GPA and course completion. The problem with bringing them to scale is that you need to teach 3.4 million teachers how to do them correctly.

So many folks enamored of this educational psychology are interested in baking these interventions into software platforms for learning, where they can be implemented correctly, cheaply, and with lots of kids at once. Khan Academy recently reported on some internal findings (not yet published, peer reviewed, etc.) that showing students motivational messages inspired by the work of Carol Dweck increased students’ activity (problems taken, returns to site, etc.) by 4 or 5%. That’s not a massive gain, but given that the cost of hanging motivational posters over randomized problems is basically zero, that’s a pretty good return on investment.
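
As a concrete (and entirely hypothetical) illustration of what such an experiment might look like under the hood, here is a minimal sketch of randomly assigning students to see a growth-mindset message and computing the lift in activity. The message text, function names, and 50/50 split are my assumptions, not Khan Academy’s actual implementation:

```python
import hashlib
import random

# Illustrative growth-mindset messages, loosely in the spirit of Dweck's
# work; these are placeholders, not Khan Academy's actual copy.
MINDSET_MESSAGES = [
    "Your brain grows stronger every time you work on a hard problem.",
    "Mistakes are part of learning -- keep going!",
]

def assign_condition(user_id: str) -> str:
    """Stable 50/50 split: hash the user id so assignment survives restarts."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

def banner_for(user_id: str) -> str | None:
    """Show a motivational message to treatment students only."""
    if assign_condition(user_id) == "treatment":
        return random.choice(MINDSET_MESSAGES)
    return None

def relative_lift(treatment: list[int], control: list[int]) -> float:
    """Relative difference in mean problems attempted per student."""
    t = sum(treatment) / len(treatment)
    c = sum(control) / len(control)
    return (t - c) / c  # a value near 0.04-0.05 would match the 4-5% above
```

Hashing the user id, rather than flipping a coin at request time, keeps each student in the same condition across sessions, which matters when the outcome you care about is return visits.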

At our research design conference, one group (mine) came up with the idea for a GritBit, a kind of student-owned, real-time display of motivational data. Students would be regularly asked survey questions from well-developed scales that would help them track their own self-regulation, interest, motivation, and so forth.
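
Since the GritBit was only a workshop idea, any implementation details are speculative; a minimal version, assuming simple 1-5 Likert self-reports, might look something like this:

```python
from dataclasses import dataclass, field
from datetime import datetime
from statistics import mean

@dataclass
class GritBit:
    """Hypothetical sketch: timestamped self-report responses per scale."""
    # scale name -> list of (timestamp, 1-5 Likert response)
    responses: dict[str, list[tuple[datetime, int]]] = field(default_factory=dict)

    def record(self, scale: str, value: int) -> None:
        """Log one survey answer, e.g. record('self-regulation', 4)."""
        self.responses.setdefault(scale, []).append((datetime.now(), value))

    def dashboard(self, window: int = 7) -> dict[str, float]:
        """Rolling average of the most recent responses per scale, for display."""
        return {scale: round(mean(v for _, v in items[-window:]), 2)
                for scale, items in self.responses.items()}
```

The point of such a display would be the trend line, not any single number; a real version would draw its items from validated scales rather than the placeholders here.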

At HarvardX, a number of these kinds of studies have been proposed, ranging from asking students to identify a friend who will act as a study supporter, to using surveys to highlight similarities between faculty and students to increase rapport, to (this one literally hitting my inbox this morning) a tracking device that measures wasted internet time and helps students set goals for engagement. In the next year, along with more fishing expeditions in the data stores, we can expect these kinds of experiments to be the next wave of published papers.

For me, there are several big questions about this kind of work. First, will ideas that work in a freshman psych lab work in real educational settings? The history of education policy is littered with innovations that work in one context but not at scale. Will there be a diminishing effect or regression to the mean if lots of these get used in lots of settings? Is there a diminishing marginal return on each nudge? Most importantly, what are the implications for doing this kind of work in marginally effective learning environments? I think of these kinds of interventions as psychological boosts, nudges to get people to work harder. But if we have software that asks students to do pointless or ineffective activities, then motivating people to do more of that chaff isn’t really helping. It’s like boosting the engines before you know whether the car is heading in the right direction.

That said, if you think that online learning platforms and software are generally helping people learn, then simple tricks to motivate people to be more engaged and active in these platforms are a good thing.

For the next post in this series, I’ll turn my attention towards innovations in assessment that would help us better understand whether students are actually learning from online software and platforms.

For regular updates, follow me on Twitter at @bjfr and for my papers, presentations and so forth, visit EdTechResearcher.

The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.