
Blended Learning, But The Data Are Useless

By Justin Reich — April 13, 2014

I’ve spent the last two days at a blended learning workshop hosted by the Dean’s Office at the Harvard Graduate School of Education and the Center for Education Policy Research (CEPR).

Rather than two days of talking heads, the workshop brought together researchers, funders, school people, and software developers to build research proposals that would help us better understand “what works in blended learning,” for whom, when, and under what circumstances. I’ve been interested in these kinds of design-oriented, working meetings for a long time now, and I increasingly believe that if we are going to spend resources to bring people together, it’s much better to have them build something than talk to one another (or read papers while the audience checks their email).

Three themes emerged from the development process and the final project presentations, and they suggest three challenges and opportunities for computer-assisted instruction. I’ll write a bit about all three this week. Both regular readers of this blog will recognize that I bring some skepticism to blended learning, but I always enjoy the opportunity to spend time with committed educators with values different from my own.

So first up, challenge #1: the data don’t help teachers.

My development team at the CEPR workshop included a cadre of blended charter school administrators, and their primary complaint about instructional software was clear: the data are useless. (These educators, by the way, were the true believers of blended learning.)

The dashboards present too much data, or data that are too obvious. Teachers know when a kid can’t read. They don’t need software to tell them that. They need to know the most pressing challenge a child has with reading that a teacher might be able to remediate. The balance between being useful and being overwhelming is quite hard to strike. (One administrator offered the plaintive cry, “I feel like a human API,” which was terribly funny if you know what an API is and what an accountability officer at a charter management organization does for a living.) Dan Meyer has done some good writing on this.

It’s not clear to me that we have the technology yet to effectively address this issue, but if someone out there is looking for a killer app to build, it’s an instructional recommendation system that concisely guides teacher instruction.
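To make “concise guidance” a bit more concrete, here is a minimal sketch of the idea, with hypothetical skill names, scores, and a crude priority heuristic I’ve invented for illustration: rather than a dashboard of everything, surface the single most pressing, teacher-addressable gap for each student.

```python
# Hypothetical sketch: surface one actionable recommendation per student
# instead of a full dashboard of metrics. Fields and thresholds are invented.
from dataclasses import dataclass

@dataclass
class SkillGap:
    student: str
    skill: str          # e.g. "decoding multisyllabic words"
    severity: float     # 0-1, how far below benchmark
    teachable: float    # 0-1, how addressable by small-group instruction

def top_recommendation(gaps):
    """Return the single most pressing, teacher-addressable gap per student."""
    best = {}
    for gap in gaps:
        score = gap.severity * gap.teachable  # crude priority heuristic
        if gap.student not in best or score > best[gap.student][0]:
            best[gap.student] = (score, gap)
    return {student: gap for student, (_, gap) in best.items()}

gaps = [
    SkillGap("Ana", "decoding multisyllabic words", 0.8, 0.9),
    SkillGap("Ana", "reading stamina", 0.6, 0.4),
    SkillGap("Ben", "vowel digraphs", 0.7, 0.8),
]
for student, gap in top_recommendation(gaps).items():
    print(f"{student}: focus next on {gap.skill}")
```

The hard part, of course, isn’t the ranking code; it’s producing severity and teachability estimates that a teacher would actually trust.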

I was also somewhat surprised to learn that in many systems, it is actually quite difficult to get a raw dump of all of the data from a student or class. Many systems don’t have an easy “export to .csv file” option that would let teachers or administrators play around on their own. That’s a terrible omission that most systems could fix quickly.
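For what it’s worth, the feature being asked for is genuinely small. A sketch of roughly what “export to .csv” means, with hypothetical field names:

```python
# Minimal sketch of a "dump everything to CSV" export, with hypothetical
# field names; the point is that this is a small feature, not a research project.
import csv

def export_class_data(records, path):
    """Write per-student event records to a CSV file teachers can open in a spreadsheet."""
    fieldnames = ["student_id", "lesson", "timestamp", "minutes_on_task", "score"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records)

export_class_data(
    [{"student_id": "S001", "lesson": "fractions-3", "timestamp": "2014-04-10T09:15",
      "minutes_on_task": 18, "score": 0.72}],
    "class_export.csv",
)
```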

Finally, one challenge these practitioners faced was that every software system they used told them that it was working, and their kids were learning. But these educators weren’t so sure. Several parties were interested in conducting research studies that could correlate software measures with commonly used interim measures (like the NWEA MAP test or the DIBELS reading assessment) and then ultimately with PARCC and Smarter Balanced tests, to suss out which products had the best correlation with those outcomes.
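The proposed analysis is, at its core, a set of correlations between each product’s internal mastery measures and scores on these external assessments. A toy sketch with invented numbers, just to show the shape of the computation:

```python
# Sketch of the proposed analysis: correlate an in-software mastery measure
# with an interim assessment (e.g., NWEA MAP) score. All data here are invented.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    n = len(xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (sx * sy)

software_mastery = [0.42, 0.55, 0.61, 0.70, 0.78, 0.85]   # product's own measure
map_rit_scores   = [198, 204, 203, 211, 215, 219]          # interim test scores

print(f"r = {pearson_r(software_mastery, map_rit_scores):.2f}")
```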

One design team even proposed a research project to create a kind of value-added model (VAM) assessment of software programs. This made a lovely, subversive sense to me (though I’m not totally sure that the presenting team appreciated some of the ironies). If we are using software as teachers, why not treat the software as teachers in our test-score analyses? Why not hold these products accountable for improving student scores beyond what we’d expect? “But wait,” the software developer cries, “software packages are nested in a network of teachers and other software products... and furthermore there are factors in the classroom that are out of the software’s control. How could you accurately assign the effects of this milieu to one teacher... er... I mean, software product?” I bet that if software were subjected to VAM evaluations, Silicon Valley might be ready to join up with Diane Ravitch and the other critics of value-added models.
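For readers who haven’t worked with value-added models, the stripped-down logic looks something like this: predict each student’s end-of-year score from their prior score, then attribute the average leftover (residual) growth to whatever “treated” them: a teacher in the usual case, a software product in this proposal. A toy sketch with invented data:

```python
# Toy value-added sketch: regress post-test on pre-test, then attribute the
# average residual to each software product. All data and effects are invented.
import statistics

students = [
    # (product, pre_score, post_score)
    ("ProductA", 200, 212), ("ProductA", 210, 219), ("ProductA", 190, 205),
    ("ProductB", 205, 209), ("ProductB", 195, 200), ("ProductB", 215, 221),
]

pre = [s[1] for s in students]
post = [s[2] for s in students]

# Simple least-squares fit: post = a + b * pre
mean_pre, mean_post = statistics.mean(pre), statistics.mean(post)
b = sum((x - mean_pre) * (y - mean_post) for x, y in zip(pre, post)) / \
    sum((x - mean_pre) ** 2 for x in pre)
a = mean_post - b * mean_pre

# Average residual growth, grouped by product
residuals = {}
for product, x, y in students:
    residuals.setdefault(product, []).append(y - (a + b * x))

for product, rs in residuals.items():
    print(f"{product}: value-added estimate {statistics.mean(rs):+.1f} points")
```

All the usual objections to VAM (selection, nesting, noisy tests) apply here too, which is exactly the irony.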

In the days ahead, I’ll write about two more challenges and opportunities with blended learning: incorporating new research from social psychology and developing new tools to expand the range of the assessable.

For regular updates, follow me on Twitter at @bjfr and for my papers, presentations and so forth, visit EdTechResearcher.

The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.