
Will Flipped Classrooms Reveal the Lack of Value in Assigned Work?

By Justin Reich — September 19, 2013

Last week I was interviewed in The Atlantic about a soon-to-be-released study of a flipped experiment in a pharmaceutics class in a school of pharmacy. In year 1, the class was traditional: readings, then lectures, then exams. In year 2, the class was flipped. Students watched lectures and did the same readings in advance, and then did stuff in class (“active learning,” in some of the parlance of the day): “With course lectures offloaded to self-paced online videos, scheduled class periods were dedicated to four active learning exercises conducted in the following sequence: clicker questions (~15 min); think-pair-share (~15 min); student presentations (~25 min); and a quiz (~20 min).” In the second year, students scored 2.5% better on the final exam.

Lots to discuss here, and Robinson Meyer does a good job covering the study. I want to make one further point about implications for time.

Interestingly, the 75-minute lectures condensed into 35-minute videos. I’ve heard this numerous times before: when teachers make videos of their direct instruction, the videos come out shorter. No pesky kids asking questions; no diatribes about the Red Sox; no funny jokes at the expense of the kid who comes in late because he lost track of time in the wind tunnel. Just lecture. Could it be that half the time spent in a typical lecture is unnecessary? What would happen if you simply got rid of the lecture content altogether?
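
Just to make that concrete, here’s a one-line back-of-envelope check (Python; the two durations are the ones reported in the study):

```python
# Lecture-to-video compression, per the durations reported in the study.
lecture_min = 75  # scheduled in-person lecture
video_min = 35    # recorded version of the same material

saved = 1 - video_min / lecture_min
print(f"Share of lecture time that disappeared on video: {saved:.0%}")
# Share of lecture time that disappeared on video: 53%
```

So “half” is, if anything, slightly conservative.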

I had some questions, still not totally answered, about how the time worked out as well. In the study, students were assigned these lectures on top of the readings. That means adding 70 minutes of homework a week (2 lecture videos). In response, students seemed to simply do less of the reading, which hadn’t been popular anyway. On a five-point scale from (1) Never to (5) Always, the average response to a survey question about whether students did the readings was 2.29 in the baseline year. In the flipped year, it was 1.67. So it seems that when faced with additional preparation work, students just didn’t do a bunch of it. That doesn’t exactly align with one of the arguments for the flipped classroom: that students will do their homework because they have to be prepared for clicker questions in class. And since students read less yet scored slightly better on the final, the readings may not have been of much value either. What if you got rid of half the readings? Or all of them? Or used Wikipedia pages?
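
The weekly tradeoff, in the same back-of-envelope style (all figures from the study as described above):

```python
# Added weekly prep load in the flipped year, per the study's figures.
videos_per_week = 2
video_min = 35
print(f"Extra prep from videos: {videos_per_week * video_min} minutes/week")
# Extra prep from videos: 70 minutes/week

# Self-reported reading frequency, on the 1 (Never) to 5 (Always) scale.
readings_baseline, readings_flipped = 2.29, 1.67
print(f"Reading survey mean: {readings_baseline} -> {readings_flipped} "
      f"({readings_flipped - readings_baseline:+.2f})")
# Reading survey mean: 2.29 -> 1.67 (-0.62)
```

In other words, the new prep didn’t come for free; it appears to have crowded out the reading.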

Meyer has the professor respond to some of these concerns about whether students are doing more work: “When I asked Dr. Mumper about this, he said that, while he assigned more work, exit surveys showed that students did the same amount of work. Because more of the work was upfront, students crammed less, so they wound up devoting the same amount of time to the class.” I’d be interested to see the data that supports that claim; I’m not sure how you would ask students to measure the amount of work they did over the course of the semester. But even if it is true, given that the flipped course added roughly 840 extra minutes of work (24 lectures × 35 minutes), if students did the same amount of work in the flipped year as in the baseline, they were skipping a bunch of stuff. What was the stuff that had no value?
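
Here’s that semester-level arithmetic spelled out; a minimal sketch, with the lecture count and video length taken from the study:

```python
# Semester-level tally of the extra flipped-course prep, per the study.
lectures = 24
video_min = 35
extra_total = lectures * video_min  # 840 minutes
print(f"Extra video time over the semester: {extra_total} minutes "
      f"(~{extra_total / 60:.0f} hours)")
# Extra video time over the semester: 840 minutes (~14 hours)

# If exit surveys are right that total effort held constant, roughly this
# many minutes of previously assigned work (readings, cramming, review)
# had to be skipped somewhere.
print(f"Implied work dropped elsewhere: ~{extra_total} minutes")
```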

All of this is somewhat corroborated by studies of Carnegie Mellon University’s Open Learning Initiative Statistics online course, several of which have found that students learned intro statistics as well from OLI Statistics as from an in-person class, but with fewer hours of effort per week.

One way to improve education is to find things that work really well. Another is to find things that take a bunch of time but have no impact on student learning. Most flipped studies and initiatives seem to focus on what you can add to the experience to make students learn more. Another interesting approach is to ask: what can we take away that isn’t working and still have students do just as well?

As a final footnote: I always think it’s interesting to see how reporters take what I contribute and weave it into a story. Sometimes it makes me cringe, but here I think Robinson does a nice job representing my views. In the interest of full disclosure, here’s the full text of the email I sent him, so you can see which pieces made it into the story:

Briefly, the study is well done for a quasi-experimental, year-over-year study. As the authors note, a randomized trial would be better, because we cannot definitively attribute the differences in outcomes to differences in course experience--it might simply be that the cohorts were different to begin with (this is particularly true of the final exam results, less true of some of the course eval results). They do a nice job of arguing against that threat by noting that the cohorts' incoming GPAs and PCAT scores are the same. It would have been nice to have their mean fall GPAs in the program as well. Relative to many other conditions, I'd say the quasi-experimental nature of the study is relatively robust to this kind of bias, because everyone has to take the class and the cohorts are so similar year to year (this is a much bigger problem with some of the studies we are trying to do in Harvard College).
The results are somewhat positive, but also not particularly striking. The final exam scores were 80% (on a 100-point scale) in 2011 and 82.5% in 2012. In the right direction, but not earth-shattering. There was also no difference in overall course satisfaction between 2011 and 2012, though there were gains in particular areas.
One important question, unclear from the paper, is how the rest of the course changed. In 2011, from a syllabus I found online, it looks like they had readings with each lecture. In 2012, were the readings still due? If so, then the instructors added about 70 minutes of work per week to students' schedules.
In other words, the gains in scores might have been primarily due to just making students work more. Now, by all means, I'm all for making students work more. And they managed to make students work more without negatively impacting how much they liked the class (students even claimed to like the format more), and with a very modest gain in learning outcomes as measured by the final exam.
That's all good, but it raises questions about sustainability. Students take 5 classes in the second year of the UNC Pharm program. If they flipped all of them, could students do an additional 350 minutes of work per week? Would we see the gains repeated across the courses?
Overall, I'd say the study adds a useful contribution to the growing literature on flipped instruction and active learning in higher education. One question to follow: should we expect even greater gains in future years as instructors refine their methods? A second useful question: are we seeing the limits of teacher-centered content delivery? With all of the technologies now available to us, should we be content with a 2.5 percent gain on final exams?

For regular updates, follow me on Twitter at @bjfr and for my publications, C.V., and online portfolio, visit EdTechResearcher.

The opinions expressed in EdTech Researcher are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.