(Unintentionally) Ironic Blended-Learning Study Shows Principals How to Avoid Critical Mistake
Note: Elliot Sanchez, the founder and CEO of mSchool, is guest posting this week.
On Monday we talked about the tension that can be introduced when a school personalizes learning. At its core, the tension is a familiar one: how do we balance the calendar of the school year with the diversity of student needs? When technology makes it possible for every student to work on material tailored to their zone of proximal development, there's no doubt that it's a more efficient use of instructional time than marching everyone through the same lesson. But what if it's not what was in the unit plan for this week? If students aren't working on the lessons that were scheduled, is personalization worth it?
A new study in Educational Technology Research and Development examined how teachers integrate technology into their classrooms and how the professional development they receive supports that. In an ironic twist, the teachers reported that the training on this new technology (which would ostensibly allow them to personalize their classes more easily) was too formal and not personalized to the teachers' needs! As the study reports, "Several teachers recommended that training classes should be customized to content area and choice be provided as to which training classes they could attend. They felt they were required to attend classes that were not useful for them due to lack of resources or inappropriateness with their particular content area."
The tendency to keep content as one monolithic path is so strong that it even permeates professional development for customized learning. Rick's Cage Busting series provides countless examples of the traps we can fall into by assuming that our environments are constricting our options even when the real culprits are our habits and well-worn patterns of behavior.
So how does a principal avoid becoming a victim of habit and give personalized learning a real opportunity to succeed? This isn't a hypothetical problem. I've watched school leaders and teachers struggle with this question as they use personalized learning for the first time. In districts that have made extensive investments in scripted curricula, the issue is even more pronounced. Often we'll begin a blended learning pilot, have students working on personalized material for a few weeks, and then start to examine the data.
Sure enough, there's frequently a subset of students who spend time in personalized learning and see their grades dip initially. After a lot of careful preparation (and, often, convincing of colleagues who might have been hesitant), seeing anything but immediate, overwhelming success can be discouraging. And, given the number of demands on school leaders' time and attention, some have little appetite for digging into what went wrong.
I remember one teacher who had just finished a 4-week pilot and was disappointed because students' unit test scores weren't as high as those of their peers in our control group with a normal schedule and no technology block. I asked the teacher what he thought of the results. "It's obvious that the students are learning so much. I saw students covering material that they've struggled with for months but that we haven't had a chance to cover again," he said. "They didn't do as well on this test because they were all practicing the material they'd been struggling with." He looked down at the score report. "But if they stay on their personalized paths, at this pace, they'll be way ahead of the class by the end of the year. I just don't have any way to show that in their grades."
And therein lies the key to success: starting with the right definition of success.
If you've spent years using weekly quizzes as your primary indicator of whether students are on track, there was probably a good reason--students who kept up on quizzes made it to the end of the year with the knowledge they needed to move on.
If you've sorted students into classes based on last year's final test scores, it was probably because that was the best indicator of how they'd perform at the end of this year.
But if you're serious about personalized learning and want to avoid derailing the program before it begins, it all starts with defining success as maximizing each student's learning. It takes work to measure, but if done correctly, measuring student growth in this way not only captures student progress more accurately, it allows for deep insights into the ways that you can most effectively target support.
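The contrast the teacher in the pilot described--real growth hidden behind a below-cutoff score--can be made concrete with a toy example. Everything here (the student labels, the scores, and the passing cutoff) is invented purely for illustration, not drawn from the study:

```python
# Hypothetical pre/post scores (percent correct) over a four-week block.
students = {
    "A": {"pre": 30, "post": 55},   # far behind, but growing fast
    "B": {"pre": 70, "post": 74},   # near grade level, coasting
}

PASSING = 65  # an assumed unit-test cutoff

for name, s in students.items():
    passed = s["post"] >= PASSING     # the "weekly quiz" lens
    growth = s["post"] - s["pre"]     # the growth lens
    print(f"{name}: passed={passed}, growth={growth} points")

# Student A fails the unit test yet gained 25 points;
# Student B passes while gaining only 4.
```

Judged on the cutoff alone, Student A looks like a failure of the pilot; judged on growth, Student A is its biggest success. That is the shift in the definition of success this section is arguing for.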
Using data that allows for direct measurement of student growth, regardless of where a student starts and ends, means individualizing not just curriculum, but also assessment of student progress. If done correctly, it can precipitate broad changes in how teachers approach the development of their students. Unfortunately, if done incorrectly, data collection won't just hamper student progress--it can actively discourage it. On Wednesday I'll discuss the dangers of student data collection--and the surprisingly easy way to avoid them.