Toward 'What Works' 2.0: The LEGO Theory of Educational Improvement
This post is by Amelia Peterson, a doctoral candidate in Education Policy and Program Evaluation at Harvard Graduate School of Education and an Associate at Innovation Unit.
How can social science contribute to educational improvement? Perhaps the key is in the mechanisms.
Last month, I wrote a blog post for the RSA about the What Works approach to education, titled "Why we need more LEGO and less Polly Pocket." In it, I argued that education research commissioners have been too fixated on the medical model of randomized controlled trials (RCTs), in which a treatment is meant to provide the whole solution in one. This model ends up promoting overly specified program designs that are too rigid to be adapted to the variety of contexts that make up public education. Instead of these fixed programs (Polly Pocket houses), I argued, we need more experimental trials of discrete mechanisms--principles of learning that are portable and can be combined by teachers in different ways: like LEGO.
In the traditional view of What Works, to implement a program with fidelity a teacher has to slot into a role specified by the design, just as in a Polly Pocket house we can only move Polly to stand in a few pre-ordained spots. If What Works focused on mechanisms rather than programs--taking a lead from international development rather than medicine in the way we design RCTs--we could start to identify the human and social learning factors that are relevant across contexts, homing in on the most essential pieces that need to be in place to create powerful learning. We might not be able to do much with just a few "LEGOs," but at least each one is rock solid and versatile. And the more we have, the more we can build. We might like the metaphor, but it prompts the question of what makes a good LEGO piece.
What is a mechanism? At the most basic level, a mechanism is a link between an action and an outcome. It is the key part of an explanation for how one thing causes another. The field of psychology, for example, has established methods of identifying discrete "behavioral mechanisms": terms that describe how certain actions or features of the environment can change how we behave. Some of these, such as priming, framing, regression or avoidance, have become relatively common parlance.
With the morphing of behavioral psychology into behavioral economics, some of these mechanisms have even become principles for designing policies. The U.S. federal government is just one of many to have signed up to the "nudge" theory of public policy: by working with general psychological principles such as priming and network effects, policymakers can refine the way policies make use of social norms, environmental features, or even word choices. The work of the UK Behavioural Insights Team, starting with the 2010 publication MINDSPACE, is an excellent example of how psychological science can be translated into usable principles that are then tested in a variety of experiments and finally deployed in policymaking. One education-relevant example is changing financial aid processes when students apply to college: streamlining the design of forms and including financial aid information when parents complete taxes can significantly increase the proportion of students who not only apply for financial aid, but go on to enroll in college.
This work already provides a roster of mechanisms to apply in education, but we might also consider how to capture and describe mechanisms that are more specific to contexts of teaching and learning.
We already have many good starting points for identifying learning mechanisms. The OECD's 7 Principles of Learning provides one authoritative take on how contexts can give rise to engagement, concentration and practice. Amalgamated catalogues such as John Hattie's Visible Learning provide lists of factors that positively impact learning outcomes. Cognitive scientists such as Daniel Willingham have also worked to deduce educational principles from the study of domains such as memory, attention, and routine.
With all these different ways of describing the factors that "cause" learning, it looks like mechanisms, like LEGO pieces, can come in different shapes and sizes. But what LEGO pieces have that our learning mechanisms currently do not is a consistent means of sticking them together. Give any child a box of LEGO and she knows what to do with it: the social practice of working with LEGO is well established. But our social practices around acting on learning principles in a school or classroom are less so.
Too often, there is a gap in moving from sets of facts about what promotes learning to concrete practices that can be utilized in real-world contexts. From reading Hattie and others I might know that providing feedback is crucial to supporting learning; I might even know that it is better if that feedback is specific and comes with the opportunity to respond. But give me a pile of student work and ask me what to do with those students next Monday, and I wouldn't be sure where to start. I may know that feedback is the key factor, but what are the actual mechanisms by which feedback will lead to improved learning outcomes? What should I be trying to get my students to do? What should I be looking for to know if it is working? Redescribing feedback in terms of a key mechanism might end up with something like, "the learner understands how something could be improved." In designing my practice, then, I'm not thinking just about how I will give the feedback, but about how I will know whether or not the student understands that feedback and has the necessary information to have more success on a second go. This might seem like a subtle shift, but a look at some of the practices and even guidance around grading and marking indicates how little of it actually amounts to real feedback. Focusing on mechanisms is therefore a step toward better, concrete teaching practices that are underpinned by basic research.
Perhaps even more importantly, it is a way to research teaching that takes seriously the agency and skills of teachers. What Works 1.0 was an improvement agenda that saw teachers only in terms of a problem of teacher variability. By treating teachers as professionals whose individuality is a powerful driver of engagement and whose ingenuity is a source of new ideas, we have much more hope of improving learning outcomes.
There is already an audience for mechanism-based research. The popularity of documents such as The Science of Learning demonstrates that teachers are hungry for principles to ground what they are doing, while catalogues of techniques such as Doug Lemov's popular Teach Like a Champion appeal to those who want concrete instructions as well as general principles. As a research community, however, too often we allow these principles and techniques to be bundled up, packaged, and branded, placed behind paywalls where teachers or leaders can only access them if they sign up to a whole program. In the past this was a necessary means to promote dissemination: how else would practitioners learn of something if not through a company that could take care of marketing and providing PD? But today, spend any time with teachers involved in inquiry or research networks, whether on Twitter or in person, and you already see how teachers are eager to exchange and challenge ideas about practice. The spread of communities such as ResearchED or gatherings like TeachMeets illustrates how practitioners are sharing their own techniques and approaches.
To connect this interest and activity with education research funds, we could do worse than develop collective efforts to deduce from existing works a set of learning mechanisms, and to design tests of these mechanisms. This would provide, on the one hand, a more solid set of principles of how learning is promoted in practice--not just in theory based on brain science. It would also start to refine sets of reliable techniques for activating these mechanisms, and more systematically identify relevant context-mechanism interactions--just as policy teams are starting to move from knowledge of mechanisms such as priming toward a raft of different ways to contextualize use of that mechanism in the design of public communications.
RCTs are of course only one approach to conducting education research, but they remain one backed by considerable funding opportunities, and buy-in from researchers, the public, and many teachers. The most important moment in a trial might be how a treatment is defined: if nothing else, it would be a big step forward to turn that into a more public and collaborative process.
This post draws on an article recently published in a special issue of the International Journal of Research and Method in Education, on 'What Works.'