
John Hattie Isn't Wrong. You Are Misusing His Research.


Have you ever been the teacher or school leader whose ideas other people find interesting? Colleagues like how you engage students through classroom strategies or relationships, and they want to learn more from you. Then months go by, and other colleagues in the school or district begin to get a bit upset that you are the one asked to teach new strategies at faculty meetings or in professional development sessions.

Perhaps they tire of hearing your name come up, or feel that their own ideas are ignored...

Lately, this has been happening a great deal to University of Melbourne researcher and professor John Hattie. While many teachers and leaders want to soak up his research and gain a deeper understanding of it, others want to cherry-pick his research and criticize it. They treat it as though it is static and hasn't changed since he first published Visible Learning in 2009, and they ignore the fact that he has been open and honest about the lessons he has learned along the way.

There have been a few consistent criticisms of Hattie's work. They include the following:

  • Critics don't like the meta-analysis approach Hattie uses because he combines large studies that include smaller individual studies.
  • Critics don't like large effect sizes that are over 1.00.

For full disclosure: I work with Hattie as a Visible Learning trainer, and I often write about his research in this blog. Additionally, I present with him often and read pretty much everything he writes because I want to gain a deeper understanding of the work. To some, that may mean I have a bias, which is probably somewhat true, but it also means that I have taken a great deal of time to understand his research and remain current in what he suggests. I also left my job as a school principal, which I loved very much, to do this work, so I too have taken a great deal of time to look at his research through a critical lens, and I believe I see how it fits into our practical lens as building principals and teachers.

John Hattie Is Wrong
Recently, I read a criticism of Hattie's work from Robert Slavin, which you can read here. It was published by the Johns Hopkins School of Education and is called "John Hattie Is Wrong." In his criticism, Slavin wrote,

"The essential problem with Hattie's meta-meta-analyses is that they accept the results of the underlying meta-analyses without question. Yet many, perhaps most meta-analyses accept all sorts of individual studies of widely varying standards of quality. In Visible Learning, Hattie considers and then discards the possibility that there is anything wrong with individual meta-analyses, specifically rejecting the idea that the methods used in individual studies can greatly bias the findings."

I reached out to Hattie to clarify some of these criticisms that Slavin wrote about in his blog. Hattie wrote, 

"First of all, Slavin does not like that the average of all effects in my work centre on .4 - well, an average is an average and it is .40. Secondly, I have addressed the issue of accepting "all sorts of individual studies of widely varying standards of quality". Given I am synthesising meta-analyses and not original studies, I have commented often about some low quality metas (e.g., re learning styles) and introducing a metric of confidence in the meta-analyses.  Those completing meta-analyses often investigate the effect of low and high-quality studies but rarely does it matter - indeed Slavin introduce "best-evidence" synthesis but it has hardly made a dent as it is more important to ask the empirical question - does quality matter? Hence, my comment "There is...no reason to throw out studies automatically because of lower quality" (Hattie, 2009, p. 11). It should be investigated."

Slavin also writes, 

"Part of Hattie's appeal to educators is that his conclusions are so easy to understand. He even uses a system of dials with color-coded "zones," where effect sizes of 0.00 to +0.15 are designated "developmental effects," +0.15 to +0.40 "teacher effects" (i.e., what teachers can do without any special practices or programs), and +0.40 to +1.20 the "zone of desired effects." Hattie makes a big deal of the magical effect size +0.40, the "hinge point," recommending that educators essentially ignore factors or programs below that point, because they are no better than what teachers produce each year, from fall to spring, on their own. In Hattie's view, an effect size of from +0.15 to +0.40 is just the effect that "any teacher" could produce, in comparison to students not being in school at all."

Hattie very rarely says to throw out any influence with an effect size below .40, unless we are talking about a negative influence like retention, which has an effect size of -.32. A negative influence is a long way from .40, and we know there is a plethora of research suggesting that retention doesn't work.

In fact, in Visible Learning for Literacy, the authors (Fisher, Frey, and Hattie) focus on problem-based learning (p. 37), whose effect size has moved from .12 in 2016 to .26 in Hattie's current 2018 list of 251 influences. The authors write,

"Problem-based learning can work, under the right conditions. However-and this is critical- it isn't particularly effective when students don't yet possess the knowledge, skill and dispositions needed to engage an inquiry-driven investigation about a topic."

There Are No Shiny New Toys
Slavin does bring up a point that Hattie often brings up. Hattie has always suggested that school leaders and teachers not simply go after those influences that have the highest effect sizes. In fact, in numerous presentations and articles he has suggested that schools get an understanding of their current reality and research the influences that will best meet their needs. He has additionally suggested that those leaders and teachers not throw out those strategies they use in the classroom, but actually gather evidence to understand the impact of those strategies they use. 

In all of his criticisms, Slavin cites only Hattie's work from 2009 or before. Hattie has written numerous articles and books and given many presentations in which he has said that he has learned a great deal from taking his work on the road and listening to other researchers, professors, leaders, and teachers. I believe it is a strength of Hattie's that he listens to those he interacts with so often. Can we say the same for other researchers?

In our correspondence about the blog, Hattie writes,

"I think we agree that care should always be taken when interpreting effect-size (whatever the level of meta-synthesis), that moderators should always be investigated (for example, when looking at homework two of the moderators are how homework works in elementary students and how it impacts secondary students), and like many other critics who raise the usual objections to meta-synthesis the interpretations should be queried - I have never said small effects may not be important; the reasons why some effects are small are worth investigating (as I have done often), care should be taken when using the average of all influences, and it comes down to the quality of the interpretations - and I stand by my interpretations."

Secondly, Hattie's work is not all about the numbers, contrary to what Slavin suggests. And that is actually a good point to end with in this blog. Many school leaders have a tendency to go after the influences with the highest effect sizes, and those of us who work with Hattie, including Hattie himself, have suggested that this is flawed thinking.

In the End
Hattie's work is about getting us to reflect on our own practices using evidence, and to move forward by looking at the practices we use every day and collecting evidence to gain a better understanding of how they work.

Overall, Hattie's messages are about evidence of impact, understanding our practices, addressing how we engage families and how we talk with them (Flaxmere Project), and putting students in the driver's seat of their own learning. It's about preparing students for their future so they know what to do when an adult isn't around.

Our world is an increasingly complex place where people talk at each other rather than with each other, and Hattie's research is about preparing students to work together to solve these complex issues. Is that really something to argue with?

For an article in which Hattie explains the meta-analysis approach, click here.

For other news regarding Hattie click here. 

Peter DeWitt, Ed.D., is the author of several books, including Collaborative Leadership: 6 Influences That Matter Most (Corwin Press, 2016), School Climate: Leading With Collective Efficacy (Corwin Press, 2017), and Coach It Further: Using the Art of Coaching to Improve School Leadership (Corwin Press, 2018). Connect with him on Twitter.


The opinions expressed in Peter DeWitt's Finding Common Ground are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
