Learning Improvement Science
This week we are taking a break from our regularly scheduled programming to reflect on an annual gathering and a special meeting hosted by the Carnegie Foundation for the Advancement of Teaching (@CarnegieFdn). This post is by Paula Arce-Trigatti, Director of the National Network of Education Research-Practice Partnerships (NNERPP, @RPP_Network), who shares her perspective as a first-time attendee.
Stay tuned: Thursday we will share the perspective of a Carnegie Summit veteran.
Last week I was invited to participate in the first-ever "Academic Symposium," a special meeting hosted by leaders from the Carnegie Foundation for the Advancement of Teaching to explore the "Improvement Movement" in education. As part of this event, I was also invited to attend the Carnegie Foundation Summit on Improvement in Education, the annual gathering of those interested in or currently pursuing improvement science in education. We'll hear more about the Academic Symposium in Thursday's post, including why the meeting matters and some potential ways you might connect to these efforts in the future. In today's post, I'll share some reflections as a first-time attendee at the Summit.
In a few words: I found the Summit to be immensely valuable and would highly recommend it to those of you who are curious to learn more about improvement science. Below, I briefly introduce the Carnegie Foundation and the Carnegie Summit and then highlight my key takeaways.
Who is the Carnegie Foundation?
The mission statement of the Carnegie Foundation for the Advancement of Teaching begins by describing their commitment to "developing networks of ideas, individuals, and institutions to advance teaching and learning." To make this vision a reality, they "join together scholars, practitioners, and designers in new ways to solve problems of educational practice." And last but certainly not least, the Carnegie Foundation explicitly denotes their efforts "to integrate the discipline of improvement science into education with the goal of building the field's capacity to improve."
Without a doubt, they are leaders in the improvement science space, as evidenced both by the popularity of the Summit and the increased interest I have seen in "networked improvement communities" across the country. Several of the key leaders at Carnegie have also written a book on their ideas around improvement science, called "Learning To Improve" (by Tony Bryk, Louis Gomez, Alicia Grunow, and Paul LeMahieu). I decided to read it in preparation for the Summit (as any over-achiever would) and would recommend it regardless of whether you ever make it to the Summit — there is a treasure trove of valuable insights available in the book alone.
What is the Carnegie Summit?
The Carnegie Foundation Summit on Improvement in Education is the annual get-together, so to speak, for anyone in the education space who is interested in or currently pursuing improvement science strategies. This year's conference was the fifth iteration of the event and was attended by over 1,400 people representing a variety of organizations from both research and practice arenas in education. With over 45 breakout sessions across diverse topic areas ranging from data & measurement to professional development, the program featured learning opportunities both for newbies (such as myself) to learn more about all aspects of improvement science and for veterans of this work to share experiences and deepen their knowledge. A new collection of sessions at this year's Summit included a program strand called "Spotlight on Continuous Improvement," featuring real-life case studies of groups that are currently engaged in exemplary continuous improvement efforts.
Key Learning Opportunity: The Simulation
Although I was able to participate in several sessions throughout the Summit, I focus here on one in particular meant for newcomers to the ideas of improvement science and continuous improvement.
Coming from a strictly "I research education" orientation (which just means that I tend to study education rather than actively practice it), I (unfortunately) did not have improvement science or continuous improvement approaches built into my graduate training. Thus, when I discovered the "Introduction to Improvement Science: A Learning-By-Doing Simulation" session on the program, the inner lifelong-student in me rejoiced (also, some fist pumps may or may not have taken place).
The purpose of the session was to take participants on an "Improvement Journey" so that they could experience firsthand how to apply the six principles of improvement science to problem solving. For our problem of practice, we tackled a drop in attendance rates at our imaginary high school, which involved some minor role playing, complete with generic name tags (e.g., "Principal," "Teacher").
First, Talk About What You See
We started with a simple graph illustrating a decrease in attendance over a one-year period at our high school and discussed: "What do you see?" As the presenters called for feedback from the audience about their observations, responses ranged from data-specific notes (e.g., "I see a one-percentage-point decrease in attendance over one year for our school") to more probing questions about what wasn't on the graph (e.g., "How does this overall average drop vary across sub-groups of students?"). This basic exercise reaffirmed something we've also seen across the partnerships in NNERPP: The opportunity to just listen to others as they share their observations and questions based on a simple data illustration is incredibly valuable. Indeed, a rich discussion around an important problem of practice is only one graph away.
Second, Investigate Across The Whole System
Next up, our guides helped us embark on a symptom-collecting mission in order to better understand the problem areas within the system that could be producing the undesired outcome (i.e., the decrease in attendance). I found this exercise to be as eye-opening as the previous one: rather than relying solely upon an established theoretical framework for assessing which parts of the system might be failing students, we were required to go out into the "field" (i.e., our school) and gather some initial information to help guide our next steps. What I found most compelling about this strategy is that no assumptions were made about the source of the problem within the system — the undesired outcome could be due to school-level processes, like how attendance is recorded; it could be student-centric, such as individual health problems preventing students from attending school; or it could even be related to family or parent issues, such as not supporting a child's schooling efforts in a productive way. Each of these "symptoms" may lead one to propose a different solution, a result I found quite fascinating.
Third, Learn From Failure
From there, our "Improvement Journey" took us through a number of additional activities, all in service of "learning to improve." Without giving too much away, I summarize these as follows: Once some drivers of change were identified, potential solutions were developed. These were then subject to continuous improvement cycles (i.e., plan-do-study-act, or PDSA, cycles) in order to discover what action would actually produce the desired change. After reaching a point of confidence that a particular "change package" was in fact working, we then thought about scaling it to a greater number of schools. All throughout, the idea of learning from failure was consistently highlighted, an idea that is often missing from our conversations about how to improve education.
This session was clearly a product of its own continuous improvement cycles, reflected in the high-quality delivery and outstanding facilitation of the presenters. After attending the Summit and reading "Learning To Improve" in preparation, I think there is great value in engaging with both, as the Summit affords an opportunity to experience the activities from the book live and under very capable guidance. If you are only able to read the book, however, I think you will still have a solid foundation from which to begin. Which takes us to my final thoughts...
Start Before You Are Ready
This is just one of several key ideas shared at the end of the simulation and one that really resonated with me. The only way to improve, really, is to start, regardless of where you are in your improvement science knowledge. Certainly, you may "do it wrong," but at the very least you are on your way to gaining greater insight into how different levels of the system may be contributing to a particular problem. To put this key takeaway into practice, one example suggested by the session presenters was to try improvement science on a personal project. So, on that note, please excuse me while I generate a simple graph illustrating my ever-increasing commitment to chocolate chip cookies over time...