
The Case for Improvement Science


This week we are hearing from the Baltimore Education Research Consortium (BERC, @BaltimoreBERC). Today's post is written from the researcher perspective. Stay tuned: Thursday we will share the practitioner's perspective on this approach.

This post is by Faith Connolly, Executive Director of BERC; Marc L. Stein, Research Co-Director of BERC and Associate Professor at Johns Hopkins University; and Tami K. Smith, Associate Professor at Goucher College and Affiliated Researcher at BERC.

How do we improve educational outcomes for students, schools, and districts? This basic question has been a primary focus of education research, policy making, and practice. Many intelligent, caring, highly skilled people have worked on it for decades, yet we still haven't found the answers, and many students and schools continue to struggle. Something needs to change: we need a way to coalesce the existing knowledge and skills coming from both research AND practice.

We propose Improvement Science as a method for identifying the underlying systemic causes of seemingly intractable problems, for which there are no silver bullets, and for guiding step-by-step approaches toward substantiated refinement. This method, however, requires a shift in our usual role as researchers.

Student outcomes have become the primary focus of education research and policy making and are a prime driver of the work of school and district personnel. Yet too often, the processes that generate those outcomes go unexamined. An often-quoted statement from W. Edwards Deming, the father of Improvement Science, is, "We should work on our process, not the outcome of our processes." In other words, if we improve the processes that generate the outcomes we care about, those outcomes will improve as a result. This shift in thinking focuses the work on incremental development over time, identifying solutions to smaller parts of big problems in order to create large shifts in outcomes for students and schools.

Improvement Science is a systematic, evidence-based process that healthcare and other industries have used successfully to guide improvement. In education, Improvement Science methods aim to close the gap between traditional research and real-world practice. Improvement Science moves beyond research findings and program implementation to establish practitioner "know-how," to learn how to improve reliably, and to collect data for learning (Bryk, 2015). Further, the role of improvement research is to make incremental changes to existing structures, social systems, and procedures.

Research-practice partnerships such as the Baltimore Education Research Consortium are uniquely positioned to use Improvement Science methods. Each partner brings distinctive expertise to the table to increase our collective understanding of the challenge and to learn as a team to improve. Researchers provide the tools of research — data analyses, questioning frames of mind, an empirical framework that can focus the work, and idea generation. Practitioners bring their wealth of experience in schools and an intimate knowledge of their students' and school's community, history, and context.

For the partnership to work, both practitioners and researchers need to shift their ways of thinking and adopt modes of interaction atypical of traditional researcher-practitioner partnerships. We observed this in our own partnership. Researchers, for example, needed to think of their work as more than empirical studies of district-level data and instead focus on fine-grained analyses that practitioners could use. They needed to work more in schools with practitioners and less in offices with large data sets, a new paradigm for many researchers. Practitioners had to change, too: they had to learn to see utility in the data, and, after many years of district reporting on accountability data, to look at their own data critically rather than defensively.

As a partnership, we established trust, processes, space for conversation and reflection, and a new methodology toward answering and enacting the how of improvement. Stay tuned for Thursday's post for specific examples of how Improvement Science can improve the educational outcomes of Baltimore's students — and beyond.

Bryk, A. S. (2015). 2014 AERA Distinguished Lecture: Accelerating how we learn to improve. Educational Researcher, 44(9), 467–477.


The opinions expressed in Urban Education Reform: Bridging Research and Practice are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
