
Measuring the Impact of Research-Practice Partnerships in Education


This week we are taking a break from our regularly scheduled programming to reflect on an important question of increasing relevance to research-practice partnerships (RPPs): the question of effectiveness. In today's post, Paula Arce-Trigatti, Director of the National Network of Education Research-Practice Partnerships (NNERPP; @RPP_Network), talks with our close friend of the Network, Caitlin Farrell (@ccfarrell), Director of the National Center for Research in Policy & Practice (NCRPP; @NCRPP), about what "effectiveness" means and where the challenges are in measuring the impact of RPPs.

Stay tuned: Thursday we will share a practitioner's perspective on RPP effectiveness.

 

Towards Measuring the Impact of Research-Practice Partnerships in Education

Interest in our ability to assess the performance and impacts of research-practice partnerships (RPPs) has noticeably increased in the last few years. With a growing number of partnerships popping up all over the country, more resources are being invested in this type of approach than ever before. Understandably then, a variety of stakeholders are interested in knowing what effective partnering looks like. While there has been some work expanding our thinking on how we might assess the outcomes of RPPs, we are still far from having reliable tools for measuring partnership effectiveness.

At the National Network of Education Research-Practice Partnerships (NNERPP), we are well aware of this need and are actively working on further developing our knowledge in this area. At the National Center for Research in Policy & Practice (NCRPP), Caitlin Farrell and colleagues have studied research-practice partnerships and thought about the different questions that emerge when trying to understand the impact of RPPs (see for example here and here). Here, we share a conversation between Paula Arce-Trigatti and Caitlin Farrell about the insights from that work.  

PA-T: Caitlin, as a researcher of research-practice partnerships, what does "RPP effectiveness" mean to you?

CF: You're right: before we can start thinking about how to measure RPP effectiveness, we have to think about how we're defining effectiveness. There are a number of possible outcomes of RPPs that someone might focus on.

Some partnerships seek to bring parent and community voices into discussions on how to improve education, so whether these groups feel more heard and included as a result of an RPP might matter. Others may want to focus on intermediate outcomes, like whether RPP participants' access to research increases or whether they value and use research more due to their involvement in an RPP. (See for example this previous blog post about what happens when educators and researchers work together in partnerships.)

Still others might want to focus on long-term outcomes of RPPs, like changes in student academic or socio-emotional outcomes. Sometimes, it's the innovation, like a new curriculum, an RPP produces that results in increased learning. My colleagues and I at NCRPP are interested in understanding how RPPs may contribute to shifts in research use in policymaking, and with what consequences for policies and practices in school districts.

So, the first question in any discussion about RPP effectiveness needs to be, what possible consequences of RPPs do we value and want to track?  

PA-T: One thing we've seen in the RPP field is that there is no one shared focus for RPP effectiveness emerging yet. What's contributing to that divergence?  

CF: One issue is that different stakeholders have different kinds of questions related to RPP effectiveness, and each group likely has different intended uses for the data gathered, too. So, at NCRPP, we're policy researchers who focus on questions with an organizational lens. One NCRPP study focuses on understanding how ideas from RPPs contribute to organizational policies, practice, and routines, for example. A major goal of ours is to use the data we gather to help generate empirical findings and develop theory for the field more broadly.  

In contrast, an individual RPP may have specific goals that are a part of their localized theory of action. Their gathering of RPP effectiveness measures is then useful for formative purposes: to test that theory and also refine their strategies as an RPP so they can work more effectively and efficiently.

A third potential perspective comes from the questions arising in the funding community. A funder of RPPs may be most interested in understanding the return on their investment, focusing on a particular outcome based on their organization's overarching mission. They may be interested in developing criteria for comparing investments, evaluating the "success" of past investments, or guiding future funding decisions.

These different foci, and the potential uses of the evidence gathered, all contribute to the variation we're seeing in the field today. It's important to note that this variation is not a bad thing. It just presents challenges to quickly developing a shared language around RPP effectiveness and requires us to be thoughtful about these different dimensions when designing any kind of measurement tool for RPP effectiveness.

PA-T: Given that the RPP field itself is in its infancy, it makes sense that we are only recently able to say something meaningful about impact, performance, and effectiveness of partnerships. What, in your opinion, are the biggest challenges to assessing RPP effectiveness?

CF: Figuring out what outcomes you're interested in, for whom, and for what purposes is the first place to start.

Some inquiries, like those focused on the return on investment of RPPs, may also raise the question: RPPs as compared to what else? Figuring out what makes for the best counterfactual is not an easy task.

Say, for instance, you want to understand the impact that an RPP focused on college readiness had on the students' college attendance rates in one district. What would be the best hypothetical comparison here? One might argue that you should compare those outcomes against outcomes from a similarly situated district that had no researcher involvement at all. Or, should you compare it against a district working with a researcher in highly traditional ways, where the research questions were largely developed by the researcher, and findings eventually were published as peer-reviewed journal articles? Or, perhaps you should focus on the RPP as compared to other kinds of external partners who engage with districts, like for-profit organizations or advocacy groups? While not all questions related to RPP effectiveness will require this kind of comparison group, some will, and we will need to think carefully about what makes the most sense.

Being able to determine if an RPP had a certain impact is only one part of the story. Equally important is understanding the underlying chain of change that contributed to particular RPP outcomes: the how and the why of the progress an RPP was able to make.

Let's return to our hypothetical district that worked with an RPP to improve college readiness. With some investigation, we might learn that RPP members worked together in ongoing ways that supported trust and collaboration. The trust that developed meant that the members made sense of some difficult research findings together, contributing to shifts in how district leaders understood the problem of college readiness. Leaders also trusted researchers to connect them to new information on effective course re-design strategies for increasing college readiness. Together, the partners drew on these new understandings as they re-designed high school courses. The new policy went into effect, and the accompanying professional development helped shift school leader and teacher behaviors. Ultimately, school sites had more supportive college-going cultures, and it was at this point that the district saw increases in college attendance rates.

Understanding this pathway helps make clear the complex, interrelated chain of events, actors, behaviors, and conditions that are involved in achieving these outcomes. It helps us identify particular short-term outcomes (e.g., the development of trust within the RPP) that may contribute to proximal outcomes (e.g., district leaders' use of research in policymaking) and long-term outcomes (e.g., change in student outcomes). When things don't go to plan, we have a better idea of where we might intervene. And, for another RPP looking to replicate this successful experience, this pathway provides a roadmap for understanding what may need to be in place before getting started.

PA-T: There have been some efforts around developing frameworks and definitions to help RPPs assess their impact. How are these helpful for the field, and what do you think are the next steps?

CF: The Henrick et al. framework is my go-to resource to share when someone asks me about RPP effectiveness, whether for researchers who want to study this phenomenon or for participants of RPPs who want to think about the progress they're making towards local goals. The framework lays out five different dimensions, surfaced from conversations with RPP members from different forms of RPPs. They include both process and outcome dimensions; both are critical for understanding RPP effectiveness. The indicators are also general enough that many different types of partnerships, including research alliances, networked improvement communities, design research, or community-based partnerships, may find them useful. The framework is a great starting place for the field, and next steps include developing more specific measures, indicators, and tools to help assess progress.

PA-T: Thanks so much for joining us and sharing your thoughts with us, Caitlin! We look forward to more conversations with you on all things RPP.

CF: My pleasure!

 

Photo: Unsplash


The opinions expressed in Urban Education Reform: Bridging Research and Practice are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
