Opinion

Do QRIS Improve Student Outcomes?

By Sara Mead — August 28, 2013

In recent years, most states have moved to adopt Quality Rating and Improvement Systems (QRIS) as a market-based approach to improving children’s early learning. QRIS typically assign early care and education providers a rating of one to five “stars” based on the extent to which they meet certain quality features defined in the QRIS, such as teacher credentials, class and group sizes and adult-child ratios, and managerial or administrative features; publish these ratings to help inform parent choices; and in some cases offer financial incentives or rewards to programs with higher star ratings.

QRIS are based on two key premises: First, that most children are likely to remain in market-based preschool and child care options paid for by their parents, so efforts to improve the quality of early learning should focus on using market forces to create incentives and rewards for higher quality and to drive parent demand toward better programs. Second, that by creating a common quality metric and set of standards across a variety of program auspices (child care, state pre-k, Head Start), QRIS can help move our patchwork nonsystem of early care and education in the direction of a more aligned system (see these excellent previous posts by Louise Stoney for a fuller discussion of the potential benefits of QRIS). Both arguments have significant appeal, which, combined with the Obama administration’s push for QRIS in the Race to the Top Early Learning Challenge, has motivated more than 40 states to implement or pilot QRIS at a statewide or regional level and most of the rest to begin planning to do so.

But as states invest resources in QRIS, a fundamental question remains: Do QRIS work? More specifically, are providers that receive higher QRIS ratings associated with better student learning outcomes? And does the implementation of QRIS lead to improvements in early learning outcomes for children in a state over time? Efforts to date to answer these questions have produced mixed results.

A new study published in the journal Science, however, sheds troubling light on the question. The researchers used an ingenious approach to evaluate whether higher QRIS ratings are likely to be associated with improved student learning outcomes. They drew on data from the SWEEP and Multi-state studies of preschool, which from 2001 to 2004 collected particularly rich data on quality and student outcomes in a sample of more than 700 preschools participating in state pre-k programs in 11 states that reflect a variety of quality standards and approaches to pre-k. The researchers used this data to determine what “star rating” the studied preschools would have received under each of nine different existing state QRIS, as well as under two “generic” QRIS models that the researchers created to reflect common features across state systems. The results are fascinating.

  • First, the distribution of ratings that preschool programs would have received varied considerably between different states’ QRIS models. Under one state’s QRIS model, 85% of preschools studied would have received the highest, 5-star rating, while under another, only 10% would. Under a third state’s model, 22% of preschools would have received the lowest, 1-star rating, the remaining 78% would have earned 2 stars, and none would have earned a higher rating.
  • More importantly, researchers found little to no evidence that earning higher ratings, whether on a generic QRIS or on specific state models, predicted improvements in student learning outcomes.
  • Finally, researchers found that most of the four categories of quality features QRIS looked at--teacher qualifications and experience, classroom environment, family activities, and group size and adult-child ratios--were also not correlated with improved student learning outcomes. Classroom environment, typically measured by the ECERS, had some correlation with children’s learning. But the associations with teacher qualifications, family activities, and group sizes/adult-child ratios were weak and inconsistent (and, in the case of family activities, negative nearly as often as positive). Researchers also looked at a fifth dimension of preschool quality that is typically not included in most established QRIS--the quality of adult-child interactions as measured by the Classroom Assessment Scoring System (CLASS)--and found a stronger predictive relationship between this measure and child learning outcomes than for any of the more common components of QRIS.

In general, these results should raise some concerns about the current rush toward QRIS. That said, a few caveats are in order: First, the researchers were unable to fully reconstruct the ratings that programs would have received under some states’ QRIS, because many of those systems include administrative features of programs (such as their management and business practices) that the SWEEP and Multi-state studies did not measure. These features are unlikely to be more directly related to student learning than those the researchers were able to examine, however. Second, and more significant, the sample of programs the researchers examined was restricted to state-funded preschool programs, which typically face higher regulatory requirements and standards than the minimum requirements for licensed child care, and therefore are not representative of all the types of programs participating in state QRIS. It’s possible that QRIS and the factors they measure do have a predictive relationship with learning and development outcomes for programs lower on the quality spectrum. That said, because standards for state pre-k programs vary, there is significant variation in quality even among the state-funded preschool classrooms included in the SWEEP and Multi-state studies, so this caveat should not be overstated. Moreover, even if this is the case, it actually serves to underscore a concern I’ve long had about QRIS: Systems designed to encompass the full range of providers under different auspices and a broad continuum of quality may not pay adequate attention to the instructional quality features that differentiate truly high-quality preschool programs that prepare children to succeed in school. These new findings suggest that may be the case.

The researchers suggest three main conclusions from these findings: First, that it may make sense for QRIS to place less weight on the array of input and practice indicators that are not predictive of learning. Second, that these systems should instead focus on observational measures of adult-child interactions that do predict student learning outcomes. This makes sense to me (I’ve long been concerned that many existing QRIS focus too much on dictating inputs and specific practices), and it’s also the direction that some “second-generation QRIS” efforts and Early Learning Challenge grant states are pursuing. But it’s important to be realistic here: One reason that many states focus on simple input indicators and do not include interaction measures in their QRIS is that the former are a lot cheaper and easier to assess than the latter. Developing QRIS that include robust measures of adult-child interactions will require resources many states may be unwilling to provide at this point.

Finally, the researchers note that, “As QRISs become more commonplace and pre-K programs plan to expand enrollments, it is increasingly important that ratings link to children’s learning to ensure that states incentivize and improve the aspects of quality that matter most.” As I noted last week, there’s a growing recognition in the early childhood space that, if we’re serious that the ultimate goal of early learning investments is improved learning results for children, we’ve got to find smart, responsible ways to start looking at those results in how we evaluate program quality. That doesn’t mean high-stakes tests or K-12 style accountability ratings for preschools. But it does mean we need to have a serious conversation about how and which learning outcomes factor into discussions about preschool quality.

The opinions expressed in Sara Mead’s Policy Notebook are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.