
How Did Parents as Teachers Win an i3 Grant?

By Sara Mead — September 07, 2010

Race to the Top scoring issues have come in for a lot of attention and griping over the past few weeks. But, like many folks I’ve talked to in the education policy space, I actually think there may be bigger issues brewing with the Investing in Innovation (i3) competitive grants program. Lots of folks were surprised and critical when the final list of 49 i3 grant winners came out in August. But it wasn’t really possible to do much analysis: the Department originally posted application narratives only for the largest scale-up grant winners, losers haven’t yet received the score sheets and comments that would show where they lost points, and the Department’s score standardization process for Validation and Development grant winners is incomprehensible.

Now that the Department has posted redacted narratives for the Validation and Development grant winners, that may be changing.

From the start, I was surprised by one i3 grant winner in particular: the Parents as Teachers (PAT) National Center, which won a $20.5 million grant to implement a parent-training program called BabyFACE* at 24 Bureau of Indian Affairs schools. Now that I’ve seen PAT’s grant application, I’m still confused.

For starters, I’m not sure how this proposal was even eligible for an i3 grant. The proposed BabyFACE project is a home visiting and parent education program, focused on working with parents and children in the years prior to school entry. Now, improving early childhood education was certainly a competitive preference priority under i3, for which applicants could earn additional points. But Department of Education staff clearly indicated in i3 guidance and briefings for applicants that, in order to meet the absolute priorities for the i3 grant competition--in order to be eligible for a grant--applicants also had to address grades K-12. (Listen to comments from OII Assistant Secretary James Shelton at 1:40:50 in this recording of the Department’s Baltimore TA session for a very clear statement of this.)

PAT’s proposal does not appear to meet this requirement. To be sure, the application asserts that by working with parents to improve children’s school readiness, the project will ultimately lead to improved 3rd grade reading scores. And another program on which this proposal is based, the FACE program currently implemented in 39 BIA schools, does include a focus on preK-3rd grade early literacy. But the BabyFACE project proposed here provides services entirely to children and families in the preschool years, and does not include any services or interventions for kindergarten or the early elementary grades.

So how did this application win an i3 grant? I don’t know.

The BabyFACE project does appear to meet the criteria for the early learning competitive preference priority--although, as Laura Bornfreund at Early Ed Watch has noted, some of the i3 winners who received competitive preference points for the early learning priority do not actually appear to have met the criteria as laid out in the Department’s i3 guidance--another scoring issue we don’t have time to get into here (but read Laura’s post to understand it better).

The other thing that struck me in PAT’s application was the high reviewer scores it received for “Strength of Research, Significance of Effect, and Magnitude of Effect.” One reviewer even gave PAT full points on this criterion, writing “no weaknesses noted” about the evidence for PAT’s effectiveness. That reviewer may not have found weaknesses in the evidence of PAT’s effectiveness, but that in itself is evidence of a weakness in the i3 competitive grant process: it is a poor way to determine the actual quality of the research base behind a particular program or innovation.

Now, I don’t dispute that PAT’s research base meets the threshold standards for “moderate evidence” required in the i3 validation grant category. PAT has been subject to numerous independent evaluations, primarily longitudinal quasi-experimental evaluations, but also three randomized controlled trials. Overall, these evaluations give a mixed, but promising, picture of PAT’s impacts on young children’s school readiness and development. But you wouldn’t know that from the application.

While there are indeed both quasi-experimental and randomized controlled trial studies that find evidence of positive impacts from PAT, some of these evaluations have found no significant impacts on child development outcomes, have found significant effects on only a small number of many indicators (a situation in which the likelihood of false positives is high), or have found effects only in later subgroup analyses following an initial null finding. In particular, a 2001 RCT study of PAT did not find any significant differences between the treatment and control groups on numerous child development outcomes; follow-up analyses in 2002 found positive impacts for the treatment group on 3 of 45 possible outcomes. Similarly, a 2009 study found positive impacts on only two of the multiple child development outcomes studied. Yet the results of these studies are reported in the application as if they were purely positive findings. Ironically, one of the reviewers noted as a strength in comments that some of the PAT evaluations had been conducted by SRI, apparently not realizing that one of SRI’s evaluations found no significant differences in child outcomes between PAT children and a control group.

Beyond the mixed findings from randomized controlled studies of PAT, there are also weaknesses in the quasi-experimental evidence, including a lack of baseline data, small sample sizes, and differences between treatment and control groups in some studies. One reviewer deducted a point from PAT’s score for these weaknesses, but the other did not. Further, the gap between the positive findings in quasi-experimental studies and the much more mixed results in randomized controlled trials is particularly problematic for a program like PAT: the program’s theory of action relies on influencing parent behavior, and parents who self-select into parent training programs may be more likely to do other things that support their children’s development than those who do not, even after controlling for other variables. Had reviewers had full information on the body of research on PAT outcomes, it is unlikely they would have awarded the program such high points.
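For the statistically inclined, here’s a quick back-of-the-envelope sketch of that false-positives point. It treats the 45 outcomes as 45 independent tests at the conventional 0.05 significance level--an assumption of mine for simplicity, since real child-development measures are correlated, and this is an illustration of the general problem, not an analysis of any specific PAT study:

```python
# A rough sketch of the multiple-comparisons problem described above.
# Assumptions (mine, for illustration only): 45 independent outcome tests,
# each at the conventional alpha = 0.05 significance level.
from scipy.stats import binom

n_outcomes = 45  # number of outcomes examined, as in the 2002 follow-up analyses
alpha = 0.05     # conventional threshold for "statistical significance"

# Expected count of "significant" findings if the true effect is zero everywhere
print(f"Expected false positives: {n_outcomes * alpha:.2f}")  # ~2.25

# Chance of at least 3 significant findings arising purely by chance
# (binom.sf(2, ...) gives P(X > 2), i.e., P(X >= 3), for a binomial count)
print(f"P(3 or more by chance): {binom.sf(2, n_outcomes, alpha):.2f}")  # ~0.39
```

Under those assumptions, a program with no true effect at all would still be expected to produce about two “significant” results out of 45, and three or more would turn up by chance roughly 40 percent of the time--which is why a handful of positive findings scattered among many null ones is weak evidence.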

The point here is not to impugn PAT--any organization competing for a grant like this will seek to present the evidence for its effectiveness in the best possible light.** The point is that an application process like this is not a very good way to assess the full weight of the evidence of a project’s effectiveness, because applicants can choose what to highlight or exclude from their proposals. As long as an applicant has at least some evidence that can be presented as meeting the “moderate evidence” criteria, the points it actually receives depend as much on savvy grant-writing as on what the full weight of the evidence says.

I haven’t even gotten into my questions about how reviewers were instructed to score “number of students served” and “annual costs per student” in the “Strategy and Capacity to Bring to Scale” section of the grant! PAT requested $20.5 million to serve 2,500 children--roughly $8,200 per child over the life of the grant--and got full points from two of three reviewers on this section. Is that good cost-effectiveness for the proposed project, and sufficient scale for the goals of the validation grant? I have no idea. And more importantly, I have no idea what reviewers were told about how to score this section.

Digging into the i3 grant winners’ applications and scores is a much more daunting process than digging into the RTT results, and one I don’t have a great deal of time for. But based on what I’ve seen so far, I have some concerns. I hope that more people will take advantage of the fact that the application narratives and score sheets for all i3 winners are now online, and that the Department will soon release the narratives and scores of i3 losers as well, to support fuller analysis. As someone who has previously made the case for an increased federal role in supporting educational innovation, I have some investment in the ideas behind i3, but because of that I also believe it’s critically important that the program work well.

*yeah, it’s education, we like cutesy acronyms.
**disclosure: I worked on two i3 grant applications, and certainly tried to present the evidence for those proposals in the best possible light.


The opinions expressed in Sara Mead’s Policy Notebook are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.