Research Advisory Board Wants a Higher Bar for Innovation Grants
The national board that advises the U.S. Department of Education on its research operations voted yesterday to weigh in on the proposed rules for the new Investing in Innovation, or "i3," program—and just in the nick of time.
The window for commenting on draft guidelines for the $650 million grant program was scheduled to close at midnight last night, but the six members of the National Board for Education Sciences, at their meeting earlier in the day, approved a resolution that calls for strengthening and expanding the criteria the feds hope to use to decide how to distribute and evaluate grants through the program.
Part of the $100 billion in economic-stimulus funds for education, the i3 program is designed to spur the "next generation" of improvements in education and promote the spread of proven innovations. The biggest awards, or "scale up" grants, would be worth up to $50 million each for programs backed by "strong" research evidence. A second category of "validation" grants of up to $30 million each would go to programs with a "moderate" research track record. The draft rules also call for awarding "development" grants of up to $5 million each for programs with "reasonable research-based findings or theories." (For further details, see my colleague Michele McNeil's story in EdWeek.)
The problem with the draft rules, the board maintains, is that their definition of "strong evidence" and their evaluation requirements for validation grants give equal weight to random-assignment experiments and quasi-experimental studies. Only the former, which involve randomly assigning subjects to either experimental or status quo groups, are considered the "gold standard" for determining whether an intervention works. Said board vice chairman Jon Baron:
"Our thought is that awarding scale-up grants solely on the basis of quasi-experimental studies would likely lead to implementation of some programs that are not effective."
What the department ought to do instead, the board says, is express a clear preference for randomized experiments. The board is not the only group to stake out that position: The Knowledge Alliance, a Washington-based association that represents research organizations, also submitted a comment calling for giving more weight to randomized research in the definition of "strong evidence." That group does not, however, favor setting out the same sort of methodological hierarchy in the criteria for evaluating validation grants, according to Jim Kohlmoos, the association's president.
The board's resolution also says the program should require evaluations to include estimates of program costs, not just benefits, and that the new fund guidelines ought to encourage evaluation designs that keep research costs, and the burden on schools and districts, as low as possible.
But, as Kohlmoos points out, there's also a risk in setting the methodological criteria too high for this program: Will there be enough education programs with a research track record strong enough to qualify for the new grants? Time will tell.