
Where is the Accountability in Education?

By Tom Vander Ark — January 13, 2015 3 min read

Andrew R. Coulson

For too long, we’ve held just teachers and “teacher quality” accountable for student outcomes. The glare of the student-results spotlight has been focused intensely on teachers alone, despite compelling evidence that instructional content and programs are almost as important to student learning. (In a scathing white paper, “Choosing Blindly: Instructional Materials, Teacher Effectiveness, and the Common Core,” Matthew Chingos and Grover Whitehurst blast both the programs themselves and the lack of training and development around them.)

Not only is it past time to hold content and programs accountable, but we are out of excuses not to. A recent study published by WestEd demonstrates that
cost-effective and rigorous evaluations of new programs can now be pursued at any time in any state.

The diversity of content available today (scope, vehicles, approaches, and instructional design) is far greater than when teacher selection committees simply chose among the “Big 3” publishers’ textbooks. Spurred by higher standards, we are aiming as a nation to transform student learning outcomes to be much deeper than in the past. Yet the only things that are actually going to change are content and programs: the power of the tools and training we provide to our teachers.

Technology-driven revolutions happen when people readily adopt new, immensely more powerful tools to get the work they want done, done. This came naturally for printed books, spreadsheets, email, and smartphones. But in education, it has been extremely challenging to determine which tools actually work, in what contexts, and to what ends. This is due to enormous variability in tool use and school culture, and it has led, understandably, to skepticism about replicating anecdotal results. Instead, we need credible evaluations, up to and including student-level randomized trials. But these are complex, logistically challenging, and high-cost, and they are notoriously sparse and slow.

Now that all states have testing systems in grades 3-8, there is consistent grade-level information about proficiency rates, and some ability to measure growth rates. This enables any content or program to show its ability to add value shortly after state assessment results are released each year.

This grade-level evaluation method is straightforward and replicable across years, states, and program types. It also works for every user (school site) in a state, taking into account all real-world variability, and can easily report out on hundreds of schools and tens of thousands of students.

To use the method, the program must be:

  1. In a grade level (e.g., 3rd-8th) and subject (e.g., math) that posts public grade-average test results

  2. A full curriculum program (so that summative assessments are valid)

  3. In use in 100% of the classrooms/teachers in each grade (so that the public grade-average assessment numbers are valid)

  4. New to the grade, within the first year or two of adoption

  5. Adopted at roughly 25 or more school sites within a state (in order to provide a sufficient “n”).

When these conditions are met, a study that meets What Works Clearinghouse standards of rigor is possible without prior planning, as this WestEd study of ST Math results shows. Every program, in every state, every year, that meets the above criteria can be studied, whether for the first time or for replication. The data are waiting in state and NCES research files, ready to be used in conjunction with complete and accurate records of program usage.
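To make the mechanics concrete, here is a minimal sketch, in Python with pandas, of what such a grade-level analysis could look like. The file names, the column names (school_id, grade, year, pct_proficient, adoption_year), and the simple growth comparison are illustrative assumptions for this post, not the WestEd study’s actual code or data layout.

    # Minimal sketch of a grade-level program evaluation using public
    # assessment data. All file and column names here are hypothetical.
    import pandas as pd
    from scipy.stats import ttest_ind

    # Public grade-average results: one row per school x grade x year.
    scores = pd.read_csv("state_grade_results.csv")    # school_id, grade, year, pct_proficient
    # Program usage records: sites where the program was newly adopted.
    adoptions = pd.read_csv("program_adoptions.csv")   # school_id, grade, adoption_year

    GRADE, YEAR = 5, 2014  # grade and first full school year of adoption

    # Year-over-year change in grade-average proficiency for each school.
    wide = (scores[scores["grade"] == GRADE]
            .pivot(index="school_id", columns="year", values="pct_proficient"))
    wide["growth"] = wide[YEAR] - wide[YEAR - 1]

    # Schools that newly adopted the program in the evaluated grade/year.
    adopters = set(adoptions.loc[(adoptions["grade"] == GRADE)
                                 & (adoptions["adoption_year"] == YEAR),
                                 "school_id"])
    assert len(adopters) >= 25, "need ~25+ adopting sites for a sufficient n"

    treated = wide.loc[wide.index.isin(adopters), "growth"].dropna()
    others = wide.loc[~wide.index.isin(adopters), "growth"].dropna()

    # Simple comparison of mean proficiency growth across all real-world sites.
    t, p = ttest_ind(treated, others, equal_var=False)
    print(f"adopters: {treated.mean():+.2f}  others: {others.mean():+.2f}  p = {p:.3f}")

A study of WWC caliber would go further (baseline-equivalence checks, matched comparison schools, covariate adjustment), but the raw ingredients are exactly the public data described above.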

This level of accountability may not yet be palatable to many publishers. Showing robust, positive results will require true program efficacy. There will be many findings of small effect sizes, many implementations that fail, and many results that lack statistical significance. Third-party factors may confound the results. Publishers would need to report failures as well as successes. But the alternative is to continue to rely on peer word-of-mouth recommendations.

When this research method becomes the industry norm, imagine the renewed competition to improve the tools we give our teachers. We must hold ourselves
responsible for imposing accountability in education. Only then will we have a real education revolution.

LEAP Innovations, a Chicago spinout of New Schools for Chicago, is an innovative nonprofit sponsoring trials of new EdTech tools and a great example of well-constructed trials for new products.

Andrew R. Coulson is the Chief Strategist at MIND Research Institute. Follow Andrew on Twitter at
@AndrewRCoulson.

The opinions expressed in Vander Ark on Innovation are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.