Making Sense of School Improvement Program Evaluations: The Case of TEEM


Staci Hupp of the Dallas Morning News writes that the Texas Education Agency has released a third-party review of the Texas Early Education Model (TEEM), managed by the Texas Health Science Center (THSC) in Houston. The study, conducted by Edvance Research, finds that TEEM does no better at preparing kids for school than other preschool programs. (The report is available with the article.)

In the 2006-07 school year, TEEM served 27,000 children. Ultimately, the program is to be offered statewide via an interesting dissemination model: Head Start and private childcare centers are offered substantial financial incentives to adopt it. The initiative has cost Texas taxpayers $45 million since 2003, and it will cost a great deal more if implemented across the state. So the evaluation matters.

Hupp reports the usual reactions, from those who would like to kill TEEM off and from the developers who want to keep going. What's a policymaker, or taxpayer, to do?

Perhaps the first point to consider is the rapid scale-up of TEEM. When the stakes are perhaps several decades of expenses associated with the education of every Texas preschooler, prudent decision makers should establish a timetable of, and conditions for, "go/no go" decisions. In a program's earliest stages, that suggests working with just enough children to get the timely, reliable, and accurate evaluation data required to support a decision to terminate a research effort, and no more. My own experience with New American Schools' multi-team, multiyear development and scale-up effort suggests that the THSC decision to serve roughly 1,200 students in 100 classrooms in its first year (2003-04), and especially 4,000 students in 250 classrooms in its second (2004-05), was a waste of money. More could have been learned of TEEM's potential, with greater confidence, if THSC had focused its effort on fewer schools.

The second point is that the period of evaluation, just the first two years of the program (SY 2003-04 and 2004-05), is unlikely to supply clear go/no go evidence. No one familiar with program development, scale-up, or evaluation would expect "slam dunk" results for or against TEEM in this time frame: the model itself is new, program leader Dr. Susan Landry described her enterprise as a start-up, initial implementation at any school is disruptive, scaling up implementation support capacity is hard, and the selection of schools for initial dissemination is rarely strategic. After getting past the "this dog won't hunt" hurdle, what program managers and policymakers should be looking for is clues to the factors associated with program success or failure. Not necessarily findings of statistical significance that would support the defense of a doctoral dissertation, but information that, in the hands of those with experience in the creative aspects of program development, offers new avenues for thinking.

OK, so what was the finding from these two years?

The bottom line from the Executive Summary:

There was considerable variation both between and within communities with regards to student performance and teacher outcomes. For about half of the communities, students in the treatment groups (with TEEM) improved more than students in the control groups (without TEEM), and for the other half of the communities students in the control groups improved more than the students in the treatment groups on the student outcome measures. TEEM did lead to overall improvement for teachers, although there was considerable variation, with teachers in both control and treatment groups obtaining both positive and negative difference scores on the teacher outcome measure.

Saturday: Some variation of the “about as good as what we’ve always done” finding seems to be the norm in the evaluation of school improvement programs. What should we make of it? What can we do about it?

3 Comments

I initially posted the Edvance report on my blog on December 7, 2007:

Edvance Report Blasts the State Center on Its Pre-K School Readiness Initiative: Says the Center's Budget is Not Transparent and Shows the Center has Spent 40% -- $8.6 million of Your Taxpayer Dollars -- for Its Own Internal Administrative Expenses

In that post, I highlighted the many significant issues with the State Center and the TEEM model, and included an email I sent to Dr. Landry, to which I have not received a response.

I agree. I wouldn't give the Morning News much credit. They like to give themselves a lot of credit for covering the state ed agency, but they do a crummy job of it.

After examining the Texas Early Education Model (TEEM) and its ties to federal, state, and private funding sources, it seems that TEEM has been designed to be of greater benefit to the adults involved than to the children. Mentors, coordinators, managers, and directors enjoy high salaries. Corporations seeking profits align themselves with the university.

TEEM (adult online training, control of curriculum and instruction, and PDA-based student assessment via Wireless Generation) is a Texas version of Reading First for three- and four-year-olds, and it is not supported in Texas education law.

Millions have been funneled to TEEM/CIRCLE/SCECD/CLI in Texas and beyond, based on in-house self-reports and grant self-reports, without independent oversight.
