By guest blogger Stephen Sawchuk.
This post originally appeared on the Teacher Beat blog.
The developers of a new performance-based teacher-licensing test have a clear message for states that want to use it: Set the passing bar high, but not too high.
Striking that balance will be among the key decisions states must now make about the edTPA, which had its official launch at a press conference Friday in Washington. Where states set the cutoff score will determine which candidates will be granted or denied a teaching license; as such, it's the major "stake" attached to the performance-based test.
In development since 2009, the edTPA includes a 15-minute video of each candidate's teaching. Seven states—Hawaii, Washington, Minnesota, New York, Tennessee, Georgia, and Wisconsin—have formally committed to using the edTPA for certification, or to gauge the quality of teacher-preparation programs; four other states are considering adopting it. (Twenty-two other states and jurisdictions have individual programs that have piloted the test.)
A technical report issued Friday by the Stanford Center on Assessment, Learning, and Equity, or SCALE, recommended that states not set a passing bar higher than 42 out of 75 total points—a cutoff point that would allow only 58 percent of candidates to pass on their first try. The figure is based on field-test data collected over the past year.
States that choose a different cutoff point would see a different yield: A passing score of 37, for example, would boost the passing rate to 78 percent of candidates.
The edTPA is owned by Stanford University, home to SCALE; test giant Pearson is responsible for the exam's administration.
Most of the data in the technical report is based on a 2012-13 field test of some 4,000 teacher candidates. EdTPA covers 27 different subjects and grade levels; 23 were included in the field test.
Among the field test findings:
- Candidate scores were highest in secondary science and lowest in middle school mathematics; secondary teacher candidates tended to do better than elementary teachers overall.
- Candidates had the highest scores on the lesson-planning segment of the exam and the lowest on using assessment data to tailor instruction.
- Each exam is scored by two readers, and generally they agreed on what score the candidate merited.
- Analyses showed that the exam was strongly aligned with the content it sought to measure.
Supporters of the exam include the American Association of Colleges for Teacher Education, a key partner that helped find programs willing to pilot the new exam. AACTE has also pushed for federal policy to recognize the edTPA in states' accountability systems for teacher preparation, contending that it is more valid than "value-added" measures for judging the quality of such programs, and gives better information to help them improve.
While many teacher-education programs have embraced the exam, some have protested Pearson's involvement, or fear the exam will lead to a standardization of teacher-prep curricula.
Meanwhile, many policy questions remain for states that plan to use the edTPA. Chief among them is whether to adhere to the group's cutoff-score recommendation or to go their own direction. The setting of cut scores is a process that's as much political as educational: For certification, the questions include whether too high a bar might prevent underrepresented populations from teaching or cause a shortage of candidates (the latter is generally not a concern in the elementary field).
Most licensing tests today measure basic skills or content rather than teaching ability and pose few challenges, in part because candidates can retake them over and over: Federal data show pass rates on current exams average about 96 percent.
Another point worth considering: It isn't yet clear whether the exam predicts teacher effectiveness—in other words, whether teachers who achieve a certain score on average go on to help their students learn more.
In part, that's a function of the newness of this endeavor. It takes a while to have enough year-to-year data to conduct such analyses. And to be fair, most current licensing tests haven't been subject to validity studies of this type.
Still, this is not a trivial matter because at least three states—Washington, New York, and Minnesota—plan to use the results for licensing or program-approval decisions beginning in 2014.
(It's worth noting that a few research papers by SCALE found links between teachers' scores on PACT, a California exam on which edTPA was partly based, and on students' standardized-test performance in reading and math. Those studies were fairly small, though, and the two exams aren't identical.)
So Friday's announcement is, in many ways, a beginning. We'll be interested in following up on the many implications in the weeks, months, and years to come.