The panelists selected to revamp federal teacher education reporting requirements certainly have their work cut out for them.
They've wrapped up day one of negotiated rulemaking on the requirements in Title II of the Higher Education Act. Even as this item is published, day two is getting under way.
This part of federal law is not particularly well known. In brief, Title II calls for states, the federal government and programs that prepare teachers to release "report cards" on the state of teacher preparation based on a variety of factors, particularly pass rates and scale scores on licensing tests. The data requirements differ based on the level of reporting, and there are lots of data points collected—more than 400 for institutions alone.
Any changes the panel considers carry some big implications for teacher education. The U.S. Department of Education hopes to integrate its push for outcomes-based "teacher effectiveness" policies into the teacher preparation sphere through this revamp of the requirements, including consideration of value-added impact data. Most of the current requirements are input-based.
Negotiated rulemaking, or "neg-reg" as it's known in Washington parlance, is a complex process in which a group of panelists come up with draft guidelines for the federal agency. Then, the agency releases a proposed rule for public comment and revision, and ultimately publishes a new, binding final rule.
Let's take a look at the issues on the table.
First, although this week's meetings are just the start of what will be a several-months-long process, it's worth reviewing the basic mandate. Essentially, the Education Department wants to discard some of the more obscure pieces of information currently collected under HEA Title II in favor of more relevant data that take into account the impact of teacher preparation on candidate effectiveness.
Second, it wants to make sure that only programs deemed "high quality"—a definition never set down in law—can offer TEACH grants, which subsidize tuition for candidates who agree to serve in high-needs schools for four years.
Yesterday's discussion was a bit loosey-goosey at times, so I'll do my best to outline what seemed to be the big themes of Day 1. Ultimately, they focused on the current Title II reporting system, its flaws, and proposed tweaks to those measures.
Mixed utility: Teacher education reporting requirements were first instituted in the 1998 HEA, with the goal of establishing some national data about where teachers are trained.
In yesterday's rulemaking session, however, the negotiators spent a lot of time weighing the various purposes these reports could be used for. Should they be used for program accountability? To help programs identify strengths and weaknesses so that they can improve their training? Or should they serve as a consumer tool, to help potential teacher candidates, parents, and school districts compare the strengths of various schools and programs?
As one negotiator, Beverly Young of the California State University system, pointed out, data collected for one purpose may not be particularly helpful for another purpose.
Accessibility & Transparency: This point was underscored by Sarah Almy, a negotiator from the Education Trust, a Washington-based advocacy group. She noted that at some level, it doesn't matter how much you tweak the data points if the resulting information is obscured, hard to find, or otherwise not accessible. (The institutional-level reports, in particular, are difficult to come by.)
Will negotiators make an effort to emphasize how programs should report, or create standards for dissemination—and can they do that under the scope of current law?
Existing data limitations: Panelists said many of the current reporting requirements are not particularly useful, like the one about whether a program collects resumes as part of the admissions process, because they don't seem linked to a process for creating effective teachers.
An important point was made by Jim Cibulka, the president of the National Council for Accreditation of Teacher Education. He noted that the research literature on teacher education is pretty sparse on the specific attributes of training programs that seem to correlate with improved student achievement. That's obviously something of a problem when you're trying to design a reporting tool that gets at this issue.
Still, Cibulka outlined three discrete features of teacher education, highlighted in a 2010 National Academies research report as being linked to program effectiveness: candidate qualifications, content knowledge, and clinical experiences. Those seem likely to guide future discussion.
The current reporting does touch on these areas. But the problem, Cibulka said, is that the way data on these issues are currently collected obscures more than it reveals.
For instance, the HEA statute requires states and programs to report information on program admissions. But right now, the Education Department only requires teacher preparation programs to answer "yes" or "no" to questions about whether they have minimum admission requirements or require an interview. They don't actually have to spell out the substance of those requirements.
Another example: Programs are asked to submit "assurances" that they prepare teachers on technology and how to work with diverse populations. Most programs, of course, simply say "yes."
At least a couple of panelists, David Steiner of Hunter College and George Noell of the Louisiana Education Department, suggested revisions in this area. For example, a school could report the average grade-point average of teacher candidates as well as the minimum score, and so on with entrance-test requirements, they said.
Test scores: The HEA requires heavy reporting on licensure test score results by programs. Several panelists opined that such exams, while producing reliable data, aren't particularly valid—that is, they don't capture all that much of what beginning teachers should know and be able to do.
There was also debate about how they are currently used. When the Title II reporting requirements came into effect in 1998, programs only needed to report pass rates on the tests. But soon after, there was a perception that programs were inflating pass rates by making the tests an entry rather than an exit requirement of their programs.
This perception of gaming, fair or unfair, prompted changes in the 2008 rewrite of the HEA, which required data broken out by the point at which candidates took the test, such as after coursework or after completing the program. Negotiators had varied opinions about whether this change was an improvement on the original law, or made things even worse.
Also, quite a few negotiators, mostly those representing higher education institutions, pushed for some acknowledgement in the rules of performance-based licensing tests, such as the Teacher Performance Assessment, which 24 states are now helping to develop and pilot.
Clinical fieldwork: Negotiators just touched on this "black box" element of teacher education, and there will probably be a lot more discussion of it. Cibulka of NCATE said he found many of the current requirements on this topic—such as the number of full-time clinical instructors at each program—unhelpful, and suggested several improvements, which you can see in the list below.
New suggestions: Here's just a sampling of some of the ideas that were thrown out for possible inclusion in new reporting requirements:
• What are the mean and the minimum high school GPA/college GPA/college entrance exam scores, and how do they compare to those of the institution as a whole?
• What is the percentage of minority students enrolled in the program, and how does that compare to the institution as a whole?
• What are the training or qualifications for the selection of supervisors of student teachers?
• What is the graduating GPA of candidates, compared to those in the institution as a whole?
• Does the teacher prep program determine student teaching placements for all candidates?
• What is the protocol for evaluating candidate performance in clinical work?
• Are there formal partnerships with school districts where candidates are placed for clinical experiences?
A few other things worth mentioning: Even on this rulemaking panel, there seems to be a bit of disagreement about the state of teacher preparation and whether it's in drastic need of improvement or is mostly doing an OK job. There were a couple of testy exchanges between negotiators on this point.
Likewise, there was an interesting moment at the end of the day when Eric Mann and Katie Hartley, the two current teachers on the panel, recounted their student-teaching experiences. Both said they had little to no supervision during student teaching and felt their colleges should have been held more accountable for overseeing and guiding their practice.
But two other panelists, both teacher-educators, suggested that these teachers' experiences weren't representative of teacher ed. as a whole—and certainly not of their own institutions.
Clearly, this all leaves a lot of room for debate. And one of the hardest issues hasn't even been discussed yet—whether "value-added" test scores can appropriately be introduced as a new way of gauging teacher preparation programs.
Reaching a "final consensus" on all of the issues is important, however, because of the way negotiated rulemaking works: All the negotiators have to agree in order to commit the Education Department to putting out proposed rules aligned with that consensus.
Otherwise, the Education Department can put out what it wants. And that, one suspects, is a prospect that would be pleasing to no one on the panel.