
Ramifications of the Performance/Effectiveness Distinction for Teacher Evaluation

By Justin Baeder — November 04, 2011

Guest post from Rod McCloy & Andrea Sinclair

In our initial blog entry, we argued that it is essential to differentiate performance (behaviors people engage in on the job; i.e., what people do) from effectiveness (the results of performance) when conducting teacher evaluation.

In this entry, we discuss how doing so can clarify the discussion surrounding teacher evaluation.

We first must specify just what it is we intend to evaluate: performance? effectiveness? something else? It is critical that we answer this question clearly, because performance and effectiveness are different criteria determined by different variables, which suggests the potential for different interventions to improve them.

Recalling our sunglasses salespersons from our initial blog entry, we believe that the performance/effectiveness distinction has several implications for the classroom:

    • The best-performing teachers will not necessarily be the most effective teachers (and vice versa);

    • Placing effective teachers from one setting into a markedly different setting (e.g., moving highly effective teachers from a suburban school to a low-performing urban school) could lead to disappointing outcomes;

    • By focusing on teacher performance, we can maximize teacher effectiveness as a by-product (recall the suggestion to move the higher-performing but less effective salesperson from Seattle to Miami).

Indeed, measuring teacher performance (practices/behaviors teachers engage in) gives us the best chance of providing (a) teachers with useful developmental feedback on their practices and (b) educators/administrators with input on teacher training programs.

Most current initiatives require schools to include students’ standardized test scores or academic achievement (indices of effectiveness rather than performance) in their teacher evaluations. For example, in states with winning applications for Race to the Top grants, this student information must constitute at least 50% of the overall teacher evaluation.

Those who use such information in their teacher evaluations should be cautious about the attributions they draw from it. In accordance with the theory of performance (Campbell, McCloy, Oppler, & Sager, 1993), one should not attempt to identify teacher-level interventions or make judgments about teacher performance by examining outcomes contaminated by influences beyond the teacher’s control. Student achievement data can alert evaluators when it is worth looking more closely at a teacher’s performance ratings to determine whether something about the teacher’s performance contributed to student achievement, but measures of teacher performance are still required to identify the specific behaviors that might need to improve. Again, effectiveness is important and useful in its own right, but it is not the same thing as performance, and the two should be kept distinct.

You might be asking, “But what about Value-Added Modeling (VAM)? It purports to isolate teacher impact on student performance by statistically controlling for external influences. Doesn’t this mean that VAM provides information about teacher performance?” To our minds, there are at least two shortcomings of VAM with regard to teacher performance. First, VAM is at best an indirect means of obtaining information about teacher performance. We believe it preferable to define performance explicitly, rather than taking performance to be the residual of a subtractive process via statistical control of certain select “other factors.”
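For readers who want a concrete picture of what we mean by a “residual of a subtractive process,” here is a deliberately simplified sketch (in Python) of a covariate-adjustment value-added estimate. It is not any operational VAM; the model form, variable names, and covariates are hypothetical, chosen only to illustrate how the teacher’s “effect” emerges as whatever the statistical controls leave unexplained.

```python
# A minimal, illustrative sketch of a covariate-adjustment value-added estimate.
# Real VAM implementations are far more elaborate (e.g., mixed-effects or
# layered models); all data and variable names here are hypothetical.
import numpy as np

def value_added_estimates(prior_scores, covariates, current_scores, teacher_ids):
    """Toy value-added model.

    prior_scores:   (n,) prior-year test scores
    covariates:     (n, k) student/classroom controls
    current_scores: (n,) current-year test scores
    teacher_ids:    (n,) teacher assignment for each student
    """
    prior_scores = np.asarray(prior_scores, dtype=float)
    covariates = np.asarray(covariates, dtype=float)
    current_scores = np.asarray(current_scores, dtype=float)
    teacher_ids = np.asarray(teacher_ids)

    # OLS: predict current scores from prior scores and the controls.
    X = np.column_stack([np.ones(len(prior_scores)), prior_scores, covariates])
    beta, *_ = np.linalg.lstsq(X, current_scores, rcond=None)

    # Whatever the controls fail to explain is the residual ...
    residuals = current_scores - X @ beta

    # ... and each teacher's mean residual is treated as her "value added."
    return {t: float(residuals[teacher_ids == t].mean())
            for t in np.unique(teacher_ids)}
```

Notice that nothing in this computation references a single teacher behavior; the estimate is defined entirely by what has been subtracted out.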

If you want to measure teacher performance, then measure it directly. Doing so will force you to delineate the behaviors of interest (i.e., what you define performance to be) and increase your chances of identifying promising interventions for improving performance (and, thereby, effectiveness). Second, VAM seems to limit the definitions of both teacher effectiveness (to students’ test scores) and teacher performance (to only those behaviors that increase student achievement on tests, and this assumes that we know which behaviors those are). Thus, both effectiveness and performance as defined by VAM are likely deficient concepts.

Please do not let our concerns regarding VAM lead you to believe we are anti-testing. On the contrary, we are staunch supporters of standardized testing. Nevertheless, current VAM seems to discount the inherent complexity of teacher performance and teacher effectiveness, artificially constraining their definitions and indicators. We enthusiastically endorse the use of empirical data, but convenience (students’ test scores are available and standardized, at least within states) must not trump relevance (students’ test scores tell us little about specific teacher behaviors) when choosing data to serve as the foundation for high-stakes personnel decisions.

Our next entry will present our recommendations for developing teacher evaluation systems.

Dr. Rodney A. McCloy is a Principal Staff Scientist for the Human Resources Research Organization (HumRRO). With more than 20 years of experience conducting and directing personnel research, he serves as an in-house technical expert and a mentor to junior staff. His assessment and testing experience has spanned both cognitive and non-cognitive domains and has involved several large-scale assessment programs (Armed Services Vocational Aptitude Battery, National Assessment of Educational Progress, General Aptitude Test Battery). He has served as adjunct faculty at both The George Washington University and George Mason University. He is a Fellow of the American Psychological Association (APA) and the Society for Industrial and Organizational Psychology (SIOP). He received his Ph.D. in Industrial-Organizational Psychology from the University of Minnesota in 1990.

Dr. Andrea L. Sinclair is a Senior Scientist in HumRRO’s Validity Investigations for Education and the Workplace (VIEW) Program. She conducts research in education, government, military, and private sector settings with a particular focus on performance measurement and program evaluation. She regularly develops performance measurement instruments, surveys, and observation and interview protocols for use in schools. In addition, she regularly advises clients on the validity and reliability of their assessment systems and on the development of competency models. She received her Ph.D. in Industrial-Organizational Psychology from Virginia Tech in 2003.

Reference:
Campbell, J.P., McCloy, R.A., Oppler, S.H., & Sager, C.E. (1993). A theory of performance. In N. Schmitt & W. Borman (Eds.), Personnel selection in organizations (pp. 35-70). San Francisco, CA: Jossey-Bass.

The opinions expressed in On Performance are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.