Using Value Added to Assess Teacher Effectiveness

November 6, 2009 | Blog

The Association for Public Policy Analysis and Management — an organization not widely known outside of academia and technical policy circles — puts on truly meaty conferences. I’ve attended three APPAM conferences to date, including the Annual Fall Research Conference going on in Washington, DC this week.

Education is merely one strand at APPAM, but the sessions feature some of the biggest names in educational research addressing some very policy relevant issues. The current conference features sessions on value-added modeling, school choice, teacher certification and teacher induction, teacher performance pay, financial aid, college persistence, and more.

The session I attended yesterday on “Using Value Added To Assess Teacher Effectiveness” was excellent. It featured four papers, each of which I will undoubtedly oversimplify in this brief blog post. (I encourage you to seek out the papers and read them closely — below I’ve linked to those that are available.)

One, by Dan Goldhaber and Michael Hansen (University of Washington), suggests that year-to-year correlations in value-added teacher effects are modest, but that pre-tenure estimates of teacher job performance do predict estimated post-tenure performance in both math and reading.

A second, by Julian Betts (UCSD) and Cory Koedel (University of Missouri-Columbia), suggests that bias does exist in value-added models due to student sorting, but that it can be overcome through the use of multiple years of value-added data; further, the study suggests that data from the first year or two of classroom teaching may be insufficient to make reliable judgments about teacher quality.

A third, by Michael Weiss of MDRC, suggests that teacher variability carries implications for measuring program effects within randomized controlled trials when those teachers are not randomly assigned.

And a fourth, by John Tyler (Brown University) and Tom Kane (Harvard University), finds that teacher assessments made using classroom observation rubrics (such as Charlotte Danielson’s) are closely aligned with value-added ratings of teachers.
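The Betts/Koedel point about multiple years of data can be illustrated with a toy simulation (my own sketch, not from their paper): if a teacher's yearly value-added estimate equals a stable true effect plus independent yearly noise, averaging several years shrinks the noise and raises the correlation with the true effect. The effect-size and noise parameters below are purely illustrative assumptions.

```python
# Toy illustration (not from the conference papers): averaging several
# noisy yearly value-added estimates tracks true teacher effects better
# than any single year does. All parameter values are assumptions.
import random

random.seed(1)

N_TEACHERS = 500
TRUE_SD = 1.0    # assumed spread of true teacher effects
NOISE_SD = 2.0   # assumed yearly estimation noise (this is what makes
                 # year-to-year correlations modest)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

true_effects = [random.gauss(0, TRUE_SD) for _ in range(N_TEACHERS)]

def yearly_estimate(true_effect):
    # One year's value-added estimate: true effect plus sampling noise.
    return true_effect + random.gauss(0, NOISE_SD)

one_year = [yearly_estimate(t) for t in true_effects]
three_year = [sum(yearly_estimate(t) for _ in range(3)) / 3
              for t in true_effects]

print("corr(true effect, 1-year estimate):", round(corr(true_effects, one_year), 2))
print("corr(true effect, 3-year average): ", round(corr(true_effects, three_year), 2))
```

Under these assumptions the three-year average correlates noticeably better with the true effect than a single year does, which is the intuition behind requiring multiple years of data before making high-stakes judgments.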


  1. Claus von Zastrow

     November 6, 2009

    Thanks for the very useful summary. What are your thoughts about how this affects the arguments for or against the current policy incentives for performance pay?

  2. Liam Goldrick

     November 6, 2009

    Well, one of course should never base policy on a single paper or analysis, but my takeaway would be that student achievement could factor into a teacher pay system, as long as multiple years of data are considered. The Tyler/Kane work bodes well for classroom observation instruments which, if nothing else, will need to be used to evaluate, assess, and guide support for teachers in those grades and subjects (most of them) that are not covered by standardized tests. Of course, all of this raises the question of whether test scores and student learning are the only outcomes we're interested in from the education system.

  3. Bethany T.

     November 24, 2009

    You said they talked about bias existing in value-added models due to student sorting, but that it can be overcome over time. Are there certain ways in which bias can be overcome in these situations?


© 2013 The EduOptimists. All Rights Reserved.