Jonathan Kantrowitz writes a blog for the Connecticut Post and the Hearst Media Group. You can count on his articles to be astute, thoughtful and extremely informative. If you haven’t bookmarked his blog, you should.
Kantrowitz’s latest piece examines the significant problems with the faulty teacher evaluation programs being pushed by the corporate education reform industry and its political allies.
Unfortunately, Connecticut Governor Dannel “Dan” Malloy has been one of the nation’s leading proponents of these inappropriate and inaccurate teacher evaluation systems.
As Jonathan Kantrowitz explains,
This week, two new studies have reported on significant flaws at the heart of teacher evaluations:
Here’s a summary of the first report:
“There are very weak associations of content alignment with student achievement gains and no associations with the composite measure of effective teaching…the tests used for calculating VAM are not particularly able to detect differences in the content or quality of classroom instruction. Empirical and conceptual work illustrates that these kind of assessments tend to be, at best, weakly sensitive to carefully measured indicators of instructional content or quality…
At a minimum, these results suggest it may be fruitless for teachers to use state test VAMs to inform adjustments to their instruction. Furthermore, this interpretation raises the question— If VAMs are not meaningfully associated with either the content or quality of instruction, what are they measuring?”
Key findings and resulting recommendations from the second report include:
* Under current teacher evaluation systems, it is hard for a teacher who doesn’t have top students to get a top rating. Teachers with students with higher incoming achievement levels receive classroom observation scores that are higher on average than those received by teachers whose incoming students are at lower achievement levels, and districts do not have processes in place to address this bias. Adjusting teacher observation scores based on student demographics is a straightforward fix to this problem. Such an adjustment for the makeup of the class is already factored into teachers’ value-added scores; it should be factored into classroom observation scores as well.
* The reliability of both value-added measures and demographic-adjusted teacher evaluation scores is dependent on sample size, such that these measures will be less reliable and valid when calculated in small districts than in large districts. Thus, states should provide prediction weights based on statewide data for individual districts to use when calculating teacher evaluation scores.
* Observations conducted by outside observers are more valid than observations conducted by school administrators. At least one observation of a teacher each year should be conducted by a trained observer from outside the teacher’s school who does not have substantial prior knowledge of the teacher being observed.
* The inclusion of a school value-added component in teachers’ evaluation scores negatively impacts good teachers in bad schools and positively impacts bad teachers in good schools. This measure should be eliminated or reduced to a low weight in teacher evaluation systems.
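To make the report’s first recommendation concrete, here is a minimal sketch of what adjusting observation scores for incoming student achievement could look like. The data, the single-predictor linear model, and the residual-based adjustment are all illustrative assumptions, not details taken from the report itself.

```python
# Illustrative sketch only: one way a district might adjust classroom
# observation scores for the incoming achievement level of a teacher's
# class. All numbers below are hypothetical.

from statistics import mean

# Hypothetical data: each teacher's raw observation score and the
# average incoming achievement of that teacher's class (standardized).
raw_scores = [3.8, 3.2, 2.9, 3.5, 2.6]
incoming_achievement = [0.9, 0.4, -0.2, 0.6, -0.8]

def fit_slope_intercept(x, y):
    """Ordinary least squares fit for a single predictor."""
    mx, my = mean(x), mean(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

slope, intercept = fit_slope_intercept(incoming_achievement, raw_scores)

# Adjusted score = overall mean + residual, i.e. the part of the
# observation score not explained by the class's incoming achievement.
grand_mean = mean(raw_scores)
adjusted = [grand_mean + (s - (intercept + slope * a))
            for s, a in zip(raw_scores, incoming_achievement)]

for raw, adj in zip(raw_scores, adjusted):
    print(f"raw {raw:.2f} -> adjusted {adj:.2f}")
```

In this toy example the teacher with the lowest-achieving incoming class gets a higher adjusted score than raw score, which is the direction of correction the report calls for; teachers’ value-added scores already incorporate a comparable class-composition adjustment.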
The new reports provide ample evidence that the Connecticut teacher evaluation system Governor Malloy pushed through needs to be repealed and replaced with a far more appropriate program, one that will ensure that our students have the most effective teachers.
By clicking on the following links, you can read additional articles that Jonathan Kantrowitz has written about the teacher evaluation issue: