When you take a break from digging out from the “Great Blizzard of 2013,” I strongly urge you to read Hearst Newspapers columnist and fellow blogger Wendy Lecker’s latest commentary piece, entitled “Connecticut’s teacher evaluation plan – even worse than we thought.”
Wendy’s article is the clearest description to date of the dishonest, disastrous and counter-productive evaluation system that Governor Malloy, his Commissioner of Education Stefan Pryor, and the State Board of Education are trying to foist upon the teachers, administrators, students and parents who are part of Connecticut’s public education system.
The waste of time, energy and money associated with this abomination is staggering.
Even in a time of unlimited public resources, the program that Malloy and his Department of Education are pushing would be inappropriate, but now, as Connecticut continues to struggle through the greatest economic troubles of our times, their plan is nothing short of a criminal waste of taxpayer funds.
As Wendy Lecker writes:
“It turns out the state’s proposed teacher evaluation program is far worse than I originally believed it to be.
Connecticut’s plan involves using “indicators of student growth” to form 22.5 percent of an evaluation. For grades and subjects covered by the CMTs or CAPTs, teachers must use those scores as a basis for their evaluation.
In my previous columns, I wrongly assumed that Connecticut would use the unreliable “value-added” model (VAM) as the foundation of this 22.5 percent. However, it has come to light that Connecticut’s model is much worse.
The value-added model would be bad enough. VAM is a flawed attempt to isolate the teacher effect on a student’s test scores. We have all heard that a teacher is “the most important in-school influence on students.”
There is no denying that teachers have a profound influence on students’ ways of thinking, their emotional development and other crucial aspects of children’s intellectual growth that cannot be measured on standardized tests. However, those who trumpet this claim refer to a teacher’s influence on a student’s test scores.
But decades of evidence prove that out-of-school factors account for the vast majority of the variation in a student’s test scores. Even those claiming teachers’ outsize influence on test scores only attribute 7.5 percent to 8.5 percent of a test score to variation in teacher quality.
Therefore, VAM’s goal is to tease out that 8.5 percent. As I have previously shown, a large and growing body of evidence proves that VAM fails at this task. VAM ratings based on test scores have a 50 percent misclassification rate, varying with the test, the year, the class and even the statistical model used. It is dangerous to use this measure for even 22.5 percent of a rating because it is so unstable. Because it varies so wildly, the test-score-based rating will become the tipping point in most evaluations, despite its small percentage. Moreover, being a so-called hard number, it will inevitably be the main focus of evaluations.
Apparently, in thinking that state education officials would use VAM, I was giving them too much credit.
Connecticut is not using VAM. Instead, Connecticut is using something much worse: a “student growth” model.
Here is how it works. At the beginning of the year, a teacher in a subject covered by the CMTs or CAPTs chooses a goal. It can be that X percent of the class will move from proficiency to goal. Or, it can be that the average vertical-scale score of the class will increase by X percent. (Recall from an earlier column that vertical-scale scores basically only measure whether a child is a good test-taker.) Testing experts use statistical models to predict test-score increases. Teachers, I guess, are supposed to use their intuition — about children they have just met. Then, the teacher will be evaluated on whether she meets that goal.
Let us put aside the lunacy of having a teacher predict score increases and focus on Connecticut’s model. Unlike VAM, which tries and fails to isolate teacher effect, “student growth models” do not even attempt to isolate that 8 percent. There is no mechanism in Connecticut’s system that even tries to distinguish between all the factors affecting student test scores and the one factor upon which a teacher’s job will depend.
Lecker provides even more details in her latest commentary piece.
In the coming weeks we’ll dig even deeper into this absurd plan, but if you want a basic primer on how the education reformers are wasting our tax dollars, undermining the teaching profession and destroying our public schools, I urge you to start by reading – and then re-reading – Wendy Lecker’s great piece.
Wendy Lecker: Connecticut’s teacher evaluation plan – even worse than we thought