Perdido 03

Saturday, August 15, 2015

More Things Wrong With VAM

Bruce Lederman, arguing before a judge this week on behalf of his wife Sheri, said VAM was "irrational...a rating created in a black box that spit out predictions that compared his wife's students to 'avatar students.'"

A commenter notes there is even more wrong with VAM:

There are way more things wrong with VAM.
It is good news though that the judge is onto them.
VAM also doesn't allow a teacher to earn a good rating based on her own scores; teachers are rated on how other teachers scored.
For that reason, you can't know in advance what it is you have to do. The criteria your score is based on are shifting, not concrete.
VAM is also not transparent, so a teacher cannot understand in advance how they will be rated, as is required by law.
VAM is also purported to be a growth model; however, it is not capable of showing the kind of vertical growth that needs to be measured from year to year.
And let's not forget that the cut scores are decided AFTER the kids take the test, and the scale changes each year. Not very scientific, if you ask me.
It sounds like the Ledermans' case is going well. Finally the charade is being publicly exposed for what it really is: a convoluted SCAM.
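The commenter's point about relative ratings can be sketched in a few lines of Python. Everything here is invented for illustration (the scores, the percentile bands, and the rating labels are assumptions, not the actual NYSED formula): under a percentile-rank scheme, the same raw score can land a teacher in a different rating band depending entirely on how everyone else scored that year.

```python
# Illustrative sketch only: hypothetical numbers, not the state's actual model.

def percentile_rank(score, all_scores):
    """Percentage of scores in the pool at or below `score`."""
    below = sum(1 for s in all_scores if s <= score)
    return 100.0 * below / len(all_scores)

def rating(pct):
    """Hypothetical cut bands applied to the percentile, not to the raw score."""
    if pct >= 75: return "Highly Effective"
    if pct >= 50: return "Effective"
    if pct >= 25: return "Developing"
    return "Ineffective"

my_score = 70  # same teacher, same raw score, two different years

year_1_peers = [55, 60, 62, 65, 68, 70, 72]   # weaker cohort of peers
year_2_peers = [70, 75, 78, 80, 82, 85, 90]   # stronger cohort of peers

print(rating(percentile_rank(my_score, year_1_peers)))  # Highly Effective
print(rating(percentile_rank(my_score, year_2_peers)))  # Ineffective
```

Because the bands attach to the percentile rather than the raw score, there is no fixed target a teacher could aim for in advance, which is exactly the "shifting criteria" complaint.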

Indeed, there is so much wrong with VAM as an evaluation model.

Its accountability moment has come.

We shall see if it is rated "ineffective" and thrown onto the trash heap where it belongs.


  1. Well, VAM is wrong, but not for many of these reasons. The scores and growth are relative to student performance: relative not teacher-to-teacher, but to the performance of students who learned with other teachers. That's fine. Aside from the obvious reasons (for not using test scores at all), VAM is wrong because no one knows the student categories in the state's stupid formula, and because we don't know which category the state (or the district, in most HS teachers' case) placed each of our students in. It's wrong because we have no data saying which category our Student A has been placed in, so we don't know who Student A is being compared with across the state. This means there is no way to confirm that our scores as teachers, or our student growth scores, are honest. It's wrong because, when you get right down to it, it's too damn secretive to help anyone.
    I don't fear change like some folks out there. There is a place for VAM, maybe (maybe) even on a teacher eval. The relative math approach they take in calculating it is fine in principle. The cut scores, and the student and teacher growth percentiles, are all unknowns until after the exam. That's fine with me. But if our profession is based on improvement, and improvement depends on feedback, then this use of VAM is a sham. The wait time is too long, the "compares" are too secretive, and it's used as a punishment for teachers, not as feedback.
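The secret "compares" this commenter describes can be made concrete with a small sketch. A student growth percentile ranks a student's current score only against "academic peers" with the same prior score; all the data below, and the `growth_percentile` helper itself, are invented for illustration, not the state's actual model. The point is that the same student gets a very different growth score depending on which peer group the state assigned, and without knowing that group, nothing can be verified.

```python
# Illustrative sketch only: invented data, not the state's actual formula.

def growth_percentile(student, cohort):
    """Percentile of `student`'s current score among peers with the same prior score."""
    peers = [s for s in cohort if s["prior"] == student["prior"]]
    below = sum(1 for s in peers if s["current"] <= student["current"])
    return 100.0 * below / len(peers)

student_a = {"prior": 300, "current": 310}

# Two hypothetical peer groups the state might have assigned Student A to:
group_1 = [{"prior": 300, "current": c} for c in (295, 300, 305, 310, 315)]
group_2 = [{"prior": 300, "current": c} for c in (310, 315, 320, 325, 330)]

print(growth_percentile(student_a, group_1))  # 80.0
print(growth_percentile(student_a, group_2))  # 20.0
```

If the peer-group assignment is never disclosed, a teacher looking at a growth score of 80 or 20 has no way to reproduce or challenge it, which is the "too secretive to help anyone" problem in a nutshell.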

    1. If the system were honest, they would be transparent about it. That they're not, and that NYSED twice tried to have the Lederman case dismissed on a "no harm, no foul" rule rather than show the data and defend it, shows you that they know it's indefensible.

    2. Exactly. This system of VAM is well beyond defense.