Perdido 03

Sunday, September 14, 2014

New York State Teacher Evaluations Are Rigged

Swapna Venugopal Ramaswamy has a piece at LoHud on the absurdity of the New York State APPR teacher evaluation system, showing how the classroom observation component can be rigged by district leaders.

For instance, Scarsdale has decided that no teacher will be rated "highly effective" in the classroom observation component.

Pleasantville, on the other hand, gave out "highly effective" ratings to 99 percent of its teachers in the classroom observation component.

The Pleasantville superintendent defended that decision:

Pleasantville schools Superintendent Mary Fox-Alter defended her district's classroom observation scores, which use the Danielson model — saying the state's "flawed" model had forced districts to scale or bump up the scores so "effective" teachers don't end up with an overall rating of "developing."

"It is possible under the HEDI scoring band (which categorizes teachers as "highly effective," "effective," "developing" and "ineffective") to be rated effective in all three areas and yet end up as developing," Fox-Alter said, adding that she understood Danielson's concern.

"Danielson has said that teachers should live in "effective" and only visit "highly effective'," said Fox-Alter, president of the Southern Westchester Chief School Administrators.

But adhering to that philosophy might put her teachers in jeopardy, she said.

"The state's model is punitive in spirit," she said. "We had to scale the classroom observation piece to adjust for an inaccurate and flawed formula. I don't know a single district that hasn't done this in one way, shape or form."

And indeed, teachers in districts or schools with many high-needs students got the short end of the stick on the testing components of the evaluation system:

Mount Vernon, which rates 60 percent of its teachers as "highly effective," has 42 percent of them in the "ineffective" bucket when it comes to state growth measures. It is the worst-performing district under that rubric in Westchester, Rockland and Putnam counties. The second highest number of teachers rated as "ineffective" in the three counties by state standards is Yonkers, at 11 percent.

When evaluations are skewed toward the top end of the range and student performance is not, it should "raise a red flag," said Danielson. However, she also conceded that teachers often face "formidable challenges" in high-needs districts.

The problem, in the end, is that the evaluation system in New York State is complete horseshit.

Charlotte Danielson, she of the little teaching experience but much consulting experience, has put together an observation rubric that quantifies every human interaction and teacher/student behavior in the classroom.

That rubric, which is supposed to be "objective," is anything but: administrators can use it to reward favorites and punish teachers they want to punish, just as they could under the old evaluation system.

When Scarsdale decides that no teacher will ever get "highly effective" on the classroom observation component, while Pleasantville hands out 99% "highly effectives" on that same component because it's worried its teachers will be harmed by the state testing component (something that does happen to teachers who work with high-needs students), it's clear that this "objective" teacher evaluation system is not objective in the least.

Charlotte Danielson can claim her observation rubric is objective all she wants. In the real world, administrators will do whatever they want to do on the observations no matter what is on that rubric, they'll back those decisions up with "documentation," and the results will be rigged the way they want them.

In Scarsdale, that means no one is "highly effective" in the classroom observation component.

In Pleasantville, that means everybody is "highly effective" in the classroom observation component.

Yes, it's true, the final evaluations were more varied than that once the state and local testing components were added in, but how valid is any of this when district leaders go in with a set idea of what teachers' ratings are going to be before the observations and evaluations have even begun?

It's all rigged, despite Governor Cuomo's claim that this is an "objective" system, a "state-of-the-art evaluation system," that will show which teachers are excellent and which need to be dismissed.

The thing that needs to be dismissed is the evaluation system, along with the governor and the merry men and women in reform in Albany who created it.

11 comments:

  1. Hey, Mulgrew, what was that about the new evaluations providing "objectivity?"

  2. At my high school in Queens, NY, most of the teachers in the history department were rated "effective" on the 60% observation component. However, once the two 20% testing components were averaged in, all of the evaluations, including those that were "highly effective" and "developing," were overwhelmingly lowered. This is a system that can be used for unethical purposes, and tweaked for reward and punishment.

  3. WE NEED to ORGANIZE FOR A CLASS ACTION LAWSUIT.

    DAMAGES FROM DOE, UFT and PEARSON.

  4. CAPRICIOUS AND ARBITRARY EVALUATIONS. PART OF OUR CONTRACT, WHICH WAS NEGOTIATED IN EXCHANGE FOR DELAYED RETROACTIVE PAYMENTS. DAMAGES ARE DUE TO FRAUD AND MISREPRESENTATION. THE CONTRACT SHOULD BE INVALIDATED. WHAT WAS THE ARBITRATOR'S DECISION FOR A FAIR CONTRACT?

  5. Did you know that you can be rated ineffective somewhere in Domain 4 if your administrator does not like your thoughts on just about anything? Also, I have personally been told by a trainer from the Danielson Group that if I had to choose between advocating for a student or complying with district policies that are hurting that student, it's always best to say screw the kid and do what the district says. How's that for highly effective?

  6. But..but...VAM was supposed to level the playing field so that teachers who choose to teach at-risk kids can show their growth against other high-achieving students!! But..but..their science was supposed to work!

  7. This is a surprise????

    Yawn.....

  8. Thanks for the article. I agree that a classroom evaluation is really important. It will help your teachers to know how to improve and better their teaching skills. I will have to make sure that the teachers who teach my kids get regular classroom evaluations.

    http://www.observe4success.com

  9. Scarsdale awarded every teacher 60 points in protest of the evaluations.

  10. WARNING TO TEACHERS UNAWARE: I spent my 15 years' retirement to teach in my home state. I received a welcome home with the Charlotte Danielson Group evaluations as a "new teacher." I had been a teacher leader with exceptional evaluations for 15 years. I had experience teaching special ed students and was accepted to teach that population. However, my classes were rounded out with students from the alternative school and other troubled-behavior students. One of my evaluators traveled from school to school and did not know my students. The other gave me better marks. When a behaviorist was called in to assist with my "classroom management" (which constantly counted against me), the behaviorist said she'd known these kids from birth and the class should never have been put together. She could offer no advice, so I was "dinged" on the Danielson eval through the end of the year. There was a gifted group next to me. My evaluator gave the better marks to that teacher. I was, not unlike Dr. Lederman, considered ineffective. I missed being effective by 0.2.
    Special ed teachers should not have alternative and behavior-problem students in addition to physically challenged students. There should also be a domain for teacher response on the same evaluation page. I was not even permitted to attach an addendum response. The Danielson evaluation does not consider that educators should have freedom of speech to at least defend themselves, especially in the case of poor evaluators who evaluate while observing and who are not properly trained.

    Danielson evals are geared to getting rid of those "bad" teachers, but there are no bad teachers unless the exception is something draconian such as teaching under the influence or striking a child. To be just, I believe not only should teachers be permitted to respond to their evaluations on the same page, but there should be more encouragement to learn and grow. After all, why take out student loans on which to pay for years to come and jump through the Praxis hoops, etc. if you have a future being blackballed as a "bad" teacher? I believe that, like myself, the majority of educators are working a mission to which they have been called. As the years go by they become better as they learn; they don't suddenly become "bad."

    Finally, there is no consideration that a teacher may be assigned to a grade level new to her/him, especially if that educator is unfamiliar with the curriculum to be taught. Assignment of a mentor must be made very carefully and on a one-to-one basis to prevent personality clash, especially given the possibilities re the gifted population versus the behavior-problem population versus the special ed population, etc. The Danielson evaluation is far too rigorous and unforgiving, and too open to the loss of some of the nation's most qualified and gifted teacher leaders. It is another lesson that teaches how to dumb down our overall student population by hiring two inexperienced educators for the price of one highly qualified and experienced teacher. "You get what you pay for..."
