Monday, October 1, 2012

Even the American Enterprise Institute Gets The Problems With Many New Teacher Evaluation Systems

Huffington Post has a piece about an American Enterprise Institute report on post-RttT teacher evaluation systems, entitled "The Hangover: Thinking About The Unintended Consequences Of The Nation's Teacher Evaluation Binge."

The report says we need to beware the unintended consequences of these new systems:

  there is a prevailing sentiment that holding teachers accountable for their performance will more closely align teaching with norms in other professions. However, most professional fields rely on a combination of data and managerial judgment when conducting evaluations and making subsequent personnel decisions. This is in stark contrast to the teaching profession, in which new evaluation systems have aimed to eliminate subjective judgments entirely, instead focusing solely on student performance.

 According to the paper, the best protection against biased managerial judgment is to ensure that the managers themselves are also held accountable for performance. Furthermore, in designing value-added systems, policymakers should consider whether the elements they are adding move education away from or closer to professional norms in other fields.
...
The paper’s authors point out that poorly designed evaluation requirements could also hinder other innovative models. Some schools have begun to incorporate learning-based software in their classrooms and other blended learning models; these technologies vary in design, approach, costs and teacher role. Student groups in these models are more flexible and fluid, and students receive instruction and tutoring from a variety of teachers and programs. This makes it difficult or impossible to attribute student learning gains in a particular subject to a particular teacher, and complicates teacher evaluation systems that rely on linking teachers to their students’ academic results.


Yesterday, Carol Burris wrote about all the problems that APPR, the vaunted new teacher evaluation system Andrew Cuomo has hailed as "scientific" and "objective," is going to cause in New York State:

The shortcomings of evaluating teachers by test scores were apparent in the recent report of the American Institutes for Research (AIR), which developed the New York growth score model. AIR, in its BETA report, shows how as the percentage of students with disabilities and students of poverty in a class or school increases, the average teacher or principal growth score decreases. In short, the larger the share of such students, the more the teacher and principal are disadvantaged by the model. I predict that when the state results are made public, you will see a disproportionate number of teachers of students with serious learning disabilities and teachers in schools with high levels of poverty labeled ineffective on scores. And that label will be unfair.

Likewise, in the model used this year, teachers who have students whose prior test scores were higher were advantaged, while teachers whose students have lower prior achievement were disadvantaged. This phenomenon, known as peer effects, has been observed in the literature since the 1980s. There is no control for peer effects in the model. We will see patterns of low scores for teachers of disadvantaged students. Over time, the students who need the best teachers and principals will see them leave their schools in order to escape the ‘ineffective’ label.

Perhaps the best critique of the model comes from AIR itself. The BETA report concludes that “the model selected to estimate growth scores for New York State represents a first effort to produce fair and accurate estimates of individual teacher and principal effectiveness based on a limited set of data” (p. 35). Not “our best attempt,” not even a “good first attempt,” but rather a “first effort” at fairness.

And yet, across the state, teachers and principals have received scores telling them that they are ineffective in producing student learning growth.
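
To make the peer-effects problem Burris describes concrete, here is a toy simulation in Python. To be clear, this is my own illustrative sketch, not AIR's actual growth model, and every number in it is invented. It just shows what happens when a growth score adjusts only for each student's own prior score and ignores classroom composition: teachers assigned to low-achieving classrooms get systematically lower scores even though, in the simulation, true teacher quality is identical across classrooms.

# Illustrative toy simulation only -- NOT the AIR/New York growth model.
# All numbers are invented. It demonstrates the "peer effects" problem:
# if classroom composition affects growth but the model adjusts only for
# each student's own prior score, teachers of low-achieving classes score worse.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, class_size = 200, 25

# True teacher effects are random and unrelated to classroom assignment.
teacher_effect = rng.normal(0, 2, n_teachers)

# First half of teachers serve low-prior classrooms, second half high-prior.
class_mean_prior = np.where(np.arange(n_teachers) < n_teachers // 2, 40.0, 60.0)

naive_scores = np.zeros(n_teachers)
for t in range(n_teachers):
    prior = rng.normal(class_mean_prior[t], 10, class_size)
    peer_effect = 0.3 * (class_mean_prior[t] - 50)   # hypothetical peer effect
    post = prior + 5 + teacher_effect[t] + peer_effect + rng.normal(0, 5, class_size)
    # "Growth score": average gain beyond the expected 5 points, with no
    # adjustment for classroom composition.
    naive_scores[t] = np.mean(post - prior - 5)

print("avg score, low-prior classrooms: ", round(naive_scores[:n_teachers // 2].mean(), 2))
print("avg score, high-prior classrooms:", round(naive_scores[n_teachers // 2:].mean(), 2))

In this made-up example, teachers of the low-prior classrooms average a score around -3 and teachers of the high-prior classrooms around +3, purely as an artifact of the uncontrolled peer effect. That is exactly the pattern of low scores for teachers of disadvantaged students that Burris predicts.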

There is nothing "scientific" or "objective" in the new system.  The VAM is an error-riddled mess.  The tests the VAM is based on are an error-riddled mess.  And the "ineffective" ratings that are handed out as a result of this system are a travesty.

Nonetheless, Cuomo and his deputies in education privatization, NYSED Commissioner John King and Regents Chancellor Merryl Tisch, have declared this system to be not only sound but the best in the land.

When the American Enterprise Institute gets the problems with VAM and overly prescriptive evaluation systems that supposed "Democrats" like Andrew Cuomo do NOT get, you know the world is turned upside down.

Or perhaps not. 

Maybe Andrew Cuomo, like so many other Dems these days, is beholden to the interests of the privatizers and doesn't particularly care about the harm he does to children, teachers or schools.

Remember, Cuomo sends his kids to private school, just like his education commissioner John King does.

Why should they care about the harm they cause to kids, teachers and schools - they won't have to deal with the consequences.

Unless we make them deal with the consequences - at the ballot box.

2 comments:

  1. And why does Cuomo have such high popularity ratings?

  2. You got me. He seems to me like an arrogant prick who only cares about himself and his wealthy cronies. And he's got a paranoia streak to rival Nixon's. But he and his people have been very careful to craft his image, so most people who pay only a little attention (or not even that) to the news seem to think he's a good guy.

    We'll see how long that lasts. He's got enemies in Albany and in the press, and when weakness is shown, they will pounce.
