New York’s first attempt to grade teachers on their progress was flawed in several key ways, a new study commissioned by the region’s superintendents says.
The state’s formula gave less credit to teachers serving disadvantaged students, judged some teachers on the performance of too few students, failed to measure key variables such as mobility and did not clearly signal how schools can assist teachers or students, the study found.
“Our fears were realized,” said Harrison Superintendent Louis Wool, who was president of the Lower Hudson Council of School Superintendents when the study was started in the spring. “The first round of assessments did not accurately measure the value of teachers whose students live in poverty, receive special education services or speak limited English. We are concerned that we have spent countless hours and millions and millions of dollars to produce results that are not comparable across the state and do not inform teacher practice or student learning.”
Thirty-five school districts from the Lower Hudson Valley and Long Island contributed anonymous data on 1,700 teachers and 46,800 students for the study.
The superintendents group chose the Value-Added Research Center at the University of Wisconsin-Madison to study New York’s first round of scores, released in 2012 for a limited number of school districts. The center researches how to measure the impact of teachers on student growth, a calculation known as the “value added,” and also markets its own models.
The study is a solid analysis that repeats concerns raised last year by the state’s own consultants, said Bruce Baker, a Rutgers University professor and expert on school finance.
“I give these superintendents credit for following through on this, raising significant concerns about potential biases in these measures,” he said. “For the state to continue to enforce these measures in the face of contradictory evidence is over-the-top ridiculous.”
Under New York’s evaluation system, 20 percent of teachers’ overall ratings is based on their impact on student progress. Sixty percent is based on classroom observations and the remaining 20 percent on locally chosen assessments.
Valhalla Superintendent Brenda Myers said the study confirmed that New York’s formula was flawed and did not provide information that would help districts improve teacher or student performance.
“After all the time and energy we put into this, we get a score on a teacher that doesn’t tell us anything,” she said. “Are we improving student learning? This is the question that keeps us up nights.”
The study also found that New York did not adequately weigh factors like poverty when measuring students’ progress.
“We find it more common for teachers of higher-achieving students to be classified as ‘Effective’ than other teachers,” the study said. “Similarly, teachers with a greater number of students in poverty tend to be classified as ‘Ineffective’ or ‘Developing’ more frequently than other teachers.”
Andrew Rice, a researcher who worked on the study, said New York was dealing with common challenges that arise when trying to measure teacher impact amid political pressures.
In other words, the evaluation system is rigged to ratchet up the number of "ineffective" teachers, especially in schools with large numbers of ELL or support service students and in districts with many students from low-income backgrounds.
The superintendents should challenge the state evaluation system in court and send this thing to the ignominious grave it so richly deserves.
And the NYSUT and UFT ought to be jumping in on this as well.
Make John King, Merryl Tisch and Andrew Cuomo defend the indefensible.
Alas, the unions seem to be on the side of King, Tisch, and Cuomo rather than the side of their members or the side of students and parents.