The formulas for measuring how much “value” a teacher adds to a student’s test scores are complex and often carry a sizable margin of error.
Earlier this month, the American Statistical Association warned that such formulas must be used with caution because teachers generally account for less than 15 percent — and in some studies, as little as 1 percent — of the variability in student test scores. Value-added models spit out precise-sounding numbers that purport to quantify a teacher’s impact on her students, but in fact the formulas “typically measure correlation, not causation,” the group concluded.
A recent study funded by the Education Department found that value-added measures may fluctuate significantly due to factors beyond the teachers’ control, including random events such as a dog barking loudly outside a classroom window, distracting students during their standardized test. A 2010 study, also funded by the Education Department, found the models misidentify as many as 50 percent of teachers — pegging them as average when they’re actually better or worse than their peers, or singling them out for praise or condemnation when they’re actually average.
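The mechanics behind those misclassification numbers are easy to see with a toy simulation. The sketch below is not the Education Department's model; the 10 percent variance share, the class size, and the tercile grouping are all illustrative assumptions, chosen only to show how a small teacher signal buried in student-level noise scrambles the rankings.

```python
import random
import statistics

random.seed(0)

N_TEACHERS = 2000
CLASS_SIZE = 25
# Illustrative assumption: teacher effect explains ~10% of score
# variance (within the "less than 15 percent" range the ASA cites);
# the rest is noise outside the teacher's control.
TEACHER_SD = 1.0
NOISE_SD = 3.0  # noise variance 9 vs. teacher variance 1 -> ~10% share

# Each teacher's true effect, and a "value-added" estimate formed by
# averaging noisy scores from one class of students.
true_effects = [random.gauss(0, TEACHER_SD) for _ in range(N_TEACHERS)]
estimates = [
    statistics.mean(t + random.gauss(0, NOISE_SD) for _ in range(CLASS_SIZE))
    for t in true_effects
]

def tercile_labels(values):
    """Label each value 0 (bottom third), 1 (middle), or 2 (top third)."""
    ranked = sorted(range(len(values)), key=lambda i: values[i])
    labels = [0] * len(values)
    for rank, i in enumerate(ranked):
        labels[i] = rank * 3 // len(values)
    return labels

true_labels = tercile_labels(true_effects)
est_labels = tercile_labels(estimates)

# Fraction of teachers placed in the wrong third by the noisy estimate.
rate = sum(t != e for t, e in zip(true_labels, est_labels)) / N_TEACHERS
print(f"misclassified: {rate:.0%}")
```

Even with these generous assumptions (one homogeneous class per teacher, purely random noise), a sizable fraction of teachers land in the wrong third of the ranking, which is the same kind of error the 2010 study documented.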
Yet another challenge: Calculating scores for educators who do not teach subjects or grades assessed with standardized exams. Nationally, some 70 percent of teachers — including most high school and early elementary teachers, plus art, music and physical education teachers — fall into that category.
Despite such complications, Muñoz made clear in a call with reporters on Thursday that Obama wants student test scores, or other measures of student growth, to figure heavily into states’ evaluations of teacher prep programs.
“This is something the president has a real sense of urgency about,” she said. “What happens in the classroom matters. It doesn’t just matter — it’s the whole ballgame.” So using student outcomes to evaluate teacher preparation programs “is really fundamental to making sure we’re successful,” Muñoz said. “We believe that’s a concept … whose time has come.”
Using student test scores and value-added measurements, an exercise riddled with error and uncertainty, is nonetheless "a concept … whose time has come."
That's the official Obama administration line.
This despite the study funded by Duncan's USDOE that found VAM misidentifies as many as 50% of the teachers it's used to evaluate.
And according to the Obama administration spokesperson, this use of student performance data to gauge teacher effectiveness is "something the president has a real sense of urgency about" despite the high margins of error and problems inherent in the system.
The next time you see somebody blame Arne Duncan for the educational malfeasance that emanates from the Obama administration, remember that Duncan serves the president and something this president has a real sense of urgency about is using junk science to evaluate teachers, schools and teacher preparation programs.
That's why we got the carrot of Race to the Top, dangling money for states that switched their evaluation systems to test score metrics, and that's why we got the stick of the NCLB waivers, which forced states to adopt teacher evaluation systems tied to test scores or lose their waivers and have all of their schools declared "failing" (as is happening right now to Washington State).
Because Obama - and the rich plutocrats behind him - want it that way.