Lately, student testing has become everyone’s favorite political punching bag. Just this week, New York State United Teachers, the state’s powerful teachers union, issued a statement decrying the proliferation of new tests and insisting on a moratorium for their use in teacher evaluations.
The union didn’t mention its dirty little secret: It’s a big part of the problem. Yet instead of owning up to its role or trying to fix the problem, the union is scapegoating state Education Commissioner John King, Gov. Cuomo and the state Board of Regents.
It’s a little like the Cookie Monster demanding to know who emptied the jar.
Here’s what happened: In 2010, New York joined more than a dozen other states in bringing its outdated teacher evaluation systems out of the Dark Ages. For nearly a century, teachers across the state had been simply deemed either “satisfactory” or “unsatisfactory” by their supervisors. Great teachers got no additional recognition, and almost nobody was ever deemed “unsatisfactory.” State law prohibited student learning outcomes from being considered.
Then, a coalition of New York leaders from both parties came together to pass legislation modernizing educator evaluations and embracing richer learning standards.
Despite the union’s longstanding desire to exclude any connection between student learning and teacher performance assessments, the law decreed that 40% of a teacher’s performance would rest on progress in student learning. Existing state tests would be a factor in the grades where they are given.
But seeking to preserve its considerable clout and ensure that as little weight as possible be given to statewide tests, NYSUT lobbyists sought and won a concession that half of that 40% would be negotiated locally at the bargaining table, where union leverage is often overwhelming.
Fast-forward to today. The union-driven provision has created a monster. To comply with collectively bargained contracts, districts are layering new tests upon tests. Some of them are useful; many are unnecessary.
Now that communities are noticing the trend and complaining that the new tests are taking too much time away from instruction, the union seems to have forgotten all about its role.
The 20%-20% split was done so that the unions could tell their members that the test-based rating component for teachers would not rest entirely on one high-stakes test score.
That was supposed to be a better alternative than 40% based on the state tests.
The authors are right to say that the 20%-20% rules are a big part of the overall testing problem.
And they are right to point out that the unions agreed to the system and indeed were in on its development every step of the way - something I have pointed out over and over and over here at Perdido Street School.
But if the authors think switching the test score-based component to 40% state tests will alleviate the problems with the evaluation system and the overtesting, they are sorely mistaken.
While it is true that the number of tests is a problem, the bigger problem is the insane emphasis on testing overall - the high stakes attached to the tests for students, schools, administrators and teachers.
You can get rid of all the "local assessments" and just give one state test a year in every subject to every student to evaluate the children, the schools, the administrators and the teachers.
But that will not solve the overtesting problem because the stakes will still be attached to the test scores and the insanity in the system will remain - the endless test prep, the anxiety over the scores, the unreliability of the VAM used to rate the teachers, etc.
In other words, the problems stem not only from the number of tests given but from the stakes attached to them - stakes that leave everybody from students and parents to teachers and administrators in fear if the numbers don't go up.
I should add that the authors fail to acknowledge that the Danielson rubric is another part of the overtesting problem too, since it mandates constant "assessment" - pre-assessments, formative assessments, interim assessments, summative assessments.
It's a nice try by Williams and Daly to blame the unions for the overtesting problem.
They're not wrong to point the finger at the leaderships of the unions and note how these guys helped develop this system (and indeed, still defend it).
But they are wrong to say that tweaking the number of tests given will fix the problem.