Tuesday, February 5, 2013

Gates Foundation MET Study Findings Found Flawed

The King of Data, Microsoft titan Bill Gates, wrote in the Wall Street Journal last week that if we just meticulously study every major problem and follow the data where it leads us, we can solve many vexing issues, from global warming to hunger to education.

But this week one of Gates' bigger data projects, the Measures of Effective Teaching study, was found wanting:

The recommendations in the final Measures of Effective Teaching work products may not be supported by the project's hard data, the National Education Policy Center contends in a review of the project.

The review, released last week, was written by Jesse Rothstein, of the University of California, Berkeley, and William Mathis, of the University of Colorado at Boulder. The NEPC has taken issue with several prior work products from the MET study.

In the critique, the scholars take aim at the study's randomization component, the basis for the MET report's headline finding that projections based on the three measures studied, which include "value added" test-score analysis, seemed to be quite accurate overall. But Rothstein and Mathis note that there was a high degree of noncompliance with the randomization, and also suggest that teachers of certain students appear more likely to have dropped out of the study. (Rothstein made a similar point in Education Week's story on the final MET results.)

The scholars also say that none of the three main measures studied—student surveys, value-added test-score growth, and observations of teachers—was particularly predictive of how teachers' students would do on the alternative, "conceptually demanding" tasks. That's potentially worrisome, since the tests being designed to measure the Common Core State Standards are purportedly more in line with such tasks. "There is evidently a dimension of effectiveness that affects the conceptually demanding tests that is not well captured by any of the measures examined by the MET project," the authors write.

The scholars also question one of the very premises of the MET study: its use of growth in student test scores as the baseline standard for comparing the impact of all the measures it tested.
"It is quite possible that test-score gains are misleading about teachers' effectiveness on other dimensions that may be equally or more important," the paper states.

Bruce Baker also found this Gates Foundation MET study wanting, noting:

I’ve written several previous posts explaining the absurdity of the general framework of this research which assumes that the “true indicator of teacher effectiveness” is the following year value-added score. That is, the validity of all other indicators of teacher effectiveness is measured by their correlation to the following year value added (as well as value-added when estimated to alternative tests – with less emphasis on this). Thus, the researchers find – to no freakin’ surprise – that prior year value added is, among all measures, the best predictor of itself a year later. Wow – that’s a revelation!
As a result, any weighting scheme must include a healthy dose of value-added. But, because their “strongest” predictor of itself analysis put too much weight on VAM to be politically palatable, they decided to balance the weighting by considering year to year reliability (regardless of validity).
The hypocrisy of their circular validity test is best revealed in this quote from the study:
Teaching is too complex for any single measure of performance to capture it accurately.
But apparently the validity of any/all other measures can be assessed by the correlation with a single measure (VAM itself)!?????
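
Baker's circularity point can be made concrete with a toy simulation (a sketch of my own, not anything from the MET study itself): if year-to-year value-added estimates share a stable teacher-level bias — say, from persistent student sorting — then value-added will out-predict next year's value-added even against an alternative measure that tracks true teaching quality just as well. The variable names and variance assumptions below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # simulated teachers

q = rng.normal(size=n)     # true teaching quality (unobserved)
bias = rng.normal(size=n)  # stable teacher-level bias in test-score growth
                           # (e.g., persistent student sorting), shared across years

vam_y1 = q + bias + rng.normal(size=n)  # year-1 value-added estimate
vam_y2 = q + bias + rng.normal(size=n)  # year-2 value-added estimate
obs = q + rng.normal(size=n)            # alternative measure: same signal, no shared bias

r_vam = np.corrcoef(vam_y1, vam_y2)[0, 1]  # ≈ 0.67
r_obs = np.corrcoef(obs, vam_y2)[0, 1]     # ≈ 0.41

print(f"VAM predicting next-year VAM:         r = {r_vam:.2f}")
print(f"Observation predicting next-year VAM: r = {r_obs:.2f}")
```

Both measures track true quality equally well here, yet value-added "validates" itself better simply because its errors repeat from year to year — exactly the circularity Baker describes when next-year VAM is treated as the gold standard.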

This was an ideological study with a political agenda. 

The Gates Foundation people go into the study assuming that test scores adequately measure teacher performance and that value-added models accurately isolate each teacher's contribution to those scores.

They then seek to prove those assumptions with the data and twist themselves into pretzels to do so.

The newspapers report the findings, even though the findings are horse hockey, and then we get politicians (or union leaders) pointing to these findings as sufficient justification for the new value-added or growth model teacher evaluation systems.

As always, the fix is in.

Findings first, data and research after.

Comments:

  1. This is called "lying with science". Put a veneer of science on it, science is equated with Truth in most people's minds, and voilà, the solution is obvious. All they have to do is fudge the data, and ignore any studies that contradict their falsified ones. Easy to do when you control the media.

    It's the exact same thing Gates (and Monsanto and the FDA) do with GMOs. The real science on GMOs is that they are useless and poisonous. People don't wake up until they are sick and/or their kid is as stupid as a rock after 16 years of public school and American entertainment. Hopefully it's not too late for them when they finally do wake up.

    The scientific method is only one way to view the world. It's most useful for mechanical and chemical things. It's certainly not the best way to raise a child. Ask B.F. Skinner. His poor Skinner box kid killed herself at 20! But American public education is still based on behaviorist principles!
    Check out Lionni, Paolo, "The Leipzig Connection," Heron Books, 1993. It's got the goods on the Rockefeller connection and public education and behaviorism and of course philanthropic capitalism!
    Peace!
