Monday, May 9, 2011

Los Angeles Times


It’s deja vu all over again with the Los Angeles Times and its value-added scores that supposedly tell us how effective the teachers are in the nation’s second-largest school system.
The newspaper has printed its new ratings of elementary school teachers in the Los Angeles Unified School District based on how well students did on standardized tests. The idea is to use a formula that the newspaper had devised to assess the “value” a teacher added to a student’s achievement.
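For readers curious what such a formula actually computes, here is a bare-bones sketch of the core value-added idea in Python: predict each student’s score from the prior year’s score, then credit the teacher with the average gap between actual and predicted results. The data and the single-predictor regression are invented for illustration; the Times’ actual model controlled for more factors.

# A minimal, illustrative value-added calculation (invented data,
# not the LA Times' model): regress current scores on prior scores,
# then average each teacher's residuals.

from statistics import mean

# Hypothetical records: (teacher, prior_year_score, current_score)
records = [
    ("Ms. A", 300, 330), ("Ms. A", 350, 355), ("Ms. A", 280, 320),
    ("Mr. B", 310, 305), ("Mr. B", 360, 350), ("Mr. B", 290, 295),
]

# Least-squares fit of current score on prior score, pooled
# across all students.
xs = [r[1] for r in records]
ys = [r[2] for r in records]
xb, yb = mean(xs), mean(ys)
slope = (sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
         / sum((x - xb) ** 2 for x in xs))
intercept = yb - slope * xb

# A teacher's "value added" is the average of actual minus
# predicted scores over that teacher's students.
by_teacher = {}
for teacher, prior, current in records:
    predicted = intercept + slope * prior
    by_teacher.setdefault(teacher, []).append(current - predicted)

for teacher, gaps in by_teacher.items():
    print(f"{teacher}: value-added = {mean(gaps):+.1f} points")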
The newspaper says its project takes into account the complexities of measuring teacher performance. But it essentially ignores some obvious points:
*Teachers aren’t the only factor that goes into how well a student does on a test;
*The tests aren’t devised to evaluate teachers;
*There are lots of questions about how well the tests measure real student learning;
*Lots of experts say the whole value-added enterprise is not reliable or valid enough to be used in a high-stakes fashion; and
*A prominent mathematician has laid out why value-added is suspect for the purposes of evaluating teachers (see this post).
I’d say that using a value-added score to label teachers effective or ineffective -- even in their ability to raise test scores -- is high-stakes.
This time the newspaper published value-added ratings for about 11,500 third- through fifth-grade teachers. That’s nearly twice the number of teachers rated in the Times’ first value-added outing last August, and this time the scores were calculated and displayed in a different manner.
Why? “In the interest of greater clarity and accuracy,” the paper said.
Isn’t it good to know that this new information is more accurate than the first?
The paper’s story on Saturday, announcing that it would publish its new data on Sunday, noted that it was providing the “only publication of such teacher performance data in the nation” -- as if it were doing something bold rather than injecting a newspaper’s editorial processes into a highly controversial enterprise.
It also noted that the district’s superintendent, John E. Deasy, and others had asked the paper not to publish its own value-added teacher ratings because the district had already calculated its own version for internal purposes, using a different model.
The public might get confused, the dissenters noted, because the results might be different.
Well, they got that right. In fact, the Times published a comparison of results from no fewer than four value-added models for each teacher in its database. The results? “On average, the results are very similar but, in specific cases, they can vary sharply,” the newspaper says.
Now that’s comforting.
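That sharp variation isn’t hard to reproduce in miniature. Below is a hypothetical comparison (invented numbers, not the Times’ four models) of two simple specifications applied to the same students: a raw gain-score model and the regression-residual model sketched above. The two can rank the same teachers in opposite order, because the regression adjusts for where students started and the gain score doesn’t.

# Two simple value-added specifications applied to the same
# hypothetical classroom data; "reasonable" models can disagree
# about which teacher added more value.

from statistics import mean

records = [  # (teacher, prior_score, current_score) -- invented data
    ("Ms. A", 250, 290), ("Ms. A", 260, 295),
    ("Mr. B", 380, 400), ("Mr. B", 390, 405),
]

# Model 1: raw gain score, ignoring where students started.
gains = {}
for t, prior, cur in records:
    gains.setdefault(t, []).append(cur - prior)

# Model 2: residual from a pooled regression of current score on
# prior score, which adjusts for students' starting points.
xs = [r[1] for r in records]
ys = [r[2] for r in records]
xb, yb = mean(xs), mean(ys)
b = (sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
     / sum((x - xb) ** 2 for x in xs))
a = yb - b * xb
resid = {}
for t, prior, cur in records:
    resid.setdefault(t, []).append(cur - (a + b * prior))

# With these numbers, Ms. A wins big under the gain model but
# comes out slightly behind under the regression model.
for t in gains:
    print(f"{t}: gain model = {mean(gains[t]):+.1f}, "
          f"regression model = {mean(resid[t]):+.1f}")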
The national obsession with using standardized test scores to grade schools, students and, increasingly, teachers is getting worse. But, hey, why worry? It’s only the future of public education at stake.
