
Wednesday, November 3, 2010

High school progress reports: more unreliable and unfair measures

Today the high school progress reports for 2009-2010 were released; here they are as an Excel spreadsheet.

Yet experts continue to have grave reservations about their reliability, since the DOE places far too much emphasis on test scores, and especially on the "progress" component, meaning one year's changes in test scores, credit accumulation, passing rates, etc., which have been found to be extremely erratic and statistically unreliable. For more on this, see this study and this previous posting.

We have also seen the extremely damaging effects of this high-stakes accountability scheme, which drives principals to increasingly "game" the system: encouraging their teachers to "scrub" the scores, raising the scores themselves to passing, and/or awarding credits to students who either didn't actually pass their courses or never took them at all. Two articles in the past few days have revealed this occurring at high schools in Queens and Manhattan; these practices are disturbingly widespread, and often occur with the DOE's knowledge.

Right now in the city's schools, under Bloomberg and Klein, there is a lawless atmosphere, with very little oversight. In this "wild west" environment, it is very easy to manipulate the data, especially since DOE officials do not appear to care how schools get results, as long as graduation rates and test scores improve.

Finally, it is interesting to note that the twenty high schools with the highest overall progress report scores had an average class size of only 23.3 students, with only one of them averaging more than 25.7 students per class. Meanwhile, the twenty high schools with the lowest overall scores had an average class size of 24.9, with four schools averaging 29 students per class or more.
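For readers who want to check these numbers themselves, here is a minimal sketch of the comparison in Python, run against the spreadsheet linked above. Note that the file name and the column names ("Overall Score," "Avg Class Size") are hypothetical stand-ins; the actual DOE spreadsheet uses its own headers.

    # Compare the mean class size of the 20 highest- and 20 lowest-scoring
    # high schools. File and column names are hypothetical stand-ins.
    import pandas as pd

    df = pd.read_excel("hs_progress_reports_2009_10.xls")
    ranked = df.sort_values("Overall Score", ascending=False)

    print("Top 20 mean class size:    %.1f" % ranked.head(20)["Avg Class Size"].mean())
    print("Bottom 20 mean class size: %.1f" % ranked.tail(20)["Avg Class Size"].mean())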


Since the controversial teacher data reports take class size into account as a factor limiting how much a teacher is expected to raise test scores, it would only be fair for the DOE to take this factor into account in the school progress reports as well, especially as many high schools are allowed to cap enrollment, and thus class size, at far lower levels than other high schools.

Sunday, May 16, 2010

John Allen Paulos on Tweed's ongoing innumeracy


Check out John Allen Paulos, author of "Innumeracy," in today's NY Times, on how the current obsession with data often gives us the wrong answers and steers us in the wrong direction:
Unless we know how things are counted, we don’t know if it’s wise to count on the numbers … Consider the plan to evaluate the progress of New York City public schools inaugurated by the city a few years ago. While several criteria were used, much of a school’s grade was determined by whether students’ performance on standardized state tests showed annual improvement. This approach risked putting too much weight on essentially random fluctuations and induced schools to focus primarily on the topics on the tests. It also meant that the better schools could receive mediocre grades because they were already performing well and had little room for improvement. Conversely, poor schools could receive high grades by improving just a bit.
We are now entering the fourth year of Tweed's inherently flawed school "progress reports," or grading system.

Each year the formula has been significantly revamped because of the absurdity of the previous year’s grades, including this year’s grade inflation, in which 84% of elementary and middle schools got "A’s". If the authors of this system were to receive a grade themselves, it would be an "F".

The school grades are based 85% on the previous year's state test scores, which themselves have been widely derided as unreliable. The formula used has also been shown to unfairly penalize schools with large numbers of high-need special education students, despite the DOE's claim that it fully controls for the student population.

And, as Paulos points out, they are essentially "random" as they are based on only one year's worth of test scores.
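To see why a single year's change is so noisy, consider a toy simulation below; this is an illustration of the statistical point, not the DOE's actual formula. If each school's measured score is a stable true level plus year-to-year noise, then one-year "gains" are mostly noise, and successive gains correlate at about -0.5: a school that "improves" one year tends to "decline" the next for purely statistical reasons.

    # Toy simulation of the randomness of one-year "progress" measures.
    # Assumes each school's true performance is stable; all movement is noise.
    import numpy as np

    rng = np.random.default_rng(0)
    true_level = rng.normal(75, 5, 1000)                        # 1000 schools
    scores = true_level[:, None] + rng.normal(0, 3, (1000, 3))  # 3 noisy years

    gain_1 = scores[:, 1] - scores[:, 0]   # "progress" from year 1 to year 2
    gain_2 = scores[:, 2] - scores[:, 1]   # "progress" from year 2 to year 3

    print("Correlation of successive gains: %.2f"
          % np.corrcoef(gain_1, gain_2)[0, 1])   # prints roughly -0.50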

Yet, inexplicably, the DOE refuses to conform to reason and alter the formula so that it is based on more than one year's data, despite the fact that Jim Liebman promised at their inception to base the grades on three years' worth of test scores.

Other troubling problems relate to the way in which the grades also rely in part on survey results from teachers and parents. Recent articles in the Daily News have shown how several principals have pushed teachers into giving them favorable reviews, with the threat that otherwise the DOE may close their schools based on low grades. Parents also commonly report the same sort of pressure, whether externally or internally imposed.

The school grading system also ignores critical but highly variable factors that differ widely among schools and yet are largely outside their control, such as class size or overcrowding, which can work against increases in achievement.

This omission is unjustifiable, given that the DOE's "teacher data reports," which Klein says should be used in tenure decisions, include class size as a key factor, showing that even Tweed educrats recognize that class size is an important contributor to teacher effectiveness, and that their claims otherwise are so much hogwash.

Yet the teacher data reports are themselves problematic, and their formula has never been publicly released. More than a year ago, I submitted a FOIL request for the formula, as well as for the identity of the "independent" panel that the DOE claimed had attested to its reliability, and I have still received nothing in return.

As the National Academy of Sciences has pointed out, in its comments to Secretary Duncan’s misbegotten grant program “Race to the Top”, no system for evaluating teachers on the basis of test scores has yet been established that is ready for prime time, given all the inherently complex and imponderable factors that go into test scores, particularly at the classroom level. Any attempt to implement such a program, they urged, should be carefully tested and independently vetted, because it could very well have unfair and damaging consequences, not just to teachers but to our kids as well.

We have already seen how art, science, music, and other untested subjects have been marginalized in our children's schools since this over-emphasis on high-stakes testing was imposed, with more weeks spent on test prep and fewer on actual learning.

All parents should closely watch the evolution of the recent agreement between the New York teachers union and the state, to base 25% of teacher evaluation on state test scores and another 15% on “locally selected measures of achievement that are rigorous and comparable across classrooms."
We must hope that whatever formula is used is independently and publicly vetted, does not result in even more unfair and unreliable measures of performance, and does not lead to more testing that will waste time and money, serving no purpose except to diminish the quality of education that our children receive.
You can comment on the US DOE blog about the fallibility of basing teacher evaluation on test scores here.

Tuesday, November 17, 2009

Grade inflation and lower standards at the DOE: what else is new?


Today, there were lots of articles about the inflated high school grades (the so-called "progress reports"), which turned out to be almost as inflated as those for elementary and middle schools: 75 percent of NYC high schools got As or Bs, and only one school got an F.
Yet as I pointed out to WNYC radio, more than half of our high schools are extremely overcrowded, with the largest class sizes in the state and among the lowest graduation rates anywhere. According to the Daily News, at more than half of the schools that received the highest scores, fewer than half of the kids graduate with a Regents diploma.

Moreover, there seems to be a double standard and favoritism at Tweed. The DOE says it will close down large high schools that did not do well, but the one high school that got an "F," Peace and Diversity, will be provided with more help and resources. Is that because it happens to be a small school, founded in 2004, and because the DOE has a vested interest in promoting the new small schools it helped start over our large high schools, the very schools that its own policies have helped destabilize?

See the response in the Post by Michael Mulgrew, the UFT head: "Mulgrew also bristled at a chart produced by the city showing that the smaller high schools opened under Bloomberg since 2002 were faring better than others, even as several of the newer schools rated D's and the lone F.
"I don't like when you try to draw distinctions when you're responsible for all of the schools but you have a vested interest in trying to tell people that the schools you created are doing well," he said.”

Unfortunately, none of the articles makes the connection between these grades, the threat of school closure, and all the cheating and grade-tampering scandals that have erupted in high schools in recent years. And none makes the point that the cut scores are arbitrarily decided upon by the administration: in essence, the Chancellor and his minions decide ahead of time exactly how many schools will fall into each grade category, easily providing the DOE with yet another way to congratulate itself while closing down certain schools to grab their real estate.
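To illustrate that last point with made-up numbers (this is not the DOE's actual procedure): if cut scores are set at fixed percentiles of the score distribution, the share of schools receiving each grade is determined before a single school is graded.

    # Toy illustration: percentile-based cut scores fix the grade
    # distribution in advance. All numbers here are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)
    scores = rng.normal(60, 15, 400)     # 400 hypothetical school scores

    # Decide in advance: 1% F, 9% D, 15% C, 25% B, 50% A.
    cuts = np.quantile(scores, [0.01, 0.10, 0.25, 0.50])
    grades = np.digitize(scores, cuts)   # 0=F, 1=D, 2=C, 3=B, 4=A

    for label, g in zip("FDCBA", range(5)):
        print(label, (grades == g).sum())  # counts set by the cuts, not merit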

Most interesting is the following finding, from the NY Times: “The school environment grades, which are based on attendance and results of student, parent and teacher surveys, and make up 15 percent of the grade, showed the steepest decline. This year, 55 high schools received a D or an F in school environment, compared with 12 last year.”

What’s going on here? Are the pressures of the high stakes accountability system tearing our high schools apart? If so, it wouldn’t be altogether surprising.