Tuesday, July 24, 2012

Report from 7/23 City Council Education Subcommittee on RIDE school classifications

Big huge thanks to Zach Mezera, reporter extraordinaire, who attended the City Council Education Subcommittee meeting last night and sent along this detailed, thoughtful report. What follows is Zach's excellent report verbatim - all thanks go to him.

--

Education Subcommittee Hearing — July 23, 2012

Relevant players included Councilmembers Zurier, Salvatore, Matos, and Principe; Superintendent Lusi; Chief Academic Officer Paula Shannon; and Director of Research, Planning and Accountability Marco Andrade.

As the Superintendent put it, the new classifications that came with the waiver from NCLB provide more flexibility, but are more difficult to explain to the public. I’ll do my best.

Over half of the District's schools are now "identified" for poor performance. This happened because RIDE shifted away from identifying schools solely by AYP and instead toward measuring multiple factors: percent of students scoring proficient, percent scoring with distinction, NECAP participation rate, achievement gap closing, progress toward 2017 targets, growth (K-8), improvement (9-12), and graduation rate.

The measurement of achievement gaps merits special attention. From what I understood, there are three relevant measures: 1) The average score of students with an IEP or LEP (limited English proficiency); 2) The average score of students on free or reduced lunch (and minority status?); and 3) A district-wide “performance reference group” of students without any of the prior qualifications. So take Hope High: average the scores of IEP/LEP students in the school, average the scores of FRL students in the school, do something with those two averages, and then compare that school average to the average score of non-poor, non-IEP students across the entire district, and voila, the “gap-closing” score. [Interesting to note here that RIDE refers to this “performance reference group” as “high-performing” when actually, the PRG has nothing to do with performance. It’s a tacit admission that not having an IEP and not having subsidized lunch pretty much suggests the student is scoring well.]
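
To make the arithmetic concrete, here's a rough sketch of the gap calculation as I understood it. How the two subgroup averages get combined, and the use of a simple difference against the reference group, are guesses on my part; the function names and structure below are made up for illustration, not RIDE's actual formula.

    # Rough sketch of the gap arithmetic as described above. The way the two
    # subgroup averages are combined, and the simple subtraction, are assumptions.

    def mean(scores):
        return sum(scores) / len(scores)

    def gap_score(iep_lep_scores, frl_scores, district_prg_scores):
        """One school's gap versus the district-wide performance reference
        group (students who are neither IEP/LEP nor on free/reduced lunch)."""
        # Assumption: the two subgroup averages are themselves averaged together.
        subgroup_avg = (mean(iep_lep_scores) + mean(frl_scores)) / 2
        return mean(district_prg_scores) - subgroup_avg

    def gap_closing(gap_this_year, gap_baseline):
        # "Gap closing" presumably means the gap shrank relative to a baseline year.
        return gap_baseline - gap_this_year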

Whereas the NCLB goal was to have all students (read: all schools) reach proficiency by 2014, the waiver sets more realistic goals for individual schools, measured against each school's baseline. AYP, as I understand it, is now a thing of the past, superseded by what Andrade called "a more comprehensive picture." The classifications rendered by this more comprehensive process then dictate whether the school or district must reform using options from a menu of "empirically proven reform strategies." Superintendent Lusi stressed that this system is better because, by measuring gaps, it holds the suburbs accountable as well. That said, not many suburban schools were identified in the lowest tiers anyway.

One school identified as "Focus" that probably caught people's attention was Nathan Bishop. The primary reason, it seems, was that only 9% scored proficient (in what?) on the most recent NECAP. The RIDE rules state that if a school's proficiency score is less than 10, it is automatically placed in Focus regardless of its other scores.
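
For what it's worth, the rule works out to something like the sketch below. Only the proficiency-below-10 override comes from the meeting; the composite cutoff and the "not identified" label are placeholders I made up, since RIDE's actual point system wasn't laid out.

    # Sketch of the classification override described above. Only the
    # "proficiency below 10 means automatic Focus" rule is from the meeting;
    # the composite cutoff here is a placeholder, not RIDE's real cut score.

    def classify(proficiency_points, composite_score, warning_cutoff=50):
        if proficiency_points < 10:
            return "Focus"          # automatic, regardless of other scores
        if composite_score < warning_cutoff:
            return "Warning"        # placeholder tier
        return "not identified"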

Some things are still unclear. 

1) If the reforms take almost a year to implement (this spring or later), and NECAP testing happens at the beginning of the following school year, the test won't capture the full impact of the reforms for at least two years. It's just another dumb side effect of having NECAPs at the beginning of the year.

2) How are the affluent white suburbs measuring achievement gaps? I asked this question but was still confused by the response. Rather than calculating a "low performing performance reference group" for places like Barrington or East Greenwich, as I thought it might, RIDE instead seems to have set the "quorum" for measurement arbitrarily low. In the past, schools (or districts?) needed 45 students with IEPs, LEPs, or FRL status to be scored for gaps; now that number seems to be as low as 10. That means gaps can be calculated for most schools, but they're based on a sample of 10 such students, which is likely too small to overcome natural statistical variation (see the rough simulation after this list). In other words, the rankings of achievement gaps across the state are calculated on the backs of a very few FRL, IEP, or LEP students in affluent districts.

3) How disruptive are these reforms going to be? We have to wait until the package is chosen. And because the district has only a mandated 60 days to choose a package, the community won't have many chances to engage in the process.

4) How is the baseline calculated? Is it NECAP scores over the last few years, or just the most recent full set of scores for the school, a la RI-CAN's criticized report cards? I hope it's the former, but the phrasing members used made it sound like the latter.

5) How are we going to pull this off? Credit to Councilmember Principe, who pointed out the core problem: PPSD receives only an additional $400,000 to enact reform packages at 15 additional schools! It's insane.
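
On the sample-size worry in item 2, here's the rough simulation I mentioned. The score distribution is invented purely for illustration; the point is just how much noisier a 10-student subgroup average is than a 45-student one.

    # Back-of-the-envelope illustration: subgroup averages computed from 10
    # students bounce around far more than averages computed from 45, so
    # year-to-year "gap" movement for small subgroups can be mostly noise.
    import random
    import statistics

    random.seed(0)

    def spread_of_subgroup_means(n_students, trials=5000):
        """Std. dev. of the subgroup average across many hypothetical cohorts."""
        means = []
        for _ in range(trials):
            # Pretend scaled scores are roughly normal; the numbers are made up.
            cohort = [random.gauss(840, 15) for _ in range(n_students)]
            means.append(statistics.mean(cohort))
        return statistics.stdev(means)

    print("spread with n=45:", round(spread_of_subgroup_means(45), 2))
    print("spread with n=10:", round(spread_of_subgroup_means(10), 2))
    # The n=10 spread is a bit more than twice the n=45 spread (it scales like
    # 1/sqrt(n)), so a small school's "gap" can swing on the luck of the draw.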

1 comment:

  1. The Bishop score refers to the school getting 9 points out of 30 for overall proficiency. And indeed, there are schools with lower overall scores that weren't identified. It isn't at all clear how RIDE came up with this. Goff Jr. High is over 5 points lower overall but has 11 in proficiency, so it's just in "Warning." Any system like this should be designed to avoid arbitrary cut scores, but RIDE seems to love the way it arbitrarily mixes things up.

    Regarding the gap scores, there are three reference groups in the state (it isn't by district): Urban, Urban Ring, and Suburban. Urban Ring and Suburban gaps are measured against progressively higher cut scores. Also, there are two separate gap scores: IEP/LEP and race/income. It seems like a lot of suburban schools hover around the cutoff of 20 for IEP/LEP. If they are just above it, they can get whacked for low special ed scores; just below, and their race/income scores count double (30%), which may be good or bad for them, depending.

    The achievement gap calculation amounts to grading on a curve for urban schools, which I don't really have a problem with given the systematic biases throughout the rest of the system. But I suspect it is really the main reason more urban schools come out looking better.
