
Parents Like Their Public Schools, No Matter How Much the Charter School Movement Tells Them Not To

VOICES

EDUCATION - Who would have imagined that, after two tumultuous years in which so much was written and said about how the COVID-19 pandemic had convinced American parents that public schools were “failing” institutions, the 2022-2023 school year would open with the news that “Americans’ ratings of their community’s public schools reached a new high dating back 48 years”?

That’s the stunning finding in the highly respected annual survey conducted by PDK.

The finding aligns with a history of survey results showing parents are generally pleased with the public schools their children attend. Even during the height of the pandemic, in 2020 and 2021, Gallup reported that parent satisfaction with local schools declined only slightly, and “more than seven in 10 parents” still expressed satisfaction.

Puzzling over this phenomenon, Chalkbeat national reporter Matt Barnum judged the widespread assumption of parent dissatisfaction with local public schools to be one among a number of “common, fear-inducing claims about the state of American schooling [that] are inaccurate or unproven.” He concluded, “It’s not entirely clear what’s going on.”

In an attempt to explain what’s going on, education historian Jack Schneider noted that while most parents rate the schools their own children attend highly, with 70 percent assigning their schools a grade of A or B, a similar percentage gives schools in general a C or D.

In considering what might be causing this “perception gap,” Schneider argued, “One obvious factor is the rise of a national politics of education.”

Indeed, as Schneider explained, prominent Republicans have made public education a political whipping boy going back at least to the presidency of Ronald Reagan. Anti-public education rhetoric coming from the right has only grown more intense in recent years.

Betsy DeVos, former President Donald Trump’s Secretary of Education, became infamous, in part, for making disparaging comments about public schools and for having once called public schools “a dead end.”

Since DeVos’s tenure, Republican criticism of public education has escalated from caustic commentary to shrill calls for ending the public system altogether.

As Amanda Marcotte reported for Salon, the conservative Fox News network has for years engaged in a campaign to convince viewers that public schools are “a scary place turning their grandkids into self-loathing sexual perverts,” all for the purpose of rolling out its ultimate message that, “It’s time to end public education entirely.”

While “conservatives have long sought to undermine public education” and “strip the public treasury bare with private school vouchers,” wrote Matt Gertz at Media Matters for America, “Fox News hosts have begun calling for the wholesale destruction of the K-12 public school system.”

“Republicans don’t want to reform public education. They want to end it,” read the headline of an article by Kathryn Joyce on how recent education policies enacted by Florida Governor Ron DeSantis, including crackdowns on public school social studies curriculum and expansions of the state’s voucher program, are “a naked attack on the very existence of public schools” that is “piloting a new education ideology for Republicans.”

So when one of the two major political parties conducts a decades-long, scorched-earth campaign to disparage public schools, and even calls for their destruction, it’s little wonder that a large share of parents hold negative perceptions of schools in general, regardless of what their own experiences have been.

But Schneider went on to explain that, “The biggest factor shaping the perception divide, however, may be data.” By “data,” Schneider meant the readily available and widely publicized standardized achievement test scores produced by the annual tests the federal government has mandated since the passage of No Child Left Behind in 2002. Those scores can leave people with little training in how to interpret them with a perception of school performance “that is incomplete and inaccurate,” according to Schneider.

Another form of “data” that Schneider didn’t mention, but that likely plays an outsize role in shaping public perceptions of schools, is the various school accountability systems that states now employ to rank and grade schools and districts. These school rating systems draw heavily on the test scores Schneider spotlighted and often combine them with other data, such as graduation and student suspension rates, into a single score or letter grade.

Critics of these accountability systems say they are “misleading,” “confusing,” or that they mostly reflect student demographics rather than genuine school performance.

Although it’s not yet clear how these ratings will be affected by COVID-19, at least one state, North Carolina, has reported that since resuming its ratings, which were suspended during the height of the pandemic, an additional 543 schools slid into D- or F-rated status.

Because school ratings are highly visible, they can exert a strong influence on public perceptions of schools, whether or not they represent accurate assessments of school performance. And a closer look at these systems shows how they work to discredit public schools, especially those serving low-income and minority students, and often help to further political agendas rather than guide good policy decisions.

More Than a Score

Although No Child Left Behind was rewritten in 2015, its replacement, the Every Student Succeeds Act (ESSA), still requires each state to establish a school accountability rating system that differentiates schools based on a number of performance indicators and to use this information to identify schools that need improvement.

While each state can design its own report card, these rating systems share a common feature: They collapse multiple school performance measures into a summative rating.

Some rating systems employ five-level scales, such as A-F grades or one to five stars. Others use a composite index scale (such as a 1-100 rating) or “descriptive” rankings that, for example, range from “exemplary school” to “lowest performing.” California’s rating system, which is unique because it displays multiple indicators on a “dashboard,” does not include an overall summative rating for each school, but it does include summative ratings for each of the indicators it tracks.

On the surface, summative ratings are attractive as a policy instrument because they appear to provide concise and easily understood measures of school quality.

However, collapsing various indicators into a composite score can obscure a great deal of information about variations in school performance. It can also hide a political agenda.
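To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of how a summative rating collapses several indicators into one grade. The indicator names, weights, and cut scores are invented for illustration only; they are not any state’s actual formula.

```python
# Hypothetical summative rating: the indicator names, weights, and
# cut scores below are invented for illustration only.

def composite_grade(indicators, weights, cutoffs):
    """Collapse several 0-100 indicators into a single letter grade."""
    score = sum(indicators[name] * weight for name, weight in weights.items())
    for letter, minimum in cutoffs:  # cutoffs listed from highest to lowest
        if score >= minimum:
            return letter, round(score, 1)
    return "F", round(score, 1)

weights = {"proficiency": 0.5, "growth": 0.3, "graduation": 0.2}
cutoffs = [("A", 90), ("B", 80), ("C", 70), ("D", 60)]

school = {"proficiency": 62, "growth": 88, "graduation": 95}
print(composite_grade(school, weights, cutoffs))  # ('C', 76.4)
```

In this sketch, a school with strong growth and graduation numbers but weaker proficiency scores is flattened into a single middling grade; none of that variation survives in the headline rating.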

Revise the System to Make It Even Tougher

Within the federal accountability framework, states are free to design their own rating systems and to change the accountability formulas that determine school ratings, even when those changes reflect no change in school performance. This leeway gives state policymakers huge loopholes to manipulate their ratings in ways that radically alter results.

For example, after Oklahoma initiated its first A-F school report card system in 2011, it tweaked the accountability formula in 2013, leading to a drastic change in school performance grades. According to a 2016 analysis, the share of C schools dropped from 21 percent in 2011-2012 to 5 percent in 2012-2013, and the share of F schools rose from 8 percent to 53 percent over the same period, even though school demographics remained similar and average math and reading achievement was stable.
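A simple, hypothetical simulation shows how this can happen. The weights and cut scores below are invented and bear no relation to Oklahoma’s actual formulas; the point is only that reweighting the same underlying scores and raising the failing cutoff can multiply the number of F schools without any change in performance.

```python
# Hypothetical illustration: the same schools, with the same scores,
# land in very different grade bands once the formula changes.
import random

random.seed(0)
schools = [{"proficiency": random.gauss(70, 10),
            "growth": random.gauss(70, 10)} for _ in range(500)]

def grade(school, proficiency_weight, failing_cutoff):
    """Weighted composite; anything below the cutoff is an F."""
    score = (proficiency_weight * school["proficiency"]
             + (1 - proficiency_weight) * school["growth"])
    return "F" if score < failing_cutoff else "pass"

f_before = sum(grade(s, proficiency_weight=0.5, failing_cutoff=55) == "F"
               for s in schools)
f_after = sum(grade(s, proficiency_weight=0.9, failing_cutoff=65) == "F"
              for s in schools)
print(f_before, f_after)  # far more F-rated schools under the revised formula
```

Nothing about the schools changes between the two runs; only the formula does.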

Why would Oklahoma lawmakers want to change the state’s school accountability rating to create more F-rated schools?

An answer to that question perhaps emerged in 2016 after the state made yet another change to its school rating formula that again drastically changed the schools’ A-F letter grades. As the Norman Transcript reported, the formula change resulted in 40 percent of public schools receiving an A or B grade, down from 57 percent in 2012, and nearly 30 percent getting a D or F grade, compared to 8 percent in 2012, according to an analysis by the Foundation for Excellence in Education (FEE).

When asked why the sudden change occurred, FEE policy analyst Christy Hovanetz said, “The flip-flop from top performers to under-performers reflects a ‘fairly rigorous’ grading system.” “[L]awmakers,” she said, “have continued to revise the system to make it even tougher.”

Making school accountability systems “tougher” can have negative consequences for schools, especially in Oklahoma, where schools with grades of D or F are subject to mandatory interventions that may include having their staffs reconfigured, having their management transferred to a charter school organization, or being shut down outright.

It’s telling that the policy analyst hailing the “tougher” rating system in Oklahoma works for FEE. That organization, which has since changed its name to Excel in Ed, was, according to the Center for Media and Democracy’s SourceWatch project, founded by former Florida Governor Jeb Bush in 2008, shortly after he left office, and was subsequently led by him for a number of years.

Excel in Ed’s philosophy on accountability is perhaps best summed up in a PowerPoint presentation that Hovanetz gave to North Carolina state lawmakers in 2019, in which she said, “Accountability itself does not improve student outcomes, but the data it produces should inspire action that will improve student outcomes.”

While Bush was governor, Florida became the first state to enact an A-F school grading system, according to the National Association of Secondary School Principals, and during his tenure, his administration made a series of changes to the rating formula that produced vastly different outcomes.

During the early years of Florida’s new school grading system, according to Matt Di Carlo of the Shanker Institute, the percentage of schools receiving A’s rose from 12 percent in 1999 to 60 percent in 2008, and there was a significant drop in the percentage of schools receiving grades of D or F.

However, Di Carlo found, “The grades changed in part because the [rating] criteria changed.”

Specifically, according to Di Carlo’s analysis, “The vast majority of these shifts occurred either between 1999 and 2000, or between 2001 and 2003. … This pattern is mostly a direct result of changes to the [rating] system in those years.”

While there may be many plausible explanations for why state officials in Oklahoma and Florida would change their states’ rating systems, it’s undeniable that the changes could have fed into political agendas.

When rating formulas were being rejiggered in Oklahoma, the governor at the time, Mary Fallin, was pushing for the state to enact education savings accounts, a form of school vouchers, and likely understood that toughening the state’s school rating system would make public schools look like worse choices for parents.

In Florida, it’s not hard to imagine that Bush had some motivation to tweak the state’s school ratings system to create more A-rated schools, understanding that, both for his ensuing consulting business with Excel in Ed and for his eventual run for president, it would be advantageous to tout Florida’s school reform effort, which he led, as “a model for the nation.”

Politics Versus Performance

There’s other evidence that state school rating systems often reflect personal and ideological preferences of state leaders.

In Indiana, in 2012, the Washington Post reported, State Superintendent Tony Bennett instructed staffers to take advantage of a loophole in the state’s system to alter the rating of a charter school founded by a campaign donor by eliminating the scores of some student groups. The change raised the school’s rating from a C to an A. Bennett, who had moved on to become Florida’s education commissioner, subsequently resigned.

States with a more liberal orientation, one study has shown, are more likely to incorporate indicators related to school quality and indicators of student success, such as growth measures, while states with a more conservative leaning maintain a focus on student test scores.

Another study examining the role of historical and political context in shaping assessment policy in Nebraska and Virginia found that the political culture in both states strongly influenced their assessment systems.

In Nebraska, a historical culture rooted in local action and collaboration influenced the design process, resulting in more local support for its implementation and delaying a shift to a state standardized assessment system in favor of local assessments.

In contrast, Virginia, with a tradition of centralization and top-down accountability, implemented a top-down policy model that emphasized standardized testing and constrained resources and opportunities for policy transformation at lower policy levels.

Questionable Educational Value

While school rating systems may be a practical means to a political end, their educational value is questionable.

Despite the proliferation of school rating systems, there is very little peer-reviewed, empirical research on their effects on student performance and school and teacher practices.

Among the studies that have been done, however, there’s evidence that a summative rating or score that collapses multiple school performance measures is an especially poor indicator of school quality and does not sort out schools with high and equitable achievement from schools with high average achievement and large achievement gaps.

For example, yet another study of Oklahoma’s A-F rating system examined whether the system supported the state’s policy agenda of closing the wide gap in test scores separating students enrolled in the free and reduced-price lunch program (a measure of poverty) and minority students from their better-off, white peers. The study found that “gaps moved in a direction opposite from what would be desired of an accountability system that measured achievement equity.” The report concluded, “A composite letter grade provides very little meaningful information about achievement differences.”

Summative ratings also tend to obscure the well-documented relationship between student achievement scores and demographic variables, most notably race and socioeconomic status.

An analysis of the Maryland five-star rating system, for instance, examined why no high-poverty schools earned a five-star rating; when the researchers adjusted the ratings to account for economic disadvantage, the number of high-poverty schools earning five stars increased.

An analysis of California’s school dashboard rating system found that, despite all its nuance, “schools can earn strong overall ratings even if subgroup performance is poor”—subgroup being a catchall phrase for specific populations of students, such as low-income, Black, and Hispanic students or students who have a learning disability or don’t speak English well.

This inability of summative school ratings to distinguish school performance from student demographic variances disproportionally harms schools serving marginalized children and inflates the quality of schools serving wealthy and white students.

If scores cannot sort out schools with high and equitable achievement from schools with high average achievement and large achievement gaps, they create inaccurate judgments about school quality and unfairly sanction some schools while not holding other schools accountable.
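A small, hypothetical example, again with invented numbers, makes the problem plain: an average-based rating gives identical marks to a school with high, equitable achievement and a school with the same average but a wide gap between student groups.

```python
# Hypothetical sketch: two invented schools, identical enrollment-weighted
# averages, very different equity profiles.

def summative(avg_scores, enrollments):
    """Enrollment-weighted average score across student subgroups."""
    total = sum(enrollments.values())
    return sum(avg_scores[g] * enrollments[g] for g in enrollments) / total

enrollments = {"low_income": 200, "not_low_income": 200}

equitable = {"low_income": 75, "not_low_income": 77}   # 2-point gap
wide_gap = {"low_income": 58, "not_low_income": 94}    # 36-point gap

print(summative(equitable, enrollments))  # 76.0
print(summative(wide_gap, enrollments))   # 76.0 -- same rating, very different schools
```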

Where Are the Democrats?

Because state school rating systems that use summative scores drawn from student test scores are unlikely to account for the variability in student learning experiences, policymakers need to design accountability systems that do a better job of reflecting differences among schools.

For instance, an important aspect of school performance that is missing from state accountability systems is the set of inputs critical to students’ learning experiences, including access to curriculum, diversity of textbooks, adequate staffing, and the availability of high-quality materials, equipment, technology, and facilities.

The expansion of rating systems to include inputs, often referred to as opportunity to learn standards, could provide a more nuanced appraisal of school performance.

However, since accountability is in part a political process, it is not clear that technical fixes can lead to systems that are more reliable, fairer, or more valid.

That raises the question of where Democrats are on this issue.

While Republicans’ education messaging has been on a slippery slope from disparagement to destruction, Democrats have generally remained stuck in a compromise—forged with Republicans during the enactment of No Child Left Behind, and renewed with ESSA—that support and funding for public education needs to be balanced with “accountability.”

Democrats’ calls for schools to be held accountable for “results,” based exclusively on student test scores and state report card ratings, have to a great extent contributed to the Republican campaign to continually disparage public schools. Even in the bluest states, public schools labeled as failing, by whatever the preferred moniker happens to be, convey to parents that the education system isn’t working and that alternative education providers, such as charter and private schools, need to be expanded.

But it’s not too late for Democrats to turn that dynamic around. A good start would be to call out Republicans for manipulating state school rating systems to advance their political agendas. Democrats could also propose fixes that would make these systems more useful for legitimate policy purposes. Or they could propose getting rid of them altogether. But the status quo on state school ratings has to go.

(This article was produced by Our Schools. Gail Sunderman, PhD, is co-founder and former director of the Maryland Equity Project at the University of Maryland, a research and policy center focused on access to educational opportunities in Maryland.)