If there’s one thing I’ve learned this summer about what parents read, it’s that, most of the time, it starts with the question “Where do I find the best…?”
Parents, I am right there with you. I love lists – the shorter the better – and ideally with pictures.
So, here’s a meta-exercise. Let’s rank school rankings.
Parents are busy people who care about getting their children the “best.” We want to find out where the “best” is, how much the “best” will cost, and how we can get the “best” at the best price. Parents want the data quickly, and they want the data pre-chewed. These are just a few simple reasons why lists work.
But not all school rankings are created equal. In fact, they’re extraordinarily unequal. As parents, we often put far too much faith in the idea that other people, or “experts,” know best about what’s best.
Here’s the thing: They don’t because they’re not usually experts.
The trouble is that in order to understand how these sources generate their lists, you need someone to give a darn about the study’s methodology. But, ugh…who wants to do that?
I am such a person.
It has forever bothered me that school rankings are based on one- or two-dimensional statistics: the number of Advanced Placement courses offered in a school (the Daily Beast’s pre-2014 list), average scores on state-mandated tests (Great Schools and School Digger), or a combination of similar metrics (the US News and World Report Best High Schools list).
These rankings are seriously limited because the methodologies are seriously flawed.
Curious Measures of School Quality
First, who determined that those factors make a school “the best”? Basing school ratings on Advanced Placement classes matters very little, and it tells parents even less about the quality of a school when their child doesn’t qualify for those classes. Even when a child does qualify, the odds are against earning a high enough score on the AP test to “place out” of first-year college coursework anyway.
Only 20% of high school graduates, or about 607,000 students, earned a “3” or higher on an AP exam in 2013. Remember, since America only graduates about 80% of its students from high school, this means that just one-fifth of that 80% (roughly 16% of all students) take and pass an AP exam.
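For readers who like to see the arithmetic, here is the cohort math spelled out, using the approximate figures above:

```python
# Rough cohort math from the AP statistics above (approximate figures).
graduation_rate = 0.80   # share of U.S. students who finish high school
ap_pass_rate = 0.20      # share of graduates scoring a 3+ on an AP exam

# One-fifth of the four-fifths who graduate:
share_of_all_students = graduation_rate * ap_pass_rate
print(f"{share_of_all_students:.0%} of all students pass an AP exam")  # 16%
```

In other words, a ranking built on AP courses speaks to a fairly small slice of the student population.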
This article in the Atlantic exposed the ratings machine as “meaningless” and issued a full-on takedown of AP, arguing that it signifies the antithesis of quality and rigor. The author notes these classes are typically about memorization and regurgitation rather than critical reading, reflection, analysis, or synthesis: the high-leverage skills of a quality education. (For a counterpoint on AP, check out this article by Jay Mathews in the Washington Post.)
At the same time, the Atlantic article credited these organizations for making slight but important adjustments to their measurements:
To their credit, US News and Newsweek/Daily Beast, which also use AP and IB courses as a measure, have made their rankings more sophisticated and reasonable by also adding other measures of a school’s quality, such as graduation rates and college-acceptance rates, and performance on state accountability tests and the proficiency rates of a school’s least advantaged students on those tests.
Despite these amendments, the measures are flawed for the simple reason that they don’t help parents make informed decisions about where to buy a home to start their family. Young families want to know where to find the best Kindergarten. These parents realize the first years in elementary school are crucial. High school is nearly a decade away.
Are We Just Looking at Rankings of the Richest Districts?
Second, these rankings identify “best” schools where very few middle class Americans can afford to buy a home.
If you’ve ever read any of my previous blog posts, or if you follow these rankings and school data at all, you know that school performance is typically in direct relationship to the wealth of the school district. Put another way, wealth and poverty matter.
Newsweek’s recently released high school rankings (again limited geographically to participating schools and limited to averages on standardized tests) now come in two versions. The editors clarify:
The question, “What are the best schools?” has two different answers depending on whether or not student poverty is taken into account. In an effort to address the effect of socioeconomic disadvantage on education, Newsweek is publishing two lists: our “absolute” list and … our “relative” list, which ranks schools based on performance while also controlling for student poverty rates.
If you’re someone who prioritizes having your children attend a public school with other children from diverse socioeconomic backgrounds, or if you’re like a friend of mine who doesn’t want to “raise a child who is afraid of poor people,” you might want to know how a school performs regardless of what’s in the savings accounts of the parents who send their children to school there.
What are the “Best Metrics” for Measuring “Best Schools?”
Finally, these rankings are flawed because they mask so many indicators of quality that families and educators should prioritize.
There are many metrics that can tell you how well a school is performing as well as, or better than, standardized test scores. For example, many parents might want to know how other parents review the school, how safe the schools in the district are, the budgeting track record of administrators, the quality of the teaching force, or the size and diversity of the district.
So, drumroll please. Here is the shortlist of school ratings that manage to do all that:
- See above
- Sorry, that’s probably the only one, but I’m still looking.
Why Niche Wins
Niche has five winning features, and I can express them in a quick list too:
- They employ expert statisticians who
- draw from a fantastically rich dataset (NCES)
- to identify, cluster and weight important indicators
- with very little over-emphasis on any single indicator, and
- provide tools to parents on their website to dig and play in the data.
I’ll say more about each of these features and what they mean in a future blog post, and since you’re likely scurrying off to explore Niche’s ratings and rankings, I won’t say very much about Niche’s methodology either. Suffice it to say that any group that explains Bayesian probability and its subsequent weighting decisions in its “How do we Compute our Rankings” section is an outlier among these groups in terms of attention to detail.
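If “Bayesian probability” sounds abstract, here is a quick illustration of one common Bayesian technique used in rating systems, shrinking a score toward the overall average when there are only a few data points. To be clear, this is my own generic sketch, not Niche’s actual formula, and the numbers are made up:

```python
def bayesian_average(school_mean, n_reviews, global_mean, prior_weight=10):
    """Shrink a school's average rating toward the overall mean.

    Schools with few reviews get pulled strongly toward the global mean;
    schools with many reviews keep a score close to their own average.
    prior_weight is how many "pseudo-reviews" the prior counts as.
    """
    return (prior_weight * global_mean + n_reviews * school_mean) / (
        prior_weight + n_reviews
    )

global_mean = 3.5  # hypothetical average rating across all schools
# A school rated 5.0 by only 2 parents...
print(round(bayesian_average(5.0, 2, global_mean), 2))    # 3.75
# ...versus a school rated 4.2 by 200 parents.
print(round(bayesian_average(4.2, 200, global_mean), 2))  # 4.17
```

The point: a handful of glowing reviews shouldn’t vault a school past one with hundreds of solid ones, and this kind of weighting is exactly the sort of detail a careful methodology section spells out.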
I could deconstruct any list of “the best,” but if you’re looking for a fast source with interesting and accessible data, Niche is the best of the best.