Marc Oestreich
Prior to joining Heartland, Marc was a graduate student at Purdue University studying political psychology and education policy. He enjoys defending liberty, writing about education and technology, music, designing websites, and is a fan of the NFL team in Indianapolis. Go Colts!
When policy analysts at interest groups and think tanks, and researchers in academia, set out to examine education policy, each must think themselves so clever in calling the result a “report card.” (Note: the author of this piece just co-authored a study titled “50 State Education Report Card.”) In defending one public education report card against another, I quickly realized that everyone, literally everyone, names their report on education a “report card.”
It seems like an easy enough concept to understand: send states’ education systems home with their bad marks and let the people sort it out. The problem is, when little Johnny comes home with a dozen report cards with grades ranging from A+ to F-, mom and dad don’t really know what to do. This is the status quo in education policy. When Florida, for instance, ranks 3rd overall in achievement and gets a “B+” for education reform from the American Legislative Exchange Council, gets a solid “D” from The Heartland Institute, and ranks 8th overall from Education Week (just to pick out 3), there is some sorting out to be done.
We need to understand first that these reports are not all measures of the same thing. They may all be making claims about the same thing, but they are measuring it completely differently. I think it was Mark Twain who famously said there are 3 types of lies: “Lies, damned lies, and statistics.” We can spend exhaustive amounts of time trying to understand the motivations behind each report card and looking for bias, or we can start asking the right questions.
The first question, then, becomes: “What things are the right things to measure?”
Well, we know that in order to compare states’ education systems, we need to account for extraneous variables. If we look at a state like Arkansas and one like Connecticut, it’s easy to say Connecticut is doing a better job educating. We can look at raw standardized scores from federal tests and see that Connecticut is 20+ points ahead. But let’s pull this apart a bit more.
- Connecticut has an average household income above $67,000, while Arkansas sits just below $37,000.
- Connecticut spends an average of $16,000 per student each year, while Arkansas spends just over $9,000.
- Arkansas has over 57% of its students on Title I (Free and Reduced Lunch), while Connecticut has less than 30%.
So, is it fair to assess public schools on this one standardized test score and compare them state to state? No.
The ALEC State Report Card tries to take this into account and minimize the interference from these extraneous factors. They use only Title I students in their populations, to control for income. While this certainly does limit the income range significantly, it doesn’t give a complete picture of the student population. If we look just at Connecticut here, we see that more than 70% of the public school students (in the ALEC report) are being ignored. ALEC also fails to consider spending in its analysis. This is not the ideal metric for understanding public school effectiveness.
Many report cards still use only policy inputs as a rubric for grading. Half of the ALEC report is a letter grade based solely on policy inputs with no tangible outputs. The card is careful, though, to separate the two rankings rather than conflate them. Jay Greene’s famous 1995 Educational Freedom Index (EFI) was also a measure of inputs rather than outputs. This methodology (using inputs) is not a fault, but rather a line of distinction. Still, it’s hard to call a report with only these input variables a “report card.” This would be akin to sending Johnny home with a “potential report.” In Desire 101 he’s got an A. In Potential he’s at a solid B. In Time Studying he’s at a C. These are interesting factors, but they aren’t results.
Few reports use any measure apart from state/nationwide standardized test scores or pure policy inputs, yet neither measure tells us much about the job being done by each state’s public education system. Cue: The Heartland Report.
The Heartland Institute’s Report Card uses a host of novel metrics to separate student from school and examine schools by their efficiency as well as their achievement. The report looks at the value added by 4 years of public schooling, that is, the gain students get from schooling over 4 years and 4 grades. This metric holds starting factors at bay and examines just the demonstrable gains from schooling at ages where schooling is the main influence on achievement.
The report then examines the efficiency with which these gains are made, the rate at which states graduate students, and the amount of bureaucracy employed in the system. This way of examining the value of school divided by its cost gives us the clearest picture of how a state is doing in public education. My next goal: to create a Laffer curve that finds the optimal spending level (point of diminishing returns) for each state.
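The gain-divided-by-cost logic can be sketched in a few lines. To be clear, the function names and every number below are invented for illustration; they are not figures or methodology taken from the Heartland report itself, just a minimal sketch of the idea of dividing score gains by cumulative spending.

```python
def value_added(score_start, score_end):
    """Gain in test score over the schooling window (e.g. grade 4 to grade 8)."""
    return score_end - score_start

def efficiency(gain, annual_spend_per_student, years=4):
    """Score points gained per $1,000 of cumulative per-student spending."""
    return gain / (annual_spend_per_student * years / 1000)

# Hypothetical states: (grade-4 score, grade-8 score, annual spending per student)
gain_a = value_added(220, 270)   # 50-point gain at $16,000/year
gain_b = value_added(210, 265)   # 55-point gain at $9,000/year

print(efficiency(gain_a, 16000))  # points per $1,000 for the high spender
print(efficiency(gain_b, 9000))   # points per $1,000 for the low spender
```

Under these made-up numbers, the lower-spending state shows the larger gain per dollar, which is exactly the kind of distinction a raw test-score ranking would miss.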
Don’t be fooled by those who call their analysis a report card. In my judgment there is only one true study that can be called a report card and that is the Heartland Report.