Gaming the Numbers
Many of the statistics policy makers use today are set in concrete but made of quicksand.
Last summer, the University of Maryland came out with a report on the number of enrollees in Maryland's Medicaid program. You wouldn't think that counting enrollees would require a major university study. But it turns out that there are enormous discrepancies between the figures used by the state and those derived by the federal government.
The U.S. Census Bureau's Current Population Survey indicates that 410,000 individuals in Maryland are covered by Medicaid. But if you ask the people in Maryland government, they'll tell you there are more than 700,000 beneficiaries. According to the UMd study, the state numbers are more accurate. People contacted for the census survey may not acknowledge they're on Medicaid for a variety of reasons, including the potential stigma attached.
The ramifications of this large difference--which is likely replicated in other states--go far beyond the program involved and raise significant policy questions. The number of Medicaid enrollees is also used to calculate the number of uninsured. So, although virtually every major report about the uninsured in America claims about 45 million people fall into that group, the number is almost certainly far lower.
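The arithmetic behind that claim can be sketched briefly. Uninsured counts are typically derived by subtracting reported coverage from the population, so each enrollee the survey misses gets tallied as uninsured instead. A minimal sketch, using the Maryland Medicaid figures from the article but a purely hypothetical uninsured total:

```python
# Illustrative sketch: a Medicaid undercount in a coverage survey
# inflates the uninsured estimate one-for-one.

survey_medicaid = 410_000   # Census CPS estimate for Maryland (from article)
state_medicaid = 700_000    # Maryland's own enrollment records (from article)

undercount = state_medicaid - survey_medicaid

# Hypothetical survey-based uninsured count for illustration only.
survey_uninsured = 800_000

# Enrollees the survey missed were likely counted as uninsured,
# so the corrected estimate is lower by the size of the undercount.
corrected_uninsured = survey_uninsured - undercount

print(undercount)           # 290,000 enrollees missing from the survey
print(corrected_uninsured)  # 510,000
```

The point is not the specific totals but the mechanism: any systematic undercount of public-program coverage reappears, at full size, as phantom uninsured.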
If this kind of misstated statistic were limited to the uninsured, that would be bad enough. But the foundations of many statistics used by officials to create and manage state programs are made of similar quicksand.
Take the poverty rate. It's used in a variety of ways by the states, including as a financial guideline for child support enforcement, determination of legal indigence for court purposes, eligibility for Medicaid, and inclusion in Head Start programs. Perhaps most important, it's the mainstay of reports issued by a whole array of advocacy groups. But pretty much everyone who has studied the formula for determining the poverty rate knows that it's flat-out wrong.
For one thing, it doesn't count the value of food stamps or Medicaid or earned income tax credits. What's more, there's only one national poverty rate, which suggests the obviously foolish notion that it costs the same to maintain a decent quality of life in Ripley, Mississippi, as it does in New York City.
If the flaws in the poverty rate are so widely known, then why hasn't it been adjusted? The problem is that "it's not clear whether there would be more or less poor," says Susan E. Mayer, dean of the Harris School of Public Policy Studies at the University of Chicago. As a result, states that pay for means-tested programs fear that their costs could go up if more people were defined as poor. On the flip side, other states could lose money in programs with federal matching dollars if fewer people were defined as poor. And advocacy groups worry that some of the people they represent might wind up losing benefits if, under a new calculus, they were no longer considered to be poor.
But Mayer maintains, sensibly enough, that advocacy groups are playing a dangerous game by trying to maintain a fictional poverty line in order to keep funding up for the time being. "It can backfire," she says. "Everyone wants to think the sky is falling, because that way maybe we'll prop up the sky. But if all the propping up doesn't seem to be working, and the sky continues to seem to be falling, in the end we'll just give up."
Unemployment figures--one of the most commonly used measures of the fiscal health of states and cities--are also problematic. Critics believe they are misleading for a simple reason: the longer a recession lasts, the more men and women give up on looking for work altogether, and the most commonly used unemployment rates count only those people who are actively in the job market. As a result, when a state's economy remains in the doldrums, the unemployment rate is artificially constrained. The reverse is true as well. When the economy comes back and more people think they have a chance at finding work, the improvement can be understated.
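The denominator effect described above can be sketched with a toy calculation (all numbers hypothetical). The standard rate divides active job seekers by the labor force, so when discouraged workers stop looking, both the numerator and the denominator shrink and the rate falls even though nobody found a job:

```python
# Toy illustration (hypothetical figures) of how discouraged workers
# leaving the labor force pull down the headline unemployment rate.

def unemployment_rate(seeking_work, employed):
    # The standard rate counts only active job seekers as unemployed,
    # and only the employed plus active seekers as the labor force.
    labor_force = employed + seeking_work
    return seeking_work / labor_force

employed = 900_000
seeking = 100_000
print(f"{unemployment_rate(seeking, employed):.1%}")  # 10.0%

# A long slump: 50,000 job seekers give up and stop looking entirely.
discouraged = 50_000
print(f"{unemployment_rate(seeking - discouraged, employed):.1%}")  # 5.3%
# The rate nearly halves even though not one new job was created.
```

The same mechanism runs in reverse during a recovery: as discouraged workers resume their job searches, the measured rate can stall or rise while the labor market is actually improving.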
Interestingly, the general public seems to have a high tolerance for flawed statistics. Back in the mid-1980s, Americans were consumed with fear that their children were likely to be kidnapped by a stranger at any moment. According to newspapers, politicians and milk cartons, about 1.5 million children were allegedly disappearing each year--some 50,000 of them kidnapped by strangers. These statistics held sway until the Denver Post published a Pulitzer Prize-winning story debunking them.
But think about this for a moment: How could anyone have ever believed those figures in the first place? Actual newspaper accounts of kidnapped children drew a huge amount of attention. But they were relatively rare. Why were reporters ignoring tens of thousands of kidnapped kids? Obviously, because they didn't exist.