
Precise Ignorance

Nathan Grawe
published Apr 23, 2014

Last week the Census Bureau made a very sad announcement. Starting in the fall of 2014 they will be using revised questions in the Current Population Survey concerning insurance coverage. The old questions, used for three decades, had been widely regarded by health experts as flawed because the wording created potential ambiguity that likely led to overstating the number of uninsured. The new questions were tested alongside the old last year, and it looks like the hypothesis was right: estimates of the uninsured were a couple of percentage points lower with the new questions than with the old (10.6% vs. 12.5%).

The problem is that the wording of the new questions is so different from the old that the Census Bureau does not believe the two to be comparable. And so their data on insurance coverage will include an irreconcilable break between 2013 and 2014.

This is a real blow to evidence-based policy. The Affordable Care Act is arguably the biggest federal policy reform of the last generation. But because we have chosen to change methodologies right at the point of implementation, it will be difficult to assess its effectiveness. The fact that the old questions were biased is, by comparison, a second-order concern. So long as the bias remains more or less the same over time, the resulting data would still have been useful in estimating the ACA's effect, because a constant bias cancels out when we compare rates before and after the law. Instead, a highly contentious empirical debate is likely to continue needlessly for want of comparable data.
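The arithmetic behind that point is simple enough to sketch. The numbers below are hypothetical (a true drop from 12.5% to 10.0%, with an assumed stable 2-point overstatement from the old question wording), chosen only to show that a constant bias drops out of a before/after difference:

```python
# Sketch: a constant survey bias cancels when estimating a change over time.
# All numbers are hypothetical, for illustration only.
TRUE_2013 = 12.5   # assumed true uninsured rate before the ACA (%)
TRUE_2014 = 10.0   # assumed true uninsured rate after the ACA (%)
BIAS = 2.0         # assumed stable overstatement from the old question wording

# What the old (biased) questions would have measured in each year:
old_2013 = TRUE_2013 + BIAS
old_2014 = TRUE_2014 + BIAS

# The estimated policy effect from the biased series...
effect_from_biased_series = old_2013 - old_2014
# ...equals the true effect, because the constant bias subtracts away.
true_effect = TRUE_2013 - TRUE_2014

print(effect_from_biased_series, true_effect)  # both 2.5
```

The same logic fails the moment the bias changes between the two years, which is exactly what switching question wordings in 2014 does.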

The underlying issue here has broader quantitative reasoning (QR) application. Often we are offered fancy methods that promise to eliminate a bias (at least in "large" samples) at the cost of larger standard errors. One example is the use of instrumental variables as a consistent alternative to ordinary least squares. I'm often left thinking I'd rather have my more-precise-but-biased estimate. If I understand the source of the bias, I can often back it out of the answer. By contrast, the imprecise-yet-accurate estimator often tells me very little. I'd rather be precisely wrong (in a predictable way) than vaguely right.
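A small simulation makes the tradeoff concrete. Here a "biased but precise" estimator and an "unbiased but noisy" one are both stand-ins drawn from normal distributions with made-up parameters (a true rate of 12.5%, a known 2-point bias, standard errors of 0.3 and 1.5 points); nothing here comes from the actual CPS. The point is that once the bias is known and backed out, the precise estimator wins on root-mean-squared error:

```python
import random
import statistics

random.seed(0)

TRUE = 12.5   # hypothetical true uninsured rate (%)
BIAS = 2.0    # assumed known, stable upward bias
REPS = 10_000

# Each replication yields one survey-style estimate.
biased   = [random.gauss(TRUE + BIAS, 0.3) for _ in range(REPS)]  # precise but biased
unbiased = [random.gauss(TRUE, 1.5)        for _ in range(REPS)]  # accurate but noisy

# "Back out" the known bias from the precise estimator.
corrected = [e - BIAS for e in biased]

def rmse(estimates):
    """Root-mean-squared error relative to the true value."""
    return statistics.fmean((e - TRUE) ** 2 for e in estimates) ** 0.5

print(f"biased, uncorrected: RMSE = {rmse(biased):.2f}")     # roughly 2.0
print(f"biased, corrected:   RMSE = {rmse(corrected):.2f}")  # roughly 0.3
print(f"unbiased, noisy:     RMSE = {rmse(unbiased):.2f}")   # roughly 1.5
```

The corrected estimator beats the unbiased one handily; the catch, of course, is that the correction only works when the bias is stable and understood.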

And this is exactly where we are with the new Census questions. I'll trust the survey experts that these are better questions, but the sad result will be that we can say precisely nothing.
