
NNN Blog

Insulting QR?


Posted: May 2 2014 by Nathan Grawe

This recent study by McDermott, Fowler, and Christakis has gotten a bit of news coverage. The authors conclude that divorce is contagious: they find that the probability of divorce increases 75% when someone in your social network gets a divorce of their own. Even if someone once removed in your network (a friend of a friend, for example) gets divorced, your odds of divorce go up 33%.

What I found most curious was this review in Psychology Today. The reviewer doesn't like the conclusions in McDermott et al. "There are covert but powerful social norms and expectations, however, there is simply too much information missing in this research for me to conclude that divorce is contagious. I think this statement insults the integrity of every person who finds themselves facing this incredibly difficult choice.... I hope the readers of these articles can see past the surface level findings. Divorce is not contagious. Divorce is not an epidemic and it is not a disease that is transmitted."

Like the reviewer, I was initially skeptical of the study. After all, many factors determine divorce, so it is prudent to ask, "Controlling for what?" Reading the study, I found some reason for concern: the primary control variables are just age and education. However, the data set is unusually rich. (Read the "Sample" section of the paper linked above.) One useful feature is that it distinguishes between person A naming person B as a relation and person B naming person A as a relation, which lets the authors estimate separate effects depending on the direction of the relationship. As the authors explain, if a confounding factor explained the effect, then it shouldn't matter which way the relationship flows. Yet they find a strong "contagion" effect when the divorcee is the one named as a relation; when the relationship runs only the other way, the effect size falls to one-third and is no longer statistically significant.

While I am not ready to declare the "law of divorce contagion," the above seems like evidence I have to deal with. Dismissing careful scientific research out of hand just because we don't like the results limits our potential to learn new things. And presumably that's why we are doing the research, right?

QR for Dying is Dying for QR


Posted: Apr 24 2014 by Nathan Grawe

I was recently pointed to a Bloomberg slideshow on how Americans die. The piece has high production value and includes a good bit of interesting data on the causes of death in America from 1968 to 2010. So far, so good.

However, the curious thing is that the text often contradicts the data. For instance, the first graph includes a caption that notes that mortality rates have dropped 17% over the period studied, but that "it looks like the progress stops in the mid-1990s." The data, on the other hand, report that the mortality rate fell almost 8% from 1995 through 2010. In other words, over 45% of the drop in mortality was experienced in the 36% of time following 1995. Far from gains "stopping" in the mid-1990s, we've apparently done better in that time.
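As a quick back-of-the-envelope check, here is a sketch of that arithmetic using only the figures quoted above (and treating the two percentage declines as directly comparable):

```python
# Rough reproduction of the arithmetic in the paragraph above.
# Inputs are the figures quoted in the post, not the underlying Bloomberg data.
total_drop = 0.17         # mortality decline, 1968-2010
post_1995_drop = 0.08     # mortality decline, 1995-2010 ("almost 8%")
years_total = 2010 - 1968
years_post_1995 = 2010 - 1995

print(f"share of the drop after 1995:   {post_1995_drop / total_drop:.0%}")    # ~47%
print(f"share of the period after 1995: {years_post_1995 / years_total:.0%}")  # ~36%
```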

That we've made progress in mortality since 1995 is all the more amazing given two facts. 1) Death is ultimately inevitable: as the easiest-to-prevent deaths are eliminated, those that remain become ever harder to avoid. 2) As the second slide shows, the population has aged considerably in recent years.

In another instance (slide 10), the authors note they are surprised by the fact that mortality among those ages 45-54 has been steady since 2000 because "cancer and heart disease...have become much less deadly over the years." This caption is immediately over a graph showing that deaths from these causes in this age group have actually increased over this time period.

Slide 13 makes a more common mistake. The authors assert that cars generally kill younger people, but fail to note that the younger age groups span 20 birth cohorts while the older groups span only 10. Little surprise, then, that there are about twice as many deaths in the younger groups in recent years.
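To see why the bin widths matter, here is a minimal sketch with invented numbers (purely hypothetical, not the Bloomberg figures): dividing each group's death count by the number of birth cohorts it spans puts the groups on a per-year-of-age footing.

```python
# Hypothetical illustration of the unequal-bin-width problem; the counts are made up.
groups = {
    "younger group (20 birth cohorts)": (2000, 20),  # (total deaths, cohorts spanned)
    "older group (10 birth cohorts)":   (1100, 10),
}
for name, (deaths, cohorts) in groups.items():
    print(f"{name}: {deaths} deaths in total, {deaths / cohorts:.0f} per year of age")
```

In this made-up example the younger group has nearly twice the total deaths, yet the per-year-of-age figure is actually lower; the raw comparison reflects the width of the bins as much as the danger of cars.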

I think we can understand this kind of problem as a form of confirmation bias. We all tend to see evidence for what we are looking for. Now, in this case, it's ultimately evident that the authors saw evidence for their priors that really wasn't there. But this is just a reflection of our flawed humanity–we are prone to seeing evidence for what we believe even when the evidence contradicts our views. What I appreciate about QR is that it at least provides a chance that others might more readily correct our error. Better yet, maybe we can learn to see our own errors and re-evaluate our priors.

Precise Ignorance


Posted: Apr 23 2014 by Nathan Grawe

Last week the Census Bureau made a very sad announcement. Starting in the fall of 2014, they will be using revised questions in the Current Population Survey concerning insurance coverage. The old questions, used for three decades, had been widely regarded by health experts as flawed because the wording created ambiguity that likely led to overstating the number of uninsured. The new questions were tested alongside the old last year, and it looks like the hypothesis was right: estimates of the uninsured were a couple percentage points lower with the new questions than with the old (10.6% vs. 12.5%).

The problem is that the wording of the new questions is so different from the old that the Census Bureau does not believe the two to be comparable. And so their data on insurance coverage will include an irreconcilable break between 2013 and 2014.

This is a real blow to evidence-based policy. The Affordable Care Act is arguably the biggest federal policy reform of the last generation. But because we have chosen to change methodologies right at the point of implementation, it will be difficult to assess its effectiveness. The fact that the old questions were biased is, by comparison, a second-order concern. So long as the bias remained more or less the same over time, the resulting data would still have been useful in estimating the ACA's effect. Instead, a highly contentious debate is likely to continue needlessly for want of empirical evidence.

The underlying issue here is of broader QR application. Often we are offered fancy methods which promise to eliminate a bias (at least in "large" samples) at the cost of larger standard errors. One example is the use of instrumental variables as a consistent alternative to ordinary least squares. I'm often left thinking I'd rather have my more-precise-but-biased estimate. If I understand the source of the bias I can often back it out of the answer. By contrast, the imprecise-yet-accurate estimator often tells me very little. I'd rather be precisely wrong (in a predictable way) than vaguely right.
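A minimal simulation sketch of that trade-off (my own illustration, not from the post; all parameter values are assumptions): with an endogenous regressor, OLS is biased but tightly clustered, while a valid-but-modest instrument gives an estimate centered on the truth with a much larger standard error.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0            # true effect of x on y
n, reps = 500, 2000   # sample size and number of simulated datasets

ols_est, iv_est = [], []
for _ in range(reps):
    u = rng.normal(size=n)                 # unobserved confounder: shifts both x and y
    z = rng.normal(size=n)                 # instrument: affects y only through x
    x = 0.3 * z + u + rng.normal(size=n)   # regressor is endogenous (depends on u)
    y = beta * x + u + rng.normal(size=n)
    ols_est.append(np.cov(x, y)[0, 1] / np.var(x, ddof=1))   # OLS slope
    iv_est.append(np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1])   # IV (Wald) slope

for name, est in (("OLS", ols_est), ("IV ", iv_est)):
    est = np.asarray(est)
    print(f"{name}: mean = {est.mean():.2f}, std dev = {est.std():.2f}")
```

In runs like this the OLS estimates cluster tightly around a value well above the true beta, while the IV estimates average out near beta but are several times more variable, which is the "precise-but-biased" versus "imprecise-yet-accurate" choice described above.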

And this is exactly where we are with the new Census questions. I'll trust the survey experts that these are better questions, but the sad result will be that we can say precisely nothing.

The QR of Water


Posted: Apr 15 2014 by Nathan Grawe

The New York Times reports that China is planning a large desalination plant to provide 1 million tons of fresh water to Beijing. The paper cites planners' beliefs that the plant "could account for one-third of the water consumption of Beijing, a city of more than 22 million people." Interestingly, the cost of desalinated water isn't as great as you might expect. The Times estimates a cost of $1.29 per ton or about twice the cost of tap water. (Nice job by the journalist in providing that useful context!)

This is potentially important because we consume a lot of water. The USGS estimates that we in the US use about 400 billion gallons per day. Given that a ton of water is about 240 gallons, that's roughly 1.7 billion tons of water per day. At an added cost of 65 cents per ton, that means we save about $1 billion every day we are able to avoid desalination to meet our water needs. But with over 96% of the earth's water accounted for by sea water, it is comforting to know that for a bit less than half the cost of Social Security we could find alternative water sources. Here's hoping it doesn't come to that!
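Here is a quick sketch of that arithmetic (rounded inputs taken from the figures above, not official data):

```python
# Back-of-the-envelope check of the water figures quoted above.
GALLONS_PER_DAY = 400e9          # USGS estimate of total US water use
GALLONS_PER_TON = 240            # roughly 2,000 lb of water at ~8.3 lb per gallon
ADDED_COST_PER_TON = 1.29 / 2    # desalinated water costs about twice tap water

tons_per_day = GALLONS_PER_DAY / GALLONS_PER_TON
added_daily_cost = tons_per_day * ADDED_COST_PER_TON

print(f"{tons_per_day / 1e9:.2f} billion tons of water per day")       # ~1.67 billion
print(f"${added_daily_cost / 1e9:.2f} billion per day in added cost")  # ~$1.1 billion
print(f"${added_daily_cost * 365 / 1e9:.0f} billion per year")         # ~$390 billion
```

The annual total is what sits behind the Social Security comparison above.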

QR in Bracketology


Posted: Mar 21 2014 by Nathan Grawe

It's March Madness time. This year, a few more people may be playing brackets. With financial backing from Warren Buffett, Quicken Loans is offering $1 billion to anyone who filled out a perfect NCAA bracket. As the USA Today article I've linked to notes, that's not as big a risk for Buffett as it sounds because there are 9,223,372,036,854,775,808 (9 quintillion) different possible brackets. Of course, not all of those are equally likely. Nate Silver and the folks at fivethirtyeight.com have put a lot of effort into picking the games. Silver figures that all of his number crunching dramatically improved his odds of a perfect bracket...to 1 in "7.4 billion". (Actually it is 1 in 7,419,071,319; don't forget those last 19+ million outcomes!)
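For the curious, the 9-quintillion figure is just 2^63: sixty-three games in the main 64-team bracket, each with two possible outcomes. A tiny sketch (Silver's 7,419,071,319 is simply quoted from the post; the snippet only shows how much rounding it to "7.4 billion" leaves out):

```python
total_brackets = 2 ** 63
print(f"{total_brackets:,}")               # 9,223,372,036,854,775,808

silver_odds = 7_419_071_319                # Silver's estimate, as quoted above
print(f"{silver_odds - 7_400_000_000:,}")  # 19,071,319 brackets beyond "7.4 billion"
```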

So, how's it all working out? Not surprisingly, Quicken reports at least 50 people have perfect brackets after the first 16 games. Unfortunately for Nate Silver, he didn't see Dayton beating Ohio State.
