What I found most curious was this review in Psychology Today. The reviewer doesn't like the conclusions in McDermott et al. "There are covert but powerful social norms and expectations, however, there is simply too much information missing in this research for me to conclude that divorce is contagious. I think this statement insults the integrity of every person who finds themselves facing this incredibly difficult choice.... I hope the readers of these articles can see past the surface level findings. Divorce is not contagious. Divorce is not an epidemic and it is not a disease that is transmitted."
Like the reviewer, I was initially skeptical of the study. After all, there are many factors that determine divorce, and so it is prudent to ask, "Controlling for what?" Reading the study, there is some reason for concern: the primary control variables are just age and education. However, it's a remarkable data set. (Read the "Sample" section of the paper linked above.) One neat aspect is that the authors can distinguish between person A viewing person B as a relation and person B viewing person A as a relation. This permits them to estimate separate effects depending on the direction of the relationship. As the authors explain, if a confounding relationship explains the effect, then it shouldn't matter which way the relationship flows. Yet they find a strong "contagion" effect when the divorcee is viewed as a relation. When the relationship only runs the other way, the effect size is cut to one-third and is no longer statistically significant.
While I am not ready to declare the "law of divorce contagion," the above seems like evidence I have to deal with. To dismiss careful scientific research out of hand just because we don't like the results limits our potential to learn new things. And presumably that's why we are doing the research, right?
However, the curious thing is that the text often contradicts the data. For instance, the first graph includes a caption noting that mortality rates have dropped 17% over the period studied, but that "it looks like the progress stops in the mid-1990s." The data, on the other hand, show that the mortality rate fell almost 8% from 1995 through 2010. In other words, over 45% of the drop in mortality occurred in the 36% of the period following 1995. Far from gains "stopping" in the mid-1990s, we've apparently done better in that time.
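A quick sanity check on that arithmetic, using the rounded figures quoted above (a 17% total decline, almost 8% of it after 1995, with the post-1995 window covering 36% of the study period):

```python
# Rounded figures quoted in the post above.
total_drop = 17.0             # % decline in mortality over the full study period
drop_after_1995 = 8.0         # % decline from 1995 through 2010
time_share_after_1995 = 0.36  # fraction of the study period falling after 1995

share_of_drop = drop_after_1995 / total_drop
print(f"{share_of_drop:.0%} of the decline came after 1995")  # ~47%

# The share of the decline after 1995 exceeds the share of time elapsed,
# so progress did not stop in the mid-1990s.
assert share_of_drop > time_share_after_1995
```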
That we've made progress in mortality since 1995 is all the more amazing given two facts. 1) Death is ultimately inevitable: as we eliminate the easier-to-prevent deaths, those that remain will be awfully difficult to avoid. 2) As the second slide shows, the population has aged considerably in recent years.
In another instance (slide 10), the authors note they are surprised that mortality among those ages 45-54 has been steady since 2000 because "cancer and heart disease...have become much less deadly over the years." This caption sits directly above a graph showing that deaths from these causes in this age group have actually increased over the same period.
Slide 13 makes a more common mistake. The authors assert that cars generally kill younger people, but fail to note that the younger age groups include 20 birth cohorts while the older groups include only 10. Little surprise, then, that there are about twice as many deaths in the younger groups in recent years.
I think we can understand this kind of problem as a form of confirmation bias. We all tend to see evidence for what we are looking for. In this case, it's fairly clear that the authors saw evidence for their priors that really wasn't there. But this is just a reflection of our flawed humanity: we are prone to seeing evidence for what we believe even when the evidence contradicts our views. What I appreciate about QR is that it at least provides a chance that others might more readily correct our errors. Better yet, maybe we can learn to see our own errors and re-evaluate our priors.
The problem is that the wording of the new questions is so different from the old that the Census Bureau does not believe the two to be comparable. And so their data on insurance coverage will include an irreconcilable break between 2013 and 2014.
This is a real blow to evidence-based policy. The Affordable Care Act is arguably the biggest federal policy reform of the last generation. But because we have chosen to change methodologies right at the point of implementation, it will be difficult to assess its effectiveness. The fact that the old questions were biased is, by comparison, a second-order concern. So long as the bias remained more or less the same over time, the resulting data would still have been useful in estimating the ACA's effect. Instead, a highly contentious policy debate is likely to continue needlessly for want of empirical data.
The underlying issue here has broader QR application. Often we are offered fancy methods that promise to eliminate a bias (at least in "large" samples) at the cost of larger standard errors. One example is the use of instrumental variables as a consistent alternative to ordinary least squares. I'm often left thinking I'd rather have my more-precise-but-biased estimate: if I understand the source of the bias, I can often back it out of the answer. By contrast, the imprecise-yet-accurate estimator often tells me very little. I'd rather be precisely wrong (in a predictable way) than vaguely right.
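The OLS-versus-IV trade-off can be seen in a toy simulation. This is an invented example, not anything from the Census debate: the true slope is 1, the regressor is deliberately made endogenous (correlated with the error), and a valid but modest instrument is available. OLS comes out biased but tightly clustered; IV is centered on the truth but noisier.

```python
import numpy as np

# Toy setup (all numbers invented for illustration):
#   y = beta * x + u, with x correlated with u, so OLS is inconsistent.
#   z is a valid instrument: it moves x but is independent of u.
rng = np.random.default_rng(0)
true_beta = 1.0
n, reps = 200, 2000
ols_estimates, iv_estimates = [], []

for _ in range(reps):
    z = rng.normal(size=n)                    # instrument
    u = rng.normal(size=n)                    # structural error
    x = 0.5 * z + u + rng.normal(size=n)      # endogenous regressor
    y = true_beta * x + u
    ols_estimates.append((x @ y) / (x @ x))   # OLS slope (no intercept)
    iv_estimates.append((z @ y) / (z @ x))    # simple IV (Wald) estimator

print("OLS: mean %.3f, sd %.3f" % (np.mean(ols_estimates), np.std(ols_estimates)))
print("IV:  mean %.3f, sd %.3f" % (np.mean(iv_estimates), np.std(iv_estimates)))
# OLS is centered well above 1.0 (biased) but tightly clustered;
# IV is centered near 1.0 but with a noticeably larger standard deviation.
```

In this setup the OLS bias is stable and predictable across draws, which is exactly the "precisely wrong in a predictable way" situation described above.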
And this is exactly where we are with the new Census questions. I'll trust the survey experts that these are better questions, but the sad result will be that we can say precisely nothing.
This is potentially important because we consume a lot of water. The USGS estimates that we in the US use about 400 billion gallons per day. Given that a ton of water is about 240 gallons, that's roughly 1.7 billion tons. At an added cost of 65 cents per ton, that means we save about $1 billion every day we are able to avoid desalination to meet our water needs. But with over 96% of the earth's water accounted for by sea water, it is comforting to know that for a bit less than half the annual cost of Social Security we could find alternative water sources. Here's hoping it doesn't come to that!
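The back-of-the-envelope numbers above check out, using the figures as given (400 billion gallons per day, roughly 240 gallons per ton, 65 cents per ton):

```python
# Figures quoted in the paragraph above.
US_USE_GALLONS_PER_DAY = 400e9  # USGS estimate of daily US water use
GALLONS_PER_TON = 240           # ~2000 lb / 8.34 lb per gallon of water
DESAL_COST_PER_TON = 0.65       # added cost of desalination, dollars per ton

tons_per_day = US_USE_GALLONS_PER_DAY / GALLONS_PER_TON
daily_cost = tons_per_day * DESAL_COST_PER_TON

print(f"{tons_per_day / 1e9:.2f} billion tons per day")   # 1.67
print(f"${daily_cost / 1e9:.2f} billion per day")         # $1.08
print(f"${daily_cost * 365 / 1e9:.0f} billion per year")  # $395
```

The annualized figure, just under $400 billion, is what makes the Social Security comparison work.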
It's March Madness time. This year, a few more people may be playing brackets. With financial backing from Warren Buffett, Quicken Loans is offering $1 billion to anyone who fills out a perfect NCAA bracket. As the USA Today article I've linked to notes, that's not as big a risk for Buffett as it sounds, because there are 9,223,372,036,854,775,808 (9 quintillion) different possible brackets. Of course, not all of those are equally likely. Nate Silver and the folks at fivethirtyeight.com have put a lot of effort into picking the games. Silver figures that all of his number crunching dramatically improved his odds of a perfect bracket...to 1 in "7.4 billion". (Actually it is 1 in 7,419,071,319; don't forget those last 19+ million outcomes!)
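Both quoted counts are easy to verify: the 64-team bracket has 63 games, each with two possible winners, and the "7.4 billion" shorthand rounds off just over 19 million outcomes.

```python
# 63 games in a 64-team single-elimination bracket, 2 winners per game.
num_games = 63
total_brackets = 2 ** num_games
print(total_brackets)  # 9223372036854775808, the 9-quintillion figure

# Silver's quoted odds, and how much the "7.4 billion" shorthand leaves off.
silver_odds = 7_419_071_319
print(silver_odds - 7_400_000_000)  # 19071319, the "last 19+ million"
```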
So, how's it all working out? Not surprisingly, Quicken reports at least 50 people have perfect brackets after the first 16 games. Unfortunately for Nate Silver, he didn't see Dayton beating Ohio State.