I am preparing to lead an off-campus study program this summer to study (among other things) the Industrial Revolution. (Note: I'll be posting less frequently over the next few months as a result.) In course prep I came across a great sentence. Joel Mokyr (p. 21) is in the middle of an argument that what explains the Industrial Revolution is an Industrial Enlightenment. He argues that the great inventions of the annus mirabilis (1769) are viewed that way only because of what happened next. After all, the history prior to the Industrial Revolution is replete with inventions that produced one-time increases in living standards without initiating persistent growth.
So, what is different about the late 18th century in Britain (and, to a great degree, Northern Europe)? In addition to favorable institutional changes which supported the rule of law, Mokyr argues for an Industrial Enlightenment that valued the systematized sharing of scientific knowledge for the purpose of material improvement. This stands in contrast with earlier Aristotelian goals of mere understanding. He writes, "[T]he methods of scientific endeavor spilled over into the technological sphere: the concepts of measurement, quantification and accuracy, which had never been an important part of the study of nature, gradually increased in importance. The noted historian of science Alexandre Koyré (1968) argued that the scientific revolution implied a move from a world of 'more or less' to one of measurement and precision."
For me, that's the real crux of QR education. We are waging a war for progress based on the appreciation for the power that quantitative evidence lends to the cause of human advancement. It really does matter "what the numbers show"–not just in some narrow sense, but in a deep, philosophical way that entirely alters the way we live and approach problems. In essence, Mokyr is arguing that the QR state of mind is at the center of why, after millennia of more or less stagnant living standards, we have arrived at an expectation of growth.
Recent proposals have suggested radical changes–elimination of mail service on Saturdays and/or Tuesdays. But interestingly, a more traditional solution may be more likely to work today than in the past: postage rate increases. While the regularity and annoyance of postage increases leads many to believe that stamps have seen a dramatic increase in price, in fact the price has more or less tracked inflation over the last 40 years. It may be reasonable to expect to pay more in inflation-adjusted terms given the lost economies of scale. Estimates of the elasticity of demand–the percent change in quantity due to a one-percent change in price–suggest that Post Office sales will fall 0.35% for each 1% increase in price. That means revenues will increase 0.65% for a 1% increase in prices.
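To see how a -0.35 elasticity translates into that 0.65% revenue gain, here's a minimal sketch. The elasticity is from the estimates above; the baseline price and quantity are made-up round numbers purely for illustration.

```python
# Revenue response to a 1% price increase, given a demand elasticity of -0.35.
elasticity = -0.35

price = 1.00      # hypothetical baseline price per piece ($)
quantity = 100.0  # hypothetical baseline volume (billions of pieces)

new_price = price * 1.01                           # a 1% price increase
new_quantity = quantity * (1 + elasticity * 0.01)  # volume falls 0.35%

old_revenue = price * quantity
new_revenue = new_price * new_quantity

pct_change = (new_revenue / old_revenue - 1) * 100
print(f"Revenue change: {pct_change:.2f}%")  # ~0.65%, as in the text
```

Because demand is inelastic (the volume lost is smaller than the price gained), revenue rises by roughly the difference between the two percentages.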
What's more, this figure has dropped in recent years; we are getting less price sensitive in our mail demand. That makes sense if we suppose that those items which we still send through the mail are harder to replace with online alternatives.
To be sure, postage increases can't entirely fix the Postal Service's financial problems. Their deficit has been on the order of $5 to $15 billion on $66 billion of revenue. But a 10% increase in postage rates might raise over $4 billion.
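A back-of-the-envelope check of that $4 billion figure, using the $66 billion revenue base and the -0.35 elasticity quoted above (and treating the elasticity as constant over the full 10% jump, a simplification):

```python
revenue = 66.0         # current annual revenue, $ billions (from the text)
elasticity = -0.35     # estimated demand elasticity (from the text)
price_increase = 0.10  # a 10% across-the-board rate increase

quantity_change = elasticity * price_increase  # volume falls 3.5%
new_revenue = revenue * (1 + price_increase) * (1 + quantity_change)
print(f"Added revenue: ${new_revenue - revenue:.1f} billion")  # ~$4.1 billion
```

That is indeed "over $4 billion"–real money, but well short of closing a deficit that has run as high as $15 billion.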
Stuck on a plane, I found some interesting light reading in this month's Delta Sky magazine. An article by Nancy Gohring entitled "The Future of Intelligent Lighting" describes how new streetlights might be made "smarter" to improve city services while reducing costs. (The article also notes the ACLU's concerns about how these lights will be used–a fair point.) What caught my eye was a nice bit of "peripheral QR"–the use of quantitative evidence to enrich description. Specifically, I learned that the new LED lights will use about 15% as much electricity and last almost 20 times as long. The savings are made potentially large by the scope of the North American streetlight system, which currently employs 1 billion lamps. (That's a great Fermi problem for those looking to give their students a little experience forming estimates!) Enlightening!
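In the spirit of that Fermi problem, here is one possible back-of-the-envelope estimate. The 15% figure and the 1 billion lamp count come from the article; the lamp wattage, nightly burn hours, and electricity price are rough assumptions of my own.

```python
# Fermi estimate of annual electricity savings from an LED streetlight swap.
lamps = 1e9                # North American streetlights (article's figure)
watts_old = 100            # assumed draw of a conventional lamp (W)
hours_per_year = 12 * 365  # assumed ~12 hours of operation per night
price_per_kwh = 0.10       # assumed average electricity price ($/kWh)

kwh_old = lamps * watts_old * hours_per_year / 1000
kwh_new = kwh_old * 0.15   # LEDs use about 15% as much electricity
savings = (kwh_old - kwh_new) * price_per_kwh
print(f"Rough annual savings: ${savings / 1e9:.0f} billion")
```

Students can argue over every input–that's the point of a Fermi exercise–but whatever reasonable numbers you pick, the scale of the system makes the savings large.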
What I found most curious was this review in Psychology Today. The reviewer doesn't like the conclusions in McDermott et al. "There are covert but powerful social norms and expectations, however, there is simply too much information missing in this research for me to conclude that divorce is contagious. I think this statement insults the integrity of every person who finds themselves facing this incredibly difficult choice.... I hope the readers of these articles can see past the surface level findings. Divorce is not contagious. Divorce is not an epidemic and it is not a disease that is transmitted."
Like the reviewer, I was initially skeptical of the study. After all, there are many factors that determine divorce and so it is prudent to ask, "Controlling for what?" Reading the study, there is some reason for concern. The primary control variables are age and education. However, it's a neat data set. (Read the "Sample" section of the paper linked above.) One valuable aspect is that they can distinguish between person A viewing person B as a relation vs. person B viewing person A as a relation. This permits the authors to estimate separate effects depending on the direction of the relationship. As the authors explain, if a confounding relationship explains the effect then it shouldn't matter which way the relationship flows. Yet, they find a strong "contagion" effect when the divorcee is viewed as a relation. When the relationship only runs the other way, the effect size is cut to one-third and is no longer statistically significant.
While I am not ready to declare the "law of divorce contagion," the above seems like evidence I have to deal with. To summarily dismiss careful scientific research just because we don't like the results limits our potential to learn new things. And presumably that's why we are doing the research, right?
However, the curious thing is that the text often contradicts the data. For instance, the first graph includes a caption that notes that mortality rates have dropped 17% over the period studied, but that "it looks like the progress stops in the mid-1990s." The data, on the other hand, report that the mortality rate fell almost 8% from 1995 through 2010. In other words, over 45% of the drop in mortality was experienced in the 36% of time following 1995. Far from gains "stopping" in the mid-1990s, we've apparently done better in that time.
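The arithmetic behind that comparison, using the figures quoted above (and treating the percentage declines as roughly additive, which is close enough for drops of this size):

```python
total_drop = 0.17       # mortality fell 17% over the full period (caption)
drop_since_1995 = 0.08  # and almost 8% from 1995 through 2010 (the data)
time_share = 0.36       # 1995-2010 is 36% of the period studied

drop_share = drop_since_1995 / total_drop
print(f"{drop_share:.0%} of the decline came in {time_share:.0%} of the time")
```

Roughly 47% of the improvement arrived in the last 36% of the period–a faster, not slower, pace of progress.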
That we've made progress in mortality since 1995 is all the more amazing given two facts. 1) Death is ultimately inevitable: As we eliminate deaths, those that remain will be awfully difficult to avoid. 2) As the second slide shows, the population aged considerably in recent years.
In another instance (slide 10), the authors note they are surprised by the fact that mortality among those ages 45-54 has been steady since 2000 because "cancer and heart disease...have become much less deadly over the years." This caption is immediately over a graph showing that deaths from these causes in this age group have actually increased over this time period.
Slide 13 contains a more common mistake. The authors assert that cars generally kill younger people, but fail to note that the younger age groups include 20 birth cohorts while the older groups include only 10. Little surprise there are about twice as many deaths in the younger groups in recent years.
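One simple guard against this mistake is to divide each group's deaths by the number of birth cohorts it spans. A sketch with made-up death counts chosen purely to illustrate the point:

```python
# Deaths per birth cohort, normalizing for unequal age-group widths.
# The death counts below are hypothetical, for illustration only.
groups = {
    "0-19":  {"deaths": 8000, "cohorts": 20},  # spans 20 birth years
    "20-29": {"deaths": 4100, "cohorts": 10},  # spans only 10
}
for name, g in groups.items():
    rate = g["deaths"] / g["cohorts"]
    print(f"{name}: {rate:.0f} deaths per cohort")
```

With twice as many cohorts, the younger group's raw count is roughly double even though its per-cohort rate is about the same–exactly the illusion on slide 13.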
I think we can understand this kind of problem as a form of confirmation bias. We all tend to see evidence for what we are looking for. Now, in this case, it's ultimately evident that the authors saw evidence for their priors that really wasn't there. But this is just a reflection of our flawed humanity–we are prone to seeing evidence for what we believe even when the evidence contradicts our views. What I appreciate about QR is that it at least provides a chance that others might more readily correct our error. Better yet, maybe we can learn to see our own errors and re-evaluate our priors.