Numerical Fiction that Didn’t Take
Last Friday a dandy bit of numerical fiction hit the wires. I thought it was inflammatory and misleading on an issue of great national importance. On the other hand, I was looking forward to pointing out what a crock it was.
So you can understand my disappointment when it got only limited play and was more or less gone and forgotten by the weekend. Oh well.
As a consolation to myself, I’m going to write it up anyway. This won’t cure the letdown, but it may make me feel a little better.
The Reuters story had what I thought was a can’t-miss headline: “No Health Coverage Tied to 45,000 Deaths a Year.” How could that not be the lead story for every Saturday paper in the country? Admittedly, for many blue-state types in the media it may have had a certain dog-bites-man aspect. But still, I would think that a dead body every 12 minutes would make great copy. I guess this is why I am not an editor.
This number of annual deaths due to not having health insurance came from what can generously be called a “Harvard study” released last week. The nature of the Harvard affiliation is a little murky, and as for the study part, well, it was published by something called Physicians for a National Health Program, a group of doctors lobbying for single-payer national health insurance. To their great credit, the study authors aren’t shy about admitting to a bias that might cause a person to question their dedication to the scientific method.
As readers of BMA know, Curmudgeon’s Law of Numerical Fiction is a three-part test. One, the number must reinforce previously held beliefs. Check. Two, it must be remarkably extreme, but not ridiculous. If anything, this one may satisfy test #1 so well that it is not extreme enough. And three, there can be no organized or credible opposition to the number. Double check. There are lots of healthcare-related lobbying groups working seven-day weeks in Washington right now, but none of them argue that not having insurance is a good idea. (Americans for Not Seeing Doctors? The National What-Me-Worry Coalition?)
The study used fairly old data from a survey of the health status of Americans taken in 1988-1994, with a follow-up in 2000. The key survey question in 1988-1994 for these researchers was “Do you have health insurance?” and the big one for the 2000 follow-up was “Are you still breathing?”
Two big groups were excluded from the data by the researchers. They tossed out everybody younger than 17 or older than 64. I guess that’s reasonable. But they also chucked the data for the approximately 1 in 7 survey respondents covered by some form of government insurance: Medicare, Medicaid, VA benefits, etc. If I were a cynic, I would suspect that the results for those people weren’t what advocates for a single-payer system would like to publicize.
So the study has two groups, 6,655 people with private health insurance in 1988-1994 and 2,350 with no insurance when interviewed during that period. By 2000, 6,455 or 97.0% of the insured folks were still with us, while 2,272 or 96.7% of the uninsured were. Put another way, if the uninsured had the same survival rate as the insured, another 7 of them would have been alive in 2000.
The critical question to ask here is, “Is the difference between 2,272 and 2,279 still-living uninsured people statistically significant?”
And the answer is "Are you freaking kidding me?" As anybody versed in statistical analysis can tell you, that difference is so far inside the bounds of what you would expect from random chance that it’s just not funny. (Well, okay, it’s a little funny.)
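For anyone who wants to check that at home, here is a back-of-the-envelope version of the standard two-proportion z-test, treating the survey as a simple random sample. (The real survey has a more complicated design, so consider this a rough sketch rather than a replication, but it is close enough to make the point.)

```python
from math import sqrt, erf

# Counts from the study: still alive in 2000, out of those interviewed in 1988-94
insured_n, insured_alive = 6655, 6455      # 97.0% survival
uninsured_n, uninsured_alive = 2350, 2272  # 96.7% survival

p1 = insured_alive / insured_n
p2 = uninsured_alive / uninsured_n

# At the insured survival rate, we'd expect about 7 more uninsured survivors
print(f"expected uninsured survivors: {p1 * uninsured_n:.0f}")  # ~2279

# Pooled two-proportion z-test of the null that the two rates are equal
pooled = (insured_alive + uninsured_alive) / (insured_n + uninsured_n)
se = sqrt(pooled * (1 - pooled) * (1 / insured_n + 1 / uninsured_n))
z = (p1 - p2) / se

# Two-sided p-value from the normal CDF
p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
print(f"z = {z:.2f}, two-sided p = {p_value:.2f}")  # roughly z = 0.76, p = 0.45
```

A p-value around 0.45 means a gap this size or bigger would turn up by dumb luck nearly half the time, which is about as insignificant as insignificant gets.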
Moreover, even if you assume that the uninsured group does tend to die faster, there are plenty of other factors besides a lack of insurance that might explain it. They were, for example, more likely to be male and to be smokers. (Then again, they were on average younger.)
Of course, the basic conceptual problem with a study like this is one of causality. Did more uninsured people die because they were uninsured or because they were the sort of people, e.g. poor and/or irresponsible, who would be uninsured? Barring the use of a controlled experiment in which people are randomly given insurance, no amount of statistical analysis can cure this.
Of the very limited coverage this study/press release got, only CNNHealth bought it hook, line, and sinker. Even CBS managed to explain the case against it in an understandable manner.
But the best coverage may have come from the only newspaper I’ve ever written for, The Harvard Crimson. The undergrad writing their story got an actual Harvard professor to comment. With the candor that you can only expect from someone who thinks he is talking to a small campus audience, Amitabh Chandra, a professor of health policy at the Kennedy School of Government, said, “While I don’t disagree with the study’s basic premise, this particular paper is pure advocacy, in an effort to get the biggest number possible.”
In other words, to the highly numerate, this is fiction.
6 Comments
By Kosmo @ The Casual Observer, September 25, 2009 @ 3:13 pm
Thanks, Frank. It’s been a long week – I needed the laugh.
By bex, September 26, 2009 @ 4:37 am
I absolutely hate it that health news stories like this don’t give error bars. Pollsters do it all the time, and folks are relatively familiar with the concept… so why are health news editors so ignorant?
Offhand, I’d say that this study would have at least a 3% margin of error… so you could realistically make the claim that NOT having health insurance means you’re more likely to be alive in 10 years.
Of course… this could be because 19-24 year olds are the least likely to have health insurance… AND usually healthier than the general population…
By driveby, September 26, 2009 @ 2:59 pm
“…so you could realistically make the claim that NOT having health insurance means you’re more likely to be alive in 10 years…”
Um, no. That’s not what error bars do at all. You’re not allowed to pick whichever value within the given range you like best and “realistically” claim that that’s what the observations show. That is, in fact, a much worse fib than just presenting the naked result without a confidence interval. It is an outright lie.
If you have the right interval (let us say, +/-3% with 95% confidence), the realistic claim would be that the study showed a small risk associated with being uninsured, but that risk was so small relative to the limitations of the study that it was statistically insignificant.
By zach, September 27, 2009 @ 7:24 pm
Bill Maher used that statistic on his show on 9/18. He rattled off a few statistics under the banner of “Stats don’t lie”: the 45k a year who die for being uninsured, the 700k who will go broke because of medical bills, and something about record profits for insurance companies. Then he quoted Obama saying the insurance companies serve a purpose and asked the panel, “What purpose?”
By Frank Curmudgeon, September 27, 2009 @ 9:17 pm
You guys are bringing out the stats prof in me. The 95% confidence interval around the 96.7% is about +/- 0.7%, so the true number could reasonably be anywhere from 96.0% to 97.4%. Since the insured survival rate is nicely inside this interval at 97.0%, we cannot reject the null hypothesis that the two groups have the same survival rate, i.e. that the difference we see is due only to sampling error.
Driveby is absolutely correct that, based on this data, we can’t realistically claim that not having insurance keeps you alive. But the data is just as consistent with that theory as with the theory that there is no difference, or the theory that the uninsured die more often. The bottom line is that the headline conclusions of the study are unsupported by the data.
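For anybody who wants to replicate the interval, here is the quick version, again pretending the sample is simple random (the survey’s actual design would widen things somewhat):

```python
from math import sqrt

# Uninsured group: 2,272 of 2,350 still alive in 2000
n, alive = 2350, 2272
p = alive / n  # 0.967

# Naive 95% confidence interval for the survival proportion
margin = 1.96 * sqrt(p * (1 - p) / n)
print(f"{p:.1%} +/- {margin:.1%} -> ({p - margin:.1%}, {p + margin:.1%})")
# prints 96.7% +/- 0.7% -> (96.0%, 97.4%); the insured 97.0% sits comfortably inside
```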
By Jim, September 28, 2009 @ 5:25 pm
The study does actually control for factors such as age, smoking, weight, income, etc. They do publish the 95% confidence intervals on the mortality figures given.
But the number definitely does seem inflated. Still, I’m pretty sure the real # is >> 0.