October 13, 2006
An Iraqi View On Lancet Study
Zeyad of Healing Iraq:
One problem is that the people dismissing – or in some cases, rabidly attacking – the results of this study, including governmental officials who, arguably, have an interest in doing so, have offered no other alternative or not even a counter estimate. This is called denial. When you have no hard facts to discredit a scientific study, or worse, if you are forced to resort to absurd arguments, such as "the Iraqis are lying," or "they interviewed insurgents," or "the timing to publish this study was to affect American elections," or "I don't like the results and they don't fit into my world view, therefore they have to be false," it is better for you to just shut up. From the short time I have been here, I am realising that some Americans have a hard time accepting facts that fly against their political persuasions.

Now I am aware that the study is being used here by both sides of the argument in the context of domestic American politics, and that pains me. As if it is different for Iraqis whether 50,000 Iraqis were killed as a result of the war or 600,000. The bottom line is that there is a steady increase in civilian deaths, that the health system is rapidly deteriorating, and that things are clearly not going in the right direction. The people who conducted the survey should be commended for attempting to find out, with the limited methods they had available. On the other hand, the people who are attacking them come across as indifferent to the suffering of Iraqis, especially when they have made no obvious effort to provide a more accurate body count. In fact, it looks like they are reluctant to do this.
There's much more. Read it all.
Posted at October 13, 2006 05:50 PM | TrackBack

Well, you said it yourself. What Zeyad doesn't realize is that we care too much to want to know.
Posted by: Donald Johnson at October 13, 2006 06:40 PM

My chest is aching. This study makes it so clear that even those of us trying to take in the full extent of the horror have been in denial. Nothing this country could ever do would make up for anything but the smallest part of the damage.
Posted by: Nell at October 13, 2006 08:26 PM

I'm a little surprised that our flag-humping hawks are offended at the suggestion that their war has killed as many as 655,000 Iraqis.
Are these the same people who 3 1/2 years ago were giddy at the destruction wrought by 'Shock and Awe' -- laughing out loud when Ollie North declared that the US had just launched an 'urban renewal project' in Baghdad?
Are these the same people who, in March 2003, were chanting the mantras 'Baghdad Delenda Est' and 'let slip the dogs of war'?
Are these the same people who tittered when Rich Lowry 'joked' that we should nuke Mecca?
Are these the same people who said "fuckin' A" when Robert Kagan pronounced that the US should shitcan its reluctance to kill large numbers of rag heads and adopt a 'pagan warrior' ethic, steely-eyed and indifferent?
What happened to our FReepers, Young Republicans and PNAC'ers? Have they lost their balls?
Polls: you love them when they go your way, you hate them when they do not.
However you feel about them, the fact is that they are often quite accurate, though of course there are exceptions, as in the now infamous headline from the days of yore, "Dewey Defeats Truman."
From San Francisco Gate:
http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2006/10/12/MNGUTLNP6C1.DTL&hw=johns+hopkins&sn=001&sc=1000
"The sampling is solid. The methodology is as good as it gets," said John Zogby, whose Utica, N.Y.-based polling agency, Zogby International, has done several surveys in Iraq since the war began. "It is what people in the statistics business do."
Ronald Waldman, an epidemiologist at Columbia University who worked at the Centers for Disease Control and Prevention for many years, told the Washington Post the survey method was "tried and true." He said that "this is the best estimate of mortality we have."
Frank Harrell Jr., chairman of the biostatistics department at Vanderbilt University, told the Associated Press the study incorporated "rigorous, well-justified analysis of the data."
* * *
So on the one hand we have people who are actually knowledgeable about statistics saying the study and its methodology are sound, and on the other hand we have the incredible George Bush (incredible as in having absolutely no credibility) dismissing it as not credible.
Yesterday Jonathan gave us a link to Daniel Davies at Crooked Timber, which is well worth a read and quite convincing concerning the credibility of the study.
But set aside this credibility farce for a moment and consider that, whatever you believe the number to be, 325,000 or 650,000, Iraq, just like every endeavor undertaken by the Republican majority, has been irrational, stupid, unscientific, and a complete and utter failure.
whether you agree or disagree with mr. zeyad's comments, the use of bold type to make your point is manipulation. please let the reader make their own decision without subliminal messaging, as we are all intelligent enough to make our own decisions (or so i hope)
Posted by: at October 13, 2006 09:30 PM

This will be a dry, boring comment, but I'd like to dispel some misconceptions about the study. I read it and went back to some other mortality studies done in Kosovo and Africa to see the kind of assumptions the authors make.
Disclaimer: I don't have the expertise to evaluate that paper professionally. I understand the theory (I do math for a living), but the practice side of it requires expert knowledge in epidemiology that I don't have.
Zeyad's post is cogent, illuminating, and highly recommended reading. I just want to point out why the reasons he gives for being skeptical are wrong.
The survey ignores the fact that the violence is highly uneven from one region to the next. Zeyad is troubled by that, but in fact the opposite would be troubling. To bias the sample by a predicted distribution of violence would be a huge mistake. Any sampling method should be independent of the parameter one wants to measure.
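To make the independence point concrete, here is a minimal, purely illustrative Python sketch; the regional death rates, the number of regions, and the sample sizes are invented for the example and have nothing to do with the study's actual data. It compares drawing survey sites uniformly at random with drawing them only from the regions predicted to be most violent:

```python
import random

random.seed(0)

# Hypothetical country: 50 regions with very uneven "true" death rates.
# All numbers are invented purely to illustrate the sampling point.
regions = [{"rate": 0.002 if i < 40 else 0.05} for i in range(50)]
true_rate = sum(r["rate"] for r in regions) / len(regions)

def estimate(sampled_regions, people_per_region=200):
    """Estimate the death rate from the sampled regions only."""
    deaths = people = 0
    for r in sampled_regions:
        people += people_per_region
        deaths += sum(random.random() < r["rate"] for _ in range(people_per_region))
    return deaths / people

# 1) Site selection independent of the quantity being measured: unbiased on average.
uniform = [estimate(random.sample(regions, 15)) for _ in range(200)]

# 2) Site selection driven by where we *expect* the violence to be: badly biased.
violent_only = [estimate(random.sample(regions[40:], 10)) for _ in range(200)]

print(f"true rate:              {true_rate:.4f}")
print(f"uniform sampling:       {sum(uniform) / len(uniform):.4f}")
print(f"violence-driven choice: {sum(violent_only) / len(violent_only):.4f}")
```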
Zeyad says 650K is absurd but his guesstimate is half that number. Not sure I understand how 650K can be absurd but 325K fine.
Also, the study claims 650K as the most likely number, not the right one. What matters is the confidence interval. Here's the punchline: there is less than a 2.5 percent chance that the number of excess deaths is below 420K.
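For readers wondering where that 2.5 percent comes from: a 95 percent confidence interval leaves 2.5 percent of the probability in each tail. Here is a tiny sketch of the arithmetic, assuming a symmetric normal approximation (which the actual study's interval need not follow exactly); the 650K point estimate and 420K lower bound are the figures quoted above, and the implied standard error is back-derived purely for illustration:

```python
from statistics import NormalDist

# Under a normal approximation, a 95% CI is (estimate ± 1.96 * SE), so a
# lower bound of ~420K around a point estimate of ~650K implies
# SE ≈ (650,000 - 420,000) / 1.96.  Illustration only.
estimate = 650_000
lower_bound = 420_000
se = (estimate - lower_bound) / 1.96

# Probability mass below the lower bound of a symmetric 95% interval:
print(NormalDist(mu=estimate, sigma=se).cdf(lower_bound))   # ≈ 0.025, i.e. 2.5%
```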
The authors' high success rate in securing death certificates adds confidence that the sampled numbers are right. In other words, their tally of who died and who didn't *among the households they sampled* is probably pretty accurate. The question is: what about the overall estimation?
If you take 3 Americans at random and observe that 1 of them is blue-eyed, are you entitled to think that 100 million Americans have blue eyes? Not really. But if you pick 10,000 Americans and see that a third of them are blue-eyed, then your conclusion would be, plus or minus a few million, highly accurate.
That's because you use random sampling: the best possible way of counting things in a survey.
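A quick sketch of what simple random sampling buys you, reusing the blue-eyes example (the one-third figure and the population size are hypothetical, not real data): 10,000 random draws put the estimate very close to the true one-third share, while 3 draws tell you essentially nothing.

```python
import random

random.seed(1)

TRUE_PROPORTION = 1 / 3          # hypothetical share of blue-eyed people
POPULATION = 300_000_000         # hypothetical population size, for scaling only

def survey(n):
    """Simple random sample of n people; return the estimated proportion."""
    hits = sum(random.random() < TRUE_PROPORTION for _ in range(n))
    return hits / n

for n in (3, 100, 10_000):
    p_hat = survey(n)
    error = abs(p_hat - TRUE_PROPORTION) * POPULATION
    print(f"n = {n:>6}: estimate {p_hat * POPULATION:>13,.0f} blue-eyed people "
          f"(off by {error:,.0f})")
```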
They don't do that: they use "clustered sampling."
Here is the issue (I change the numbers to make them easier). You want to count how many people have blue eyes: pick 10,000 people at random, see how many of them have blue eyes and do the obvious scaling to get your estimate.
Trouble is, to go and knock on the doors of 10,000 random houses might be very dangerous, so you say, OK, I'll pick 100 of them; and then I'll draw a circle around each one of them with a radius just big enough to include 100 people. That gives you a sample of 100*100 = 10,000 people. Bingo.
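Here is what that construction looks like as code, a sketch under the same made-up numbers (a fictitious population with a "has_blue_eyes" flag, neighborhoods of exactly 100 people each): you only visit 100 starting points, yet still end up with 10,000 interviews.

```python
import random

random.seed(2)

# Fictitious population: 100,000 people in 1,000 neighborhoods of 100 each.
population = [
    {"neighborhood": i // 100, "has_blue_eyes": random.random() < 1 / 3}
    for i in range(100_000)
]
neighborhoods = {}
for person in population:
    neighborhoods.setdefault(person["neighborhood"], []).append(person)

# Cluster sampling: pick 100 random starting points (neighborhoods),
# then interview the 100 people around each one -> 100 * 100 = 10,000 people.
chosen = random.sample(sorted(neighborhoods), 100)
sample = [p for nb in chosen for p in neighborhoods[nb]]

estimate = sum(p["has_blue_eyes"] for p in sample) / len(sample)
print(f"interviewed {len(sample)} people at {len(chosen)} sites; "
      f"estimated proportion: {estimate:.3f}")
```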
Ah yes BUT maybe the 100 neighbors are family members who tend to have blue eyes or not to have blue eyes. In other words, these neighbors might give you correlated results. BAD!
Now this is unlikely to be a problem with blue eyes but with people killed by bombs it is likely to be one.
After all if a big bomb falls on your house, your next door neighbor is more likely to die than a random person. Of course the reverse is true. If no bomb falls anywhere near you, your neighbor is more likely to be OK. So maybe the over/undercounting all cancels out. Well, yes, to some extent, but not entirely.
Of course the authors know that and so they decide to double the number of neighbors. Instead of getting 100 people in our circle, we'll get 200. More numbers, more confidence. Makes sense intuitively since, after all, if you were to increase the radius so much that every Iraqi were counted you would get the exact number.
But why a factor of 2? Why double and not triple or quadruple or multiply the sample size by 100? That magic factor is called the design effect: it's by how much you need to increase the sample size to achieve the same confidence as perfect random sampling.
That's where you need to be an epidemiologist. There are many case studies that tell you what the design effect ought to be: usually between 1.2 and 6. On the basis of the scientific literature and their own experience, the authors guess that the design effect is 2. The good news is that there are techniques for checking afterwards whether the guess was right. They do that and find that it's actually 1.6, so they are on solid ground. In fact, they can conclude that they oversampled: they could have done with fewer households to achieve a 95% confidence interval. Or, equivalently, their lower bound of 420K is false with probability quite a bit less than 0.025.
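For the curious, the arithmetic behind the design effect: with clusters of m people each and a within-cluster correlation ρ (the "neighbors resemble each other" effect), the usual rule of thumb is DEFF ≈ 1 + (m − 1)ρ, and the effective sample size is the real sample size divided by DEFF. A sketch using the 2-versus-1.6 figures from the comment; the cluster size and total sample size are illustrative stand-ins, not the study's actual parameters:

```python
import math

def design_effect(cluster_size, icc):
    """Rule-of-thumb design effect for equal-size clusters: 1 + (m - 1) * rho."""
    return 1 + (cluster_size - 1) * icc

# Illustrative stand-ins, not the study's actual parameters.
cluster_size = 40        # individuals interviewed per cluster
n_total = 12_000         # total individuals across all clusters

for label, deff in (("guessed", 2.0), ("observed", 1.6)):
    # Within-cluster correlation implied by this design effect.
    implied_icc = (deff - 1) / (cluster_size - 1)
    assert abs(design_effect(cluster_size, implied_icc) - deff) < 1e-9
    n_effective = n_total / deff
    print(f"{label} DEFF = {deff}: implied correlation ≈ {implied_icc:.3f}, "
          f"effective sample size ≈ {n_effective:,.0f}, "
          f"CI wider than pure random sampling by x{math.sqrt(deff):.2f}")
```

The point of the last column: since the observed design effect (1.6) turned out smaller than the planned one (2), the clustered sample was worth more independent observations than the authors had budgeted for, which is what "they oversampled" means above.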
One could argue that perhaps the rechecking (to assess the validity of the predicted design effect) went wrong (it's also a randomized process) but that is unlikely.
The weakness of their 2004 study was that the low tail was very low (8,000). This time, it's huge (because they use a bigger sample size): 400K is one hell of a scary number.
Bernard,
Thanks for taking the time to make this explanation. After reading what Davies had to say, I really did not agree with that part of Zeyad's analysis either, though I thought the rest of what he had to say was quite good. But after all, Zeyad is a dentist, not a statistician.
In your analysis you said:
"After all if a big bomb falls on your house, your next door neighbor is more likely to die than a random person. Of course the reverse is true. If no bomb falls anywhere near you, your neighbor is more likely to be OK. So maybe the over/undercounting all cancels out. Well, yes, to some extent, but not entirely."
But it may not be as bad as that, since according to Davies the main danger of the cluster method was an underestimate rather than an overestimate. Also, according to the WaPo article, it was gunshot wounds that accounted for 56 percent of the deaths while falling bombs accounted for 31 percent, or that is what I assume when they say deaths from coalition airstrikes.
"Gunshot wounds caused 56 percent of violent deaths, with car bombs and other explosions causing 14 percent, according to the survey results. Of the violent deaths that occurred after the invasion, 31 percent were caused by coalition forces or airstrikes, the respondents said."
So wouldn't the fact that the majority of deaths were from gunshot wounds make the method a bit more accurate than if they were from falling bombs? At least that is what I gather from how you describe it.
You know, we hear a lot about this precision killing, both in this present war and in the Desert Storm war. But some time after Desert Storm ended I recall hearing a good deal about blanket bombing and that these precision weapons were not all that precise, which makes me wonder about the same claims we are hearing in the present.
Posted by: rob payne at October 14, 2006 01:55 AM

"whether you agree or disagree with mr. zeyad's comments, the use of bold type to make your point is manipulation. please let the reader make their own decision without subliminal messaging, as we are all intelligent enough to make our own decisions (or so i hope)"
yeah, don't confuse me with bold print. If you have to confuse me, use profuse footnotes. (but not too many daggers,¥* 'cause they get me all riled up.)
*I didn't use a real dagger, cause I didn't want to get anybody upset, like I get.
Posted by: Jonathan "liminal" Versen at October 14, 2006 02:08 AMplease let the reader make their own decision without subliminal messaging, as we are all intelligent enough to make our own decisions
Well...I don't think it's right to call it subliminal. It's pretty much right there on the surface.
Beyond that, I'd like to believe everyone reads big blocks of text. But based on how I look at the internet(s), I'm not so sure of that. I like to bold stuff to encourage people to read all of it once they've read the part I consider particularly important.
I won't back down!
Posted by: Jonathan Schwarz at October 14, 2006 09:22 AM

Rob: I agree with all the points you made.
In particular, the underreporting phenomenon. If one does not adjust for the design effect properly, then the net result is similar to undersampling (more or less). And it is easy to see intuitively why undersampling leads to underreporting (of rare events).
Consider the extreme case where you pick only 1 random person: if that person is alive, you'll have to conclude that no one died; else, that everyone died. Since the former event will happen over 99.9 percent of the time, you'll be underreporting with 99.9 percent chance.
As you increase the number of samples the distribution ends up looking like a bell curve and the probability of under/overreporting gets to be increasingly the same (for random sampling).
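Bernard's one-person extreme is easy to simulate. A small sketch with an invented death rate (5 percent here, just to keep the simulation fast; far higher than any real rate) shows how often samples of various sizes come in below the true rate, and how that lopsidedness fades as the sample grows:

```python
import random

random.seed(3)

TRUE_DEATH_RATE = 0.05   # invented rare-event rate, for illustration only
TRIALS = 10_000

def sampled_rate(n):
    """Estimated death rate from a simple random sample of n people."""
    return sum(random.random() < TRUE_DEATH_RATE for _ in range(n)) / n

for n in (1, 10, 100, 1_000):
    below = sum(sampled_rate(n) < TRUE_DEATH_RATE for _ in range(TRIALS))
    print(f"n = {n:>5}: the estimate falls below the true rate "
          f"{below / TRIALS:.0%} of the time")
```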
Ok, I'll stop boring everyone here and I'll come back to my earlier point.
In Germany you had a minority of "willing executioners" and a majority who "didn't want to see." 60 years later, I see an eerie parallel.
When I hear Kristof praise Bush for what he's done in Darfur I wonder about the math, too. If you try to prevent a genocide against X people, does that give you licence to kill 2X people elsewhere?
Posted by: Bernard Chazelle at October 14, 2006 12:24 PM

Bernard,
Your posts are not boring at all. I can only speak for myself, but it seems that when you stop wondering about how and why things work the way they do, it is time to be put out to pasture. Thanks again for the informative and interesting posts.
I agree with Jon that bolding of selected text openly, not subliminally, chooses a part of the whole to call attention to, and that under the circumstances of modern times (where an internet connection for information is like drinking water from a fire hose, in a metaphorical sense) it is an aid to communication, not a hindrance to it.