April 04, 2013
A Fluctuating Gift
By: Aaron Datesman
Hey, I like you, and I want to give you a gift! (Well, there's a catch - but I'm like that.) Are you excited? Here it is: a cubic meter of air! You can't live without it. Isn't it great?
You won't mind the catch at all - it's totally a small thing. I'll even tell you the answer. The density of this cubic meter of air is 1.225 kilograms per cubic meter. Easy, right? Next year, I'm going to call you up, and I'm going to ask you to do a simple thing. I'm going to ask you to look carefully at one-tenth of your volume of air, and tell me what its density is.
The following year, I'm going to ask you the density of a one-tenth portion of that 1/10th. And so on, until you get tired of me and give the air back. But you need air, right? That's right! And I need somebody to talk to, so, I think we're good here. Right?
In 2014, the density of any 0.1 cubic meter portion of my air is 1.225 kg/m^3.
In 2015, the density of a 0.01 cubic meter portion of that 0.1 cubic meter of my air is 1.225 kg/m^3.
In 2016, the density of a 0.001 cubic meter portion of that 0.01 cubic meter portion of that 0.1 cubic meter sample is 1.225 kg/m^3.
In 2017, the density of a 0.0001 cubic meter portion of that 0.001 cubic meter portion of that 0.01 cubic meter portion of that 0.1 cubic meter sample is - wait for it! - 1.225 kg/m^3.
I won't further belabor the point. Against a downward extrapolation of volume, the density of a sample of air is constant. Or: is it?
Actually, it's not: density is a statistical quantity. The motion of air molecules is random, and fluctuations in their distribution within a volume do continually occur. (Trivial example from elementary thermodynamics: if you have eight air molecules in a box, the chance that they all wind up in the left half of the box is (1/2)^8, or 1 in 256. The density of the air on the right side in that instant of time is zero.)
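(If you want to check that arithmetic yourself, here is a minimal Python sketch. Only the (1/2)^8 case comes from the example above; the other molecule counts are arbitrary.)

```python
import math

# Probability that all n molecules wind up in the left half of the box:
# each molecule sits in either half independently with probability 1/2.
def p_all_left(n):
    return 0.5 ** n

print(p_all_left(8))   # 0.00390625, i.e. 1 in 256, as in the example above

# For larger n, the typical fractional fluctuation of the count in one
# half is sigma/mean = 1/sqrt(n) for Binomial(n, 1/2).
for n in (8, 100, 10_000, 1_000_000):
    print(f"n = {n:>9,}: delta N / N ~ {1 / math.sqrt(n):.3%}")
```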
After some number of years, I have grown tired of this fruitless annual exercise in air monitoring, so when my friend calls I glance at the new sample only very quickly before replying, "The density is 1.227 kg/m^3". If you look at a small volume for only a short time, you are likely to catch a fluctuation due to the random motion of air molecules. (As an engineer, I convert the small time of observation to a "frequency bandwidth". This simply measures how fast my eyes are.)
The density of a large sample, or of a small sample observed for a long period of time, will be 1.225 kg/m^3. But the choice of large volume or long observation time (equivalent to a small bandwidth) has an averaging effect. Fluctuations in the density are always present if one is able to look closely and quickly.
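A small simulation makes the averaging point concrete. The mean count and the window sizes below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Instantaneous counts of molecules found in a small volume; the mean
# of 100 is an arbitrary illustrative choice.
counts = rng.poisson(lam=100, size=1_000_000)

# Averaging over longer observation windows (equivalently, narrowing
# the frequency bandwidth) suppresses the visible fluctuations.
for window in (1, 10, 100, 10_000):
    trimmed = counts[: len(counts) // window * window]
    averaged = trimmed.reshape(-1, window).mean(axis=1)
    print(f"window = {window:>6}: delta N / N = {averaged.std() / averaged.mean():.4%}")
```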
Does this matter? Well, consider: around 2031, the sample of air I'm asked to examine has a volume of about 1 cubic micrometer. Fluctuations in the density of air on scales a bit smaller than this are the reason the sky is blue.
Proper scale is the reason the shot noise model described in the previous post is correct: it relates the health outcome to the chemical state that exists in biologically relevant volumes (about 1 cubic millimeter) over biologically relevant time scales (about 5 milliseconds, with some caveats). The linear dose model, on the other hand, washes out meaningful fluctuations in the chemical state of biological tissue by improper averaging.
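A back-of-the-envelope sketch of why those scales matter: the "graininess" of a dose in a 1 cubic millimeter, 5 millisecond window is set by the expected number of ionizing events in that window. Every event rate below is made up; only the scaling is the point.

```python
import math

# Illustrative only: the event rates are invented, not measurements.
volume_mm3, window_s = 1.0, 5e-3

for rate in (1e-2, 1.0, 1e2, 1e4):          # events per mm^3 per second
    mean = rate * volume_mm3 * window_s      # expected events per window
    frac = 1.0 / math.sqrt(mean)             # Poisson sigma/mean
    print(f"rate {rate:8.0e} /mm^3/s -> {mean:10.4g} events/window, "
          f"delta/mean ~ {frac:8.3g}")
```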
Very often THE FLUCTUATION IS THE PHYSICS. (I believe this because it's especially true in the field of superconductivity, where I have spent most of my scientific career.) The analogy with the color of the sky is exact.
I comprehend the resistance to the idea I have presented. Nevertheless, it is quite correct.
— Aaron Datesman
If you lowered the density of the air, the fractional size of the density fluctuations, delta N over N (N being number per volume), would get bigger. But the sky would gradually turn black. If you lower the radiation dose, maybe the harmful effects go away...
That's off the top of my head. We could probably get into some interesting discussions of atmospheric optics--it might even be good for me, having to look stuff up. But analogies are dangerous.
Anyway, Aaron, if I were going to try to model this stuff, I'd probably calculate the integral with respect to time of the density of ions and free radicals for different dose rates, the idea being that the damage done might be proportional to the density of those radicals and ions times the time they are around. And I'm guessing I'd get a linear relationship between the integral and the dose rate. But I don't know for sure - you'd have some sort of decay factor built into the integral representing the return to equilibrium (meaning a term representing the chemical processes that eliminate all the excess ions and radicals over your 5 ms). The much larger fractional fluctuations in dose at low dose rates just mean that most of the time there aren't any free radicals around (caused by radiation, that is), and then for your 5 ms there are, and then things go back to normal for long stretches, while at high dose rates there are always radicals present and things never get back to normal until the radiation stops.
You'd need (or I'd need) to ask a chemist if the proposed model makes sense.
Posted by: Donald Johnson at April 5, 2013 09:31 AM
Also, I'd be stunned if no one in the mainstream has done models of that sort (not necessarily what I proposed, but whatever a real biochemist with a knack for quantitative modeling would propose).
Posted by: Donald Johnson at April 5, 2013 09:34 AM
@Donald Johnson, the Schottky formula for shot noise does exactly what you describe. That's why it gives the right answer: I mean, look at the graph! Do you not think that's rather astonishing agreement between a simple theory and epidemiological data?
I can teach you the mathematics if you like. The only reason the post doesn't include them is that my wife says these posts are too technical, and no one can understand them.
Analogies are dangerous. But they are also very, very powerful. The rule should be "Use, but audit". Which is what I have done.
Posted by: Aaron Datesman at April 5, 2013 10:06 AM
I'm taking the epidemiological data with a grain of salt, since it's controversial. It ought to be easy to show with laboratory animals, where we'd know for sure what the doses are. (Not that I'm crazy about animal testing, but that's a different subject. But surely there have been massive studies of radiation and lab mice or other animals.)
But yes, I'd like to see exactly what the math looks like. I once studied Schottky noise a little bit, but I'm extremely rusty and the one book I might have that might include it (a Dover collection of papers by Chandrasekhar and others) I can't find. Besides, I'm lazy, so if you can type in the integral and what the results are that'd be great. (Or whatever.)
Posted by: Donald Johnson at April 5, 2013 10:18 AM
@Donald Johnson, I'll probably post again next week. I'll try to put the math up then.
By the way, about your reasoning regarding the color of the sky, you are correct that the sky will eventually appear black as the average density of air goes to zero. However, the probability of a scattering event never goes all the way to zero.
It's correct to be skeptical of all epidemiological data. However, I'm reasonably willing to believe a result to which I can fit a plausible theory. I believe this is what open-minded inquiry is about.
You are correct that there is a hellish amount of experimental data (my favorite for horribleness: plutonium experiments on beagles). I'm not sure what you might find that would be relevant, for many reasons. For instance, BEIR VII states that large-scale epidemiological data would be the best possible foundation for assessment. Then they say that there really isn't any.
The most important reason I doubt there is relevant data is this, however: most of the TMI exposure was from radioactive noble gases. The radiation biology community (as Hatch states in her paper) pretty much just assumes that these substances are totally harmless. So I'd be surprised to learn that somebody has actually checked, by putting rats in a closed box with a bottle of radioxenon or whatever.
Posted by: Aaron Datesman at April 5, 2013 10:55 AM
Aaron, I think your model for what causes the harm must be different from my proposal. Isn't integration a smoothing process? So if the harm done is proportional to the density of ions and radicals integrated over time, the fluctuations would smooth out. The integral of the ion and free radical density with respect to time would just be the average density of the radicals and ions multiplied by the time duration of the radiation exposure. So the integral would be directly proportional to the dose rate. Linear. You'd have to have a different model for what causes the damage to get a different result.
I did a super crude model of this, for dose rates much less than 1 over 5 ms (I let 1/5 ms be called r) and for dose rates much greater than r. I called the dose rate (the rate at which particles arrive) N. The density of ions initially created by one particle is D. The total duration of the radiation exposure is T.
For both the really low and really high dose rates, the integral of ion density over time would just be NTD/r.
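(A quick numerical check of that claim, simulating the crude model with arbitrary stand-in values, comes out the same way:)

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in values only, chosen to check the N*T*D/r claim numerically:
N, D, T = 3.0, 1.0, 50.0    # arrival rate, density per particle, duration
r = 200.0                   # decay rate, i.e. 1/(5 ms) in per-second units
dt = 1e-4                   # time step, much shorter than 1/r

density, integral = 0.0, 0.0
for _ in range(int(T / dt)):
    density += D * rng.poisson(N * dt)   # new particle arrivals this step
    density *= np.exp(-r * dt)           # exponential return to equilibrium
    integral += density * dt

print(f"simulated integral: {integral:.3f}")
print(f"N*T*D/r           : {N * T * D / r:.3f}")   # 0.750 for these values
```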
Posted by: Donald Johnson at April 5, 2013 11:10 AM
We crossposted. I'm interested to see your model - if nothing else, I'll enjoy relearning the math.
Animal testing if it's not absolutely necessary to save human life is horrific. (I suppose I might be starting an argument with people who think it's always wrong. In fact, I'd just shut up and listen and not argue.) It's kind of sickening what we did to animals in the nuclear tests in Nevada.
On my super crude model, just to forestall a possible objection, yes, it does use averages and you're at war with averages in this context. But the point is that for very low dose rates you'd have small periods of time (presumably the 5 ms periods) when there would be very high densities of ions present, and that's when the damage occurred. The rest of the time there'd be almost nothing happening. With very high dose rates there would be this almost constant level of high densities of ions the entire time. So I think it's logical. Whether nature actually works that way is a separate question.
Posted by: Donald Johnson at April 5, 2013 11:19 AM
"It's kind of sickening what we did to animals in the nuclear tests in Nevada."
And yes, it's even worse what the government did to soldiers and people living downwind (and also in the Pacific).
Posted by: Donald Johnson at April 5, 2013 11:22 AM
@Donald Johnson, when I post the math you'll see. Thank you for being so open-minded. I see I've conveyed my idea effectively to at least one person.
Integrating over TIME is a smoothing process that leads to the linear dose model, yes. But the proper operation is to integrate the power spectrum over FREQUENCY, which leads in a different direction. The "delta-f" term arises from this source.
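For reference, the textbook Schottky expression in question says the shot-noise power spectrum is flat, S = 2qI, so the mean-square fluctuation seen in a bandwidth delta-f is 2*q*I*delta-f. A few illustrative numbers (the current is arbitrary) show how a wider bandwidth - faster eyes - means a larger fractional fluctuation:

```python
import math

q = 1.602e-19      # elementary charge, C
I = 1e-9           # illustrative DC current, A

for delta_f in (1.0, 1e3, 1e6):      # bandwidth, Hz ("how fast my eyes are")
    # <dI^2> = 2*q*I*delta_f, so the rms fluctuation is its square root.
    rms = math.sqrt(2 * q * I * delta_f)
    print(f"delta_f = {delta_f:7.0e} Hz: rms = {rms:.3e} A "
          f"({rms / I:.3%} of the mean current)")
```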
Posted by: Aaron Datesman at April 5, 2013 11:31 AM
"Animal testing if it's not absolutely necessary to save human life is horrific." - Donald Johnson
I have a possible additional legitimate reason. Right before a medication goes to human trials, I find it acceptable to do animal trials with the medication just to see if there are unforeseen effects in animals before the testing on humans begins.
"I see I've conveyed my idea effectively to at least one person." - Aaron Datesman
More than one. Not all of us who are skeptical of your reasoning find it uncompelling. There are also always people who read but don't post comments. I do thank you for posting your arguments. I do find them valuable and don't reject them a priori. Also, because of your posts here, my thinking on the nuclear industry has migrated, though likely not as much as you would like it to have done.
Posted by: Benjamin Arthur Schwab at April 5, 2013 01:01 PM
Aaron, to be honest the fit between the Wing data in Table 3 and your curve is not all that great. (Referring to your previous post.) The region where the fit is good is in the region of Wing data points 6-9. But in that region the Wing data look very linear, and indeed that’s the region where your own equation “collapses” to Linear Non Threshold, the model that you are attacking. So your curve is a good approximation to the Wing data only where it is linear. In fact, the Wing data are quite linear over Wing data points 5-9, and arguably linear over Wing data points 3-9 if you guess that the glitch downward in Wing datapoint 4 is a statistical fluke.
The evidence for the non-linearity of the Wing Data hinges entirely on the region of Wing data points 1-3, where there is allegedly a spectacular rise in radiation-associated cancer risk that is inconsistent with the linear curve described by the majority of the data points, i.e. Wing points 3 and 5-9. That initial portion of the curve is the only one that shows strong nonlinearity. But even in that portion of the Wing data your curve is not a particularly good fit; the compressed scale of your graph obscures this fact to some extent.
In any case, Wing data points 1-3 are not especially firm ground for a conclusion of nonlinearity. The crucial Data point 1, for example, the zero-dose group, has a sample size of just 6 cancer cases--drastically smaller than the rest of the dose groups. That makes it statistically very unreliable—it may just be a fluke that the cancer count is so low. Statistical analyses should always pay deep respect to the possibility that the effects they see are illusions caused by random statistical flukes. They do that by means of a “sensitivity test”: if removing one or two data points from a statistical analysis radically alters its conclusions, then the possibility that the conclusions are a mere artifact of flukes has to be strongly considered.
Also, inappropriately for a post that makes such ambitious claims, you have cherry-picked one small subset of Wing’s data to demonstrate your theory, where the totality of Wing’s data provides much less support for your thesis. Wing’s largest and most reliable data aggregate in Table 3 is the top one of “all cancer;” because that is a much larger sample size, it is much less prone to random statistical flukes than is Wing’s lung cancer data. And that all-cancer data do not show anywhere near the non-linearity of the lung cancer data. If we entertain the possibility that the zero-dose group, with its small sample size, is an unreliable random fluke, then the whole of the all-cancer data set, Wing’s most reliable, strongly supports a linear dose-response curve.
Remove points 1 and 2 from Wing’s lung cancer data, or just point 1 from his much more reliable all-cancer aggregate, and the alleged fit between the shot-noise curve and the Wing study evaporates. So your shot-noise curve doesn’t stand up very well to a basic sensitivity test.
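Here is a sketch of what such a sensitivity test looks like in practice, on invented numbers rather than Wing's:

```python
import numpy as np

# Invented numbers, NOT Wing's: drop each point in turn, refit a line,
# and watch how much the fitted slope depends on any single point.
dose = np.array([0.0, 0.005, 5.2, 28.1, 60.0, 100.0])
risk = np.array([-0.5, 0.3, 0.7, 0.9, 1.2, 1.6])

def slope(x, y):
    # least-squares fit of y = m*x + b; return the slope m
    A = np.vstack([x, np.ones_like(x)]).T
    return np.linalg.lstsq(A, y, rcond=None)[0][0]

print(f"all points kept: slope = {slope(dose, risk):.4f}")
for i in range(len(dose)):
    keep = np.arange(len(dose)) != i
    print(f"point {i} dropped: slope = {slope(dose[keep], risk[keep]):.4f}")
```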
And Wing’s study itself is an outlier in the TMI literature. It has problems, including its short time-frame, and the fact that it does not find out whether the cancer patients it counts from 1981-1985 were even living in the study area during the 1979 spew. In all likelihood, some of the cancer cases Wing counts among the higher radiation groups are in people who moved to the TMI area after the spew and thus could not have been exposed to TMI radiation. Other papers do ascertain residency during the TMI spew and cover longer time-frames so that discerned cancer effects are more reliable. For the most part these methodologically more reliable studies show no statistically significant association between TMI radiation exposures and cancer risks, and thus further undermine your claim of large supra-linear risks at low doses. (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1241392/pdf/ehp0111-000341.pdf and http://www.ncbi.nlm.nih.gov/pubmed/21855866)
So your thesis, by which you would overthrow the entirety of radiation epidemiology and consign its thousands of practitioners and their expertise and their peer review and their tenured professorships at top universities to the realm of “insanely stupid idiocy,” depends on one or two cherry-picked and possibly flukey data points from a study that is at odds with most of the literature.
Never mind the murky theory of shot-noise; just on the empirical data, you haven’t made a compelling case.
Ah, @Will Boisvert. Let me tell you where I am sitting right now: at NASA Goddard, in the Detector Systems Branch. Every person on this hallway can certainly write down the Schottky expression for shot noise from memory.
The theory is not "murky". It's that you don't understand it.
Look, I would respond to what you post here if you would just tell me plainly that you've taken and passed an undergraduate class in statistical mechanics. Or even read a textbook on the topic?
I can mail you my copy of Reif. If you do the problems, I'll mark them and send them back to you.
If not, then bah. You're not qualified to hold the opinions you have as stridently as you do.
Posted by: Aaron Datesman at April 5, 2013 01:54 PM
Aaron, continuing my comments in your previous post - the blue data points in your previous post are supposed to be actual empirical epidemiological data from Wing's paper, correct? i.e. they're not based on your attempt to reanalyze data from Wing's paper under a certain theoretical framework? If so, the first few data points in your graph show roughly a 25%, 70%, and 55% "increased lung cancer risk" even at the lowest measured dosages. As I said before, I don't believe that Wing's paper has any such empirical data in it. He shows (to the extent that you believe his analysis) that TMI resulted in increased cancer overall, but not that even those with the lowest measured dosages had increased cancer risk. But perhaps I'm misunderstanding. Can you please point to the part of Wing's paper that you believe produces those three data points?
If the blue curve is actual empirical data from Wing, then to the degree that that data is reliable, Wing's paper clearly shows the linear model is incorrect. You don't need to do any theoretical analysis to show that.
Posted by: Winter Wallaby at April 5, 2013 02:00 PM
What about the overall average age of the population affected? (an old pig getting nuked might not fare as well as a young pig getting nuked in the matter of DNA damage repair)
Posted by: Mike Meyer at April 5, 2013 02:31 PM
@ Winter Wallaby,
Yes, it is interesting how Aaron transposes Wing's data to his own graph.
Wing’s data point two, for the radiation dose group 0-1 (0.005 avg) shows an observed-to-expected ratio of cancer cases of 0.73, in other words, a 27 percent decrease in cancer risk below the expected baseline. Aaron has graphed that data point as showing a 25 % increase in cancer risk! (I count 30 % by eyeball.) Wing’s data point for the 1-10 (avg 5.2) dose group is an OER of 1.12, thus a 12 % increased cancer count over the expected baseline; Aaron has graphed it as a 70 % increased cancer risk. Next point, 10-50 (avg. 28.1) Wing gives an OER of 1.01, a 1% increased cancer count over the expected baseline; Aaron has graphed it as 55% increased cancer risk.
p.s. I use Schottky diodes to save my transistors in my bemf hobby. If NOTHING else, it saves on expense and aggravation.
Posted by: Mike Meyer at April 5, 2013 02:41 PM
@ Aaron, referring to comments in the previous thread:
--“The linear dose model tells me that I got alcohol poisoning because I drank five bottles of vodka. The truth is that I got alcohol poisoning because I drank five bottles of alcohol all at once. The first example relates to energy, the second to power.”
Aaron, what you are talking about here is the concept of “dose rate”—the notion that a dose of radiation absorbed all at once will be more damaging than the same dose spread out over several days, months or years. Radiation epidemiologists do indeed pay very close attention to that and factor it into their dose models. That’s why consensus LNT models like the BEIR VII model include what’s called a DDREF, a dose and dose-rate effectiveness factor.
But you have misunderstood the implications of dose-rate for the issue of whether Linear Non-Threshold is an accurate model at low doses. Factoring in dose rate actually implies that LNT overestimates low-dose risks. That’s because LNT is based on data from the atomic bombings of Japan, where the vast majority of the radiation was absorbed in a brief flash of gamma and neutron radiation. (http://dels-old.nas.edu/dels/rpt_briefs/beir_vii_final.pdf) By contrast, low doses, especially those incurred during nuclear accidents, are typically absorbed over a much longer time period—days in the case of TMI, years in the case of Chernobyl and Fukushima.
So LNT assumes that low radiation doses spread over time are equivalent to the aggregate of those doses if they had been incurred all at once. In other words, it assumes that the radiation dose is taken in one huge swig of vodka, not many tiny sips spread out over months and years. But by your own vodka reasoning above, such a model must overestimate the risks of a given dose incurred at a low dose rate, not underestimate them.
That’s why at low doses LNT models factor in the DDREF, which is a departure from strict linearity. The DDREF at low doses is applied by dividing the cancer risk calculated from strict LNT by 1.5 or 2 depending on the model. But that means that the risk at low dose rates is judged to be smaller than what is calculated at high dose rates by strictly linear models.
Empirical findings clearly bear out the idea that lower dose rates are less harmful than higher dose rates for the same total dose. In radiation therapy for cancer, for example, the total radiation dose used is often huge and would kill the patient if given all at once. But when spread over the course of many weeks, the radiation doses are not lethal and indeed add only a modest extra risk of cancer, far outweighed by their effect in killing already cancerous cells.
Aaron, the assumption of strict linearity in dose-response models is indeed problematic and scientists wrestle with those problems. But you get the implications of nonlinearity completely backwards. Virtually all proposals of non-linearity, including your own, logically imply that cancer risks from low-dose (and low dose-rate) radiation are smaller, not larger, than would be predicted by a strictly linear dose response extrapolated from high doses. You are barking up the wrong tree here.
--Aaron, if genetic damage caused by radiation exhibits a linear dose response, as you write, doesn’t that imply that cancer risk, which is an outcome of carcinogenic genetic mutation, also has a linear dose response? Perhaps not if there is a threshold for carcinogenesis, where a certain number of mutations must accumulate before a cell turns cancerous. But again, that implies a quadratic shape to the dose-response curve for cancer, which would therefore be sub-linear at low doses.
@WW, the blue line is empirical data from Wing's paper. It's one of the two rows labeled "c" under Lung Cancer in the chart in the post - I believe the one for the full time period. I adjusted the values to place zero dose at zero effect, as the post implies and as I explained in a separate comment.
You are correct that the empirical data on its own is contrary to the linear dose model. However, a scientific idea is strong only if you have data AND a theory which describes the data. I have provided both.
Posted by: Aaron Datesman at April 5, 2013 03:13 PM
Mr. Boisvert:
Linear is not the same as affine. What you are talking about is an affine model (y = ax + b) and not a linear model (y = ax, passing through the origin). The blue line in Mr. Datesman's graph fits horribly to a linear model, but the data points that you suggest do appear to fit well to an affine model.
There might be confusion as to what the baseline is. The baseline is the mean of all the groups considered in the study and not the rate from people who received no dose from the Three Mile Island incident, or even the rate of people in the area before the Three Mile Island incident, people who clearly moved to the area after the radiation stopped being released from the Three Mile Island plant, or people who live in a different but similar location except for the existence of a nuclear power plant. The last three would be the best comparisons but I don't know if that data exists. The data presented in Dr. Wing's third table cannot be used to establish risk factors above background but may be able to be used to establish risk factors from relative dose amounts. There might be other reasons that the data and conclusions might not be appropriate or compelling (I'm not expert enough to know) but arguing against a straw man is unproductive. That said, the small sample size is a reason to view the data and the fit with skepticism, but you can do your own analysis and present the results. You can do a linear fit, an affine fit, and a Schottky fit and present measures of how well each fits. Also, the data from one study is hardly convincing. There are also reasons for looking at the lung cancer rates individually which Mr. Datesman pointed out.
Posted by: Benjamin Arthur Schwab at April 5, 2013 04:07 PM
Aaron
There’s another issue with your transposition of Wing’s table 3, and that is your assumption that all the observed-to-expected ratios were calculated from a common expected baseline incidence in the post-accident period.
You might be right about that, but there are good indications in Wing’s paper that you also might be wrong. (Wing is ambiguous.) I think Wing is not calculating OERs with respect to post-accident cancer levels throughout the area—even though he says at one point that he is doing that!—but instead with respect to pre-accident cancer rates when the TMI dose was zero everywhere, after adjusting them for changes in population variables. Wing is not comparing cancer risks spatially across different dose regions, but temporally between the pre-spew period of zero radiation to post-spew periods with different radiation levels.
If there is no common baseline, but separate expected baselines for each dose group instead, then your implicit comparison between dose groups—by setting the zero dose group OER at zero cancer risk—is strictly invalid. Or, the baseline may be not the post-accident overall cancer incidence but the pre-accident incidence adjusted with new risk parameters to account for changing population variables; that would also make the normalization of zero cancer risk to zero dose in your graph invalid.
Here’s my reasoning.
In your footnote you write “For some reason, Wing chose to compare the cancer incidence in individual dose groups to the average incidence among the exposed population.” That may be correct; it seems to jibe with the following line on p. 5: “The null value of 1.0 indicates that the study tracts in a particular dose group have the average postaccident cancer incidence level for the entire 10-mile area.”
But Wing’s materials and methods section suggests that he might have used a very different method for calculating expected baseline cases—that he estimated nine separate expected cancer baselines for each of the nine dose groups, based on the pre-accident cancer rates for just the study tracts in that dose group. Here’s the relevant passage from p. 3:
“Observed cases and ratios of observed to expected cases for each dose category are also presented….Expected counts for the 1981-1985 and 1984-1985 post-accident periods were calculated from the regression models by applying the coefficients for all variables in the model except the dose-time period interaction term to the age and sex specific person-year distribution of study tracts in each dose group during each post-accident period. Thus, the expected count represents the number of cancers that would have occurred after the accident if the study tracts in each dose group had the estimated incidence rates based on that dose group’s pre-accident incidence level and considering the overall age and sex specific changes in cancer after the accident. For Model 2, the expected count is also based on the socio-economic level of the study tracts in each dose group.”
Here’s what that sounds like to me. Wing starts with the cancer rates for the study tracts in each dose group during the years 1976-9, before the spew and with zero TMI radiation. Then he adjusts them for demographic and economic changes that occurred in those dose-group study tracts after the spew, in 1981-1985. For example, if the age distribution, sex ratios or incomes of the people living in those study tracts have changed in a way that might affect cancer rates, he makes little adjustments to the pre-spew cancer rate to reflect that, using one of two different regression models. He takes that adjusted cancer rate from the pre-spew period for those particular study tracts and uses it as the expected baseline rate for the post-spew period, again just for the dose group that contains those particular study tracts. Then he moves on to the next dose group and calculates a whole new expected baseline rate. He uses those 9 separate baseline rates to calculate OERs for each of the 9 dose groups.
That seems to contradict what Wing wrote on p. 5, which indicates he is simply dividing observed incidence rates in each dose group by the observed post-accident incidence rate for the whole 10-mile study area.
But Wing also supplies in table 3 different OER estimates based on different regression models for each data point, incorporating different demographic and economic variables. That wouldn’t make any sense if he were simply calculating a ratio of two empirically observed post-accident cancer rates, the dose-group rate divided by the overall 10-mile area rate. Therefore, the denominators of the OERs must be estimates based on pre-accident cancer rates, adjusted according to different models of changed population parameters between pre- and post-accident periods, as he outlines on p. 3.
Finally, it’s possible that Wing is using a single common expected baseline incidence denominator for all the nine dose groups. That is, he just takes the pre-spew cancer rate for the whole 10-mile area, updates it with his regression model for population changes in the whole area post-spew, and uses that one baseline expected incidence for all the nine dose groups. (Alternatively he might do the calculation nine times for each dose group, then average.)
I don’t know how to resolve the apparent contradiction in Wing’s paper, except by asking him. My own guess is that the line on p. 5 is just a mistake, because he clearly lays out elaborate procedures to calculate post-accident baselines from pre-accident cancer rates. If I am right then it means each OER refers not to the post-accident zero-dose group as the zero cancer-risk baseline, but to the pre-accident cancer incidence rate.
If that’s true, Aaron, then it is invalid to place the zero-dose group at zero cancer risk and assume every OER elevated above that is an increase in cancer risk. What Wing’s data points would indicate is whether the cancer risk for that dose range is elevated, the same (an OER of 1.00) or less than the cancer risk before the spew in the state of zero radiation exposure that obtained over the whole study area. My suspicion is that there are nine baselines for the nine dose groups. If so then the zero dose group with an OER of 0.45 could even have a higher absolute cancer risk than higher dose groups with higher OERs because the expected baseline is different for each one and hence incommensurable.
So when an OER says 0.73 for the 0-1 dose group, it would mean those study tracts had a 27 % lower cancer rate than they did in the pre-spew state of zero TMI radiation dose. The 1.01 OER means that dose group had a 1 % elevated risk over the cancer rate in the same study tracts before the spew, when the radiation dose was zero. And so on. If on the other hand Wing is using a single area-wide expected baseline rate, that means that all the OERs are compared to a single area-wide cancer rate, but again derived from the pre-spew cancer rate, when the TMI dose everywhere was zero. (What does it mean that the zero dose group has an OER of 0.45 compared with the pre-spew, zero-radiation cancer rate, instead of 1.00? A statistical fluke! Remember statistical flukes explain almost all of statistics.)
When interpreted in this way, which I think is probably correct, Wing’s data make more sense than the way you have graphed them. The OERs for each dose group would reflect the true increased or decreased cancer risk for that dose group, relative to the pre-spew state of zero TMI radiation, with all the extraneous changes in population variables factored out, leaving just the radiation component. You could just read off the excess cancer risk for each radiation dose range from the OER number itself, without reference to OERs for other dose groups. That’s nine independent estimates of dose-linked cancer risks, which I think is why he would have used it. And the radiation cancer risks you see would run from negative at low doses—consistent with hormesis or, more plausibly, statistical flukes--to 40-50 % elevated at high dose ranges, which is more reasonable in light of the literature. It would look more linear with risks dwindling nicely down to zero except for flukes, and there would be no need to transpose the data to a different register that presents cancer risks as vastly higher than Wing’s actual data do.
So to me this all raises another question mark about the conclusions you have drawn from Wing’s paper.
@ Benjamin Schwab,
Having read the Wing paper with a fine-toothed comb, I believe that the baseline for his Table 3 OERs actually is the pre-accident cancer rates when there was zero TMI radiation over the whole study area. That would render invalid Aaron's graphic rendering of cancer risks with respect to a zero risk origin at zero radiation dose, and make the shot-curve fit invalid too. I could be wrong; read my comment above and see what you think.
Posted by: Will Boisvert at April 5, 2013 05:58 PM
Aaron, OK I see how you're transforming to "increased lung cancer risk." You're using the column with dosage = 0 as the baseline, assuming that's how much cancer every population would have got if TMI had never occurred, and so subtracting that off. That doesn't work, as the error bars are too huge at the dosage = 0 point - they're based on 6 observed cases of cancer in 1981-1985 (4 in 1984-1985). If, for example, one more person at a dosage of 0 had gotten cancer in 1985, the 1984-1985 O/E for dosage of 0 would have been (5/4) * 0.66 = 0.83 instead of 0.66, and you would have concluded from that data row that dosages of 0.005 decreased cancer risk (0.72 < 0.83).
Your blue curve gets shifted up or down based on this high error data point. As you've drawn it, it looks concave down, which appears to invalidate the linear model. But if you shift the entire curve down (within the large error bars of the dosage = 0 data point), it becomes a straight line through the origin, which would then appear to validate the linear model. The large error bar at dosage = 0 isn't a problem if you/Wing just want to show that the curve trends up, and that more radiation is worse for you. But it requires a lot more sensitivity to show whether the curve is concave up, concave down, or linear through the origin. You don't have that sort of sensitivity here.
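To make that concrete with invented numbers (emphatically not Wing's): shift the same points down by an amount comparable to the baseline's error bar, and the verdict flips from favoring a concave sqrt-like curve to favoring a straight line through the origin.

```python
import numpy as np

# Invented data. One-parameter fits y = a*f(dose), compared before and
# after shifting the whole curve down by 0.25 - a stand-in for the
# error bar on the zero-dose baseline.
dose = np.array([0.005, 5.2, 28.1, 60.0, 100.0])
risk = np.array([0.25, 0.33, 0.67, 1.15, 1.75])

def rss(f, y):
    m = f(dose)
    a = (m @ y) / (m @ m)          # least-squares scale for y ~ a*f(dose)
    return ((y - a * m) ** 2).sum()

for shift in (0.0, -0.25):
    y = risk + shift
    print(f"shift {shift:+.2f}: sqrt-fit RSS = {rss(np.sqrt, y):.4f}, "
          f"linear-fit RSS = {rss(lambda d: d, y):.4f}")
```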
(Will B.: the baseline for O/E in Wing's paper is the post-accident cancer rates after TMI.)
Posted by: Winter Wallaby at April 5, 2013 06:57 PM
Well, there were big movements back then to quit smoking and to take the lead out of gasoline. TMI has leaked radioactives since the day it was built. To use just pre-event data that does not include pre-construction data must, by its very nature, skew results.
Posted by: Mike Meyer at April 5, 2013 07:07 PM
Winter Wallaby, I know what I am saying seems crazy, but the Wing baselines are not the post-accident cancer rates; they are the pre-accident cancer rates. Wing is comparing post-accident cancer rates to what they would be if pre-accident cancer rates still held, adjusted for changes in population parameters that happened after the spew.
I know it looks like Wing is using post-accident rates as a baseline--especially because Wing seems to say that on p. 5!
But if you read the paper carefully, especially the materials and methods section, and consider carefully everything it says, and really understand it instead of skimming it because it is boring, then you will see it: Wing is calculating expected baseline cancer rates from the pre-accident rates, not the post-accident rates. The line on p. 5 where it mentions post-accident rates is a mistake.
It's an easy error to make because Wing's paper is obscure and contradictory; that's why everyone has misunderstood how he has calculated the all-important baselines.
Once you see how Wing actually calculated his baselines, you will see that Aaron's graph is quite wrong. The post-accident zero dose OER is not a valid reference point. Wing's data do not fit the shot-noise curve, and they certainly do not show the outlandishly high cancer risks Aaron has graphed.
Read my comment carefully--sorry it is so long--and then read Wing's paper carefully. I guarantee you will see that I am right.
Posted by: Will Boisvert at April 5, 2013 07:51 PM
Winter Wallaby, your point about the uncertainty and huge error bars for Aaron's zero-dose data point is spot-on.
Posted by: Will Boisvert at April 5, 2013 07:56 PM
I for one end up feeling there's a SERIOUS health issue with today's modern nuclear reactors&their radioactive emissions. But hey, know what America was before The White Man came? Pristine.
Posted by: Mike Meyer at April 6, 2013 12:57 AM
@Will- It seems you have no interest in telling us anything about yourself (note that Aaron has asked several times as well), so perhaps you'll answer this instead: What is your ultimate goal here?
Quite clearly you think Aaron is wrong, but why the need to keep rehashing it thread after thread? You're not going to change his mind on this, so why bother?
Posted by: Aric at April 6, 2013 10:05 AM
Mr. Boisvert
Granted, what Dr. Wing et al. do is weird. Since the group cannot determine what the cancer rates would be if the accident at Three Mile Island never happened (how could they), they apply the average dose factor and fit that to be 1. The rest of the dose observed-to-expected ratios are relative to this 1 value.
Also, I checked. A square root fit (as I believe Mr. Datesman uses) fits the data for lung cancer much better than an affine fit for the data from 1981-1985, though the sample sizes are too small (from my amateur understanding) to rule out the linear model. The sample sizes for 1984-1985 appear to be too small to differentiate the fitness of the different fits that I tried.
Sometime in the next few days I will write up the amateur analysis that I did and post it on-line.
Posted by: Benjamin Arthur Schwab at April 6, 2013 11:09 AM
@Winter Wallaby, if you think shifting a curve downward by some amount changes its shape, you know less about fitting data to a theory than Will Boisvert does - and that's a low bar.
I don't generally read the Boisvert comments, for reasons I've explained. However, I did read the long comment. It qualifies as "spew" in my opinion. Wing is correct on page 3 but wrong on page 5? Come on. @Will Boisvert, you are welcome to write to Steve Wing to ask him why he wrote something misleading in his paper. If he writes back to you, please forward me his response off-line. You have my e-mail address.
On the broader issue, you can read Wing's response to Mangano's letter here:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1470191/?page=1
He says quite plainly that Mangano is correct, the data would fit a log relationship better than a linear relationship. Now, the actual relationship is sqrt(N+N^2), but this is very close to log(N) over the domain examined where the shot noise dominates.
The data is all there; anybody can open up Excel and play with it. There is no need to take my word for it.
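For instance, a few lines of Python (in place of Excel) will compare sqrt(N+N^2) against a scaled log(N) over whatever domain you like; the endpoints below are arbitrary:

```python
import numpy as np

# Compare sqrt(N + N^2) to a log curve that has been scaled and offset
# by least squares, over an arbitrary low-dose domain. Change the
# endpoints to whatever range you consider relevant.
N = np.linspace(0.05, 5.0, 12)
shot = np.sqrt(N + N**2)

A = np.vstack([np.log(N), np.ones_like(N)]).T
coef, *_ = np.linalg.lstsq(A, shot, rcond=None)
fitted = A @ coef

for n, s, f in zip(N, shot, fitted):
    print(f"N = {n:5.2f}: sqrt(N+N^2) = {s:6.3f}, scaled log = {f:6.3f}")
```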
Posted by: Aaron Datesman at April 6, 2013 11:18 AM
"Quite clearly you think Aaron is wrong, but why the need to keep rehashing it thread after thread?"
That's what tends to happen when people disagree. One side says X, the other says not X, the first says X, the other says not X. I've seen it a lot.
Posted by: Donald Johnson at April 6, 2013 11:29 AM
TODAY, Fukushima is leaking radioactive water again. Plus TEPCO has lost power to the pumps cooling spent rods a couple of times in the last three weeks.
Perfect laboratory to study those health issues. The best part IS, most likely, none of those victims are American until the wind blows&water flows.
SERIOUS health issues.
@Donald- Quite clearly. But in this case it's clear that one side has a much better understanding of the material (as well as a background in it), so I'm curious why Will keeps tilting at this particular windmill. I mean, it's one thing to disagree, but to continue to do so after being shown you don't understand the fundamental science behind the topic? (read: Shot Noise and why it applies). I don't get that, unless he has a paid agenda or simply likes hearing himself talk.
Posted by: Aric at April 6, 2013 11:52 AM
I'll have The Secret Fukushima Soy Sauce on my Kobe Veal and for dessert, a giant Hershey Bar (laced with Strontium 90, the perfect calcium substitute).(&I want Pacific Coast sea salt on my Big Macs from now on just to make sure I eat healthy)
Posted by: Mike Meyer at April 6, 2013 12:11 PM
Aric, Will is presenting the mainstream view that Aaron opposes. That's a good thing, IMO. His credentials are irrelevant. It'd be nice if some BEIR authors showed up, but it's unlikely I suppose.
Posted by: Donald Johnson at April 6, 2013 01:32 PMDonald Johnson: AGREED.
Posted by: Mike Meyer at April 6, 2013 02:37 PM
I don't know that it is a good thing, Donald, as he's arguing religion in a scientific debate due to not being able to do the science himself. Which makes a lot of sense... Being a writer he read a lot about the subject, formed an opinion and is now parroting back a mishmash of arguments that support that view, and when confronted with something that contradicts that view he falls back on dogma rather than being open like you were. And frankly the only credential I'm looking for is something along the lines of an undergrad degree in science or engineering. That's an awfully low bar, and touched on in a comic someone linked earlier about a Philosophy degree not being enough to overturn General Relativity.
Posted by: Aric at April 6, 2013 02:56 PM
Let me rephrase the above a bit... By "which makes a lot of sense..." I mean Will and his position now make sense to me. I also agree that having someone argue the mainstream view is worthwhile, but with the caveat that it needs to be argued by someone capable of understanding the other side. Which Will is not, and is why Aaron generally skips his posts. Had an interesting chat with Aaron about it the other day, which also plays into my now understanding Will better.
And one other thought, Donald- With respect to BEIR authors, I think that will come about sooner than you think.... I don't know if you recall, but early on in this series of posts about things nuclear Aaron made mention that all of this was merely a way for him to organize his thoughts and work out the presentation of his arguments for a book he was writing on the subject.
Posted by: Aric at April 6, 2013 03:30 PM
Will Boisvert isn't the problem or even a problem, the nuclear industry is the problem. WE ALL will be eating the products of these meltdowns&meltdowns DESTINED to come for the rest of OUR projected lives. No changing that without a starship. Were it left up to me, I'd bury all the nukes right where they lay, as deep as I could. I'd make no attempt to move them anywhere.
Posted by: Mike Meyer at April 6, 2013 05:16 PM
In the greater sense, I suppose you're right Mike. But self-appointed experts who don't know when they're out of their depth annoy the heck out of me, as they distract from more interesting/insightful discussion like was going on between Aaron and Donald. I don't mind *that* sort of discussion one bit! :-)
http://farm1.static.flickr.com/98/206636171_0021c26a2e_m.jpg
Posted by: godoggo at April 6, 2013 06:08 PM
Aaron, thanks for those links to the Wing response and from there to the initial Mangano letter.
You’re right, Wing does allow that Mangano’s supralinear log curve is a better fit than a linear curve to at least some of Wing’s data. But he rejects the way that both you and Mangano would interpret that result.
Wing writes in response to Mangano’s claims:
“1) Do the cancer incidence patterns reflect low dose radiation? 2) Is the study design appropriate for distinguishing the shape of radiation dose-response relationships at low levels? 3) Is the original scaling (vs. magnitude) of dose estimates correct? We answer no to the first two question and discussed reasons for uncertainty regarding the third in our paper.”
Wing goes on to note “The appearance of large elevations in lung cancer incidence within 7 years of exposure is not consistent with previous studies of low-level radiation….Furthermore, the design of the TMI study is not well suited to distinguishing between shapes of dose-response association….Given the uncertainty of dose estimates, heterogeneity within study blocks, and other limitations of study design, we caution against overintepreting these findings in terms of low-level radiation’s biological mechanisms.”
So Aaron, Wing warns us not to do exactly what you are doing—using his data as a basis for claiming supra-linear low-dose radiation risks, and attributing them to novel radio-biological mechanisms like shot-noise. It’s misleading of you to cite him in support but not tell readers that he explicitly rejects your position.
We can see why he rejects it if we examine again how weakly his data support yours and Mangano’s claims.
Note that Mangano uses a different data set than you do. He also leaves out the zero-dose point and the highest-dose point, “the inclusion of which distorts the logarithmic relationship.” That’s standard operating procedure for Mangano—if a data point spoils the result he’s looking for, he just arbitrarily discards it. And of course he ignores Wing’s all-cancer data, which is a much more reliable data set than the lung cancer data simply because it has more data and is thus less vulnerable to statistical flukes.
Wing does say “The goodness of fit statistics would have been larger (and corresponding p-values smaller) had we fit regression models using the log of dose.” Note that Wing does not say that they did in fact do those fits, but seems to just assume what the results would have been if they had. Did they do the fits? Did they do the fit on the all-cancer data, the most reliable? Hard to say.
Regardless of the shapes of the various datasets, the supralinearity in all cases hinges on one or two possibly flukey data points with huge error bars that jut down on the low-dose end of the data. Apply a simple sensitivity test and you’ll see that you get radically different results depending on which data point you leave out. (Mangano knows that well.) That means that the Wing data, especially the sparse lung cancer data, are a weak basis for a major revision of the dose-response consensus. A judicious assessment of that issue would include countless other studies on dose-response relationships, including these TMI studies that show little to no consistent sign of radiation-linked cancer or other effects at TMI. (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1241392/pdf/ehp0111-000341.pdf) and (http://www.ncbi.nlm.nih.gov/pubmed/21855866).
Let’s also note that the fundamental thrust of Wing’s TMI research is that doses at TMI were in the high-dose range, not the low-dose range, and that’s why they caused discernible cancer effects. I don’t believe that either, because all the instrumental radiation measurements show the doses were tiny; Wing is just going on possibly unreliable anecdotes that lack any baseline. What’s my explanation for Wing’s TMI cancer associations? Statistical flukes. The effects go away in studies that examine more cases over a longer period with more rigorous protocols, like the ones I cite above.
Folks, when you see small public-health effects that pop into and out of significance depending on how you design the study or slice and dice the data, as with the TMI cancer effects, they are almost always statistical noise. Remember, the central lesson of statistics is that what looks like a meaningful signal is often just a random fluke.
Look out, Will is doing *science*!
Posted by: Aric at April 7, 2013 10:02 AM
Let me correct that for you, Will:
"Folks, when you see someone with no background on what the're commenting on complaining about the application of statistical mechanics by someone who *does* know what they're talking about, it is almost always just noise. Remember, the central lesson is that Wikipedia doesn't actually make you an expert."
Will, you do realize Aaron doesn't read your posts, right? And the fact that he won't engage you is *not* proof that you're right? But rather evidence that you're wrong to the point that it's not worth responding to yet again?
Btw, I noticed a funny thing yesterday... *All* of your comments on ATR have been in response to posts from Aaron. As in no comments on *anyone else's* posts. And of these comments, *all* were pushing a pro-nuke position, even when the post itself had nothing to do with nuclear power. Mind you, this is going back *years*. No surprise he ignores you.
Aric: I beg to differ. I believe that Will has answered many of my posts.
Posted by: Mike Meyer at April 7, 2013 01:52 PM
Google answered mine, but that doesn't mean my phone's qualified to debate the subject. I just think Will's best off sticking with what he knows... Book and movie reviews: http://www.oakton.edu/user/4/pboisver/BillBoisvert/reviews.htm
Posted by: Aric at April 7, 2013 02:31 PM
I don't know who bothers to read comments down this far, but there are two things worth pointing out:
1. There are only three peer-reviewed studies of the data set Wing uses. The first, by Maureen Hatch, was awful and should be ignored. Wing's is the second. There is a third, after Wing's and out of the University of Pittsburgh. I admit to not having read it.
It's difficult to regard the Wing study as an outlier when, at worst, the opinions expressed are one-half of all of them. I'll be frank about why I like the Wing study: he's clearly an open-minded scientist. This is obvious by comparison of his paper to the one written by Hatch. (It doesn't hurt him in my estimation that he knows who Elena Burlakova is. Few Americans in the field do.)
I should also note that Wing offers experimental evidence, using chromosomal analysis, that some of the doses at TMI were of the order of 1 Gray (i.e., high).
2. The empirical data supporting the linear dose model mostly comes from the Atomic Bomb Casualty Commission and its successor, the Radiation Effects Research Foundation. There are serious problems with this data, including the fact that the survey didn't even begin for several years after August, 1945.
The total cohort monitored by the ABCC was about 90,000 people - 50k exposed, 40k not. The Wing TMI study included 160,000 people, almost all of whom were exposed.
That is: it's larger, and quite possibly has greater statistical power.
@Will Boisvert's claims about curve fitting remind me of C- students when I was teaching high school. Bottom line.
He is correct, however, about this: Wing did caution Mangano that the lung cancer data, while clearly supralinear, should not be interpreted to support a non-linear dose response. This is a sensible statement to make in part because Mangano does not offer an analytical theory predicting a supralinear response, and seems to know about as much about curve fitting as Boisvert.
I have offered an analytical theory.
Additional tidbit: the Wing data reveals about 190 EXCESS cases of lung cancer through 1985. I wonder whether the friends and loved ones of those affected realized that their suffering was just sliced and diced statistical noise?
And @Will Boisvert, have you finished Reif yet? Can you tell me what a Fourier transform is? Because you'll have to have an opinion about them in a few hours. I'll give you a head start on the Dirac delta function, too.
Will Boisvert is wasting the time of everybody who reads these comments, not only me. Everyone who participates should understand this.
Posted by: Aaron Datesman at April 7, 2013 07:09 PM
Aaron Datesman: I read this far down and I also appreciate&read both YOURS and Will's comments.
Glad I did because NOW I see there most likely ISN'T ANY data from before July 1945 on cancer deaths and most certainly none on background radiation. If any does exist it's probably buried in some Manhattan Project Secret Archives.
Aaron, okay, let’s take Wing for gospel and look at the body count of excess cancer deaths in his paper as you suggest. Then let’s look at that from a larger perspective and see how bad the risk really is.
You count 190 excess lung cancer cases in the Wing study. I don’t think it’s that many, from my calculation using OERs in table 3 (taking O - E = O(1 - 1/R) on the second line of OERs). I get 65 for a net excess.
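Spelled out: since R = O/E, the expected count is E = O/R and the excess is O - E = O(1 - 1/R). A toy calculation with placeholder (O, R) pairs, not Wing's table, shows how OERs below one pull the net total down:

```python
# Placeholder (O, R) pairs, not Wing's actual numbers.
groups = [(50, 1.12), (40, 1.01), (30, 0.73)]

net = 0.0
for observed, ratio in groups:
    excess = observed * (1 - 1 / ratio)   # O - E, which is negative when R < 1
    net += excess
    print(f"O = {observed:2d}, R = {ratio:.2f}: excess = {excess:+6.1f}")

print(f"net excess across these groups: {net:+.1f}")
```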
For all cancers, from the second line I get 126 net excess cancer cases, 198 if you leave out the OERs less than one. Taking the bigger number and assuming that roughly half of cancer cases are fatal, that’s about 100 cancer fatalities over 5 years, or about 20 per year. (Assuming that all those excess cancer cases are from TMI radiation, which is not a good assumption.) So how does that danger, 20 fatalities per year from the TMI spew, look in comparison to other ordinary risks that we face? How does it stack up compared to, say, the risk of driving?
Well, in 1983, there were 42,569 traffic fatalities in the US, among a population of 234 million. (http://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_in_U.S._by_year) Assuming Wing’s study population of 160,000 had a normal share, that would be about 29 traffic fatalities per year, compared to the 20 TMI cancer fatalities per year.
All of this means that, by Wing’s data, getting caught in the TMI spew was substantially less dangerous than owning a car is.
And that’s for the people in the maximally contaminated epicenter of a spew. That scale of risk from nuclear power occurs once a generation to perhaps a few hundred thousand people. But auto fatality risks are incurred across the whole country every year. Nuclear power in the United States may be killing a few dozen people every year, but automobiles are still killing about 30,000 people every year, a thousand times as many. Add in the thousands of people whose lives are shortened by cancer and heart disease caused by air pollution from cars.
So I don’t get it. Why should we worry about the tiny risk from another TMI, when we all happily drive cars, which are a thousand times more likely to kill us? Why don’t you go on a crusade to abolish cars instead of nuclear power?
That’s why anti-nuclear alarmism is profoundly irrational, no matter whose stats you use. Like all phobias, anti-nuclear phobia is a fun-house mirror that crazily distorts people’s perceptions of risk, making people shriek in terror at trivial risks while blithely wallowing in dangers that are objectively thousands of times greater.
The great danger of that irrationality now is that it may deprive us of an enormously valuable technology that provides reliable low-carbon energy, abates air pollution, and gives us our best chance to stop global warming. Here’s a new study that estimates that nuclear power has saved almost two million lives over previous decades just by abating air pollution (http://pubs.acs.org/doi/abs/10.1021/es3051197?journalCode=esthag). It’s by James Hansen, the famous climate scientist, and another fellow, both of whom work at NASA’s Goddard Institute, just like you, Aaron. I know you won’t listen to anything I say, but maybe you could go talk to them about it.
Aaron, I don’t think the advanced calculus you’re using provides much insight into this issue. Simple arithmetic shows how tragically wrong your anti-nuclear position is.
"Aaron, I don’t think the advanced calculus you’re using provides much insight into this issue. "
That's funny, Will, because I was thinking the exact same thing. But about *you*.
Seriously, give it a rest.
Posted by: Aric at April 8, 2013 09:16 PM
@Will Boisvert, sigh.
Why was the Wing study restricted to the ten-mile radius around TMI? And, to your knowledge, how far did the plume from TMI actually reach?
Posted by: Aaron Datesman at April 9, 2013 07:01 AM
@ Aaron,
“To your knowledge, how far did the plume from TMI actually reach?”
Beats me, Aaron. Do you have any information on where it reached and, more meaningfully, what radiation dose it delivered?
What I doubt, however, is that it reached everywhere in the United States without any dilution or any diminution in its radioactivity from decay. Was it just as intense in Maine and Florida and Montana and California as it was within ten miles of TMI in Pennsylvania? I don’t think so.
The plume must have been greatly attenuated from whatever already slight intensity it had at 10 miles out by the time it reached 20 miles out. It would have been much more dilute still at 30 miles out and so forth. So whatever risks Wing found within his 10-mile study area must be drastically smaller outside it, plummeting further every mile we go from TMI.
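Just to picture how fast that falloff would go, here’s a toy sketch; the inverse-square law here is purely an assumption for illustration (real plume dispersion depends on weather, terrain, and decay), not a claim about the actual TMI plume:

```python
# Toy model only: pretend the dose falls off as 1/r^2 beyond the
# 10-mile study boundary. Real dispersion is far more complicated.
for miles in (10, 20, 30, 50, 100):
    relative_dose = (10 / miles) ** 2
    print(f"{miles:>3} mi: {relative_dose:.3f} of the 10-mile dose")
```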
We’ve established upthread that the risks from getting caught in the plume within Wing’s 10-mile study area were substantially smaller than the risks of driving a car for the people that lived there. That’s about the maximum risk anyone in the country could have faced from TMI; outside that area the risks were even smaller, drastically smaller than driving a car, and must have dropped to essentially zero before very many miles out.
Since the traffic-fatality risk I cited holds for the entire national population, while the TMI risk shrinks to nothing outside Wing’s study area, it follows that the overall national risk of traffic fatalities must be colossally greater, by orders of magnitude, than the overall national risk from TMI radiation.
To use a bit of math notation:
Driving risk > TMI risk in Wing study area >>>>> TMI risk for nation as a whole.
So again, hysteria over TMI radiation is very irrational when compared to everyday risks, like driving a car, that we accept without question. We don’t need exotic math to see that—just plain old arithmetic and common sense.
@ Aaron, on the Wing paper, and his claims of high radiation doses at TMI.
--Note that Wing cautions not just against inferring dose-response curves from his data, but also specifically warns against further imputing them to “analytical theories” about biological mechanisms. He might feel that your shot-noise theory merely compounds your misuse of his data rather than mitigating it.
--Wing also skips cancer cases in 1979 and 1980 because of the latency period: that’s just too short a time for any of those cancers to be TMI-related. In fact, the known latencies of radio-sensitive cancers imply that many and perhaps most of the cancer cases in 1981-5 could not have been caused by low-dose radiation, because they happened too soon, as Wing says in his response to Mangano. That’s why it’s so important to him to posit high doses in the study area, contrary to consensus findings of very low doses. But the thrust of latency considerations is that, overall, many of the excess cancers in his study period could not have come from TMI because there just wasn’t enough time. Coupled with the fact that some of them probably came from newcomers who were never exposed to the spew, his excess cancer numbers must be viewed with skepticism.
--You’re right, Wing does cite Russian studies of 29 blood samples taken from TMI people who complained of acute radiation symptoms, collected some 15 years after the accident. The studies looked at what they pegged as anomalies in the chromosomes of TMI blood cells—the more anomalies, they reasoned, the higher the TMI radiation exposure. They calibrated their estimates to chromosome anomalies in Chernobyl workers with known exposures to Chernobyl radiation. Wing insists the studies imply radiation exposures of 600-900 milliGrays, some 600-900 times higher than consensus figures.
But that data is indirect and iffy, as this response to the Wing paper from Talbott points out (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1240214/pdf/ehp0108-a0542e.pdf). She writes that “There was no local control group [of Pennsylvanians] and no adjustment for confounding (smoking, occupational exposure) or other environmental insults during this 15-year elapsed period. Hence we cannot rule out a spurious cause and effect.”
So anecdotes and poorly controlled Russian studies of samples taken 15 years after the fact are not the strongest evidence for high doses. Otherwise, as Wing admits in his paper, all the actual instrumental readings from the time confirm estimates of extremely low radiation exposures. His criticism of them is that some of the radiation monitors malfunctioned, there weren’t very many functioning ones off the plant site (about 20) and that dense radiation plumes might have snaked unobserved between them, so that we cannot absolutely rule out high doses on the ground of instrumental radiation readings that show low doses.
But Talbott notes an interesting check on the consensus radiation estimates. Eastman Kodak collected unexposed film from the area and looked for fogging, which could occur if local radiation doses had exceeded 5 mrem in the spew period. No fogging was found, consistent with very low radiation levels.
The consensus findings, supported by instrumental readings and other objective checks, are that TMI doses reached a maximum of 1 milliSievert. That’s a pretty tiny dose: average annual background radiation is 2.4 mSv, and people who live on the Colorado plateau get 2-3 times that. So a single year of living in Denver would add roughly 2.4 to 4.8 mSv of extra radiation exposure, several times the maximum dose anyone got from TMI.
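A quick sketch of that arithmetic, using only the figures above:

```python
tmi_max_dose = 1.0     # mSv, consensus maximum dose from TMI
avg_background = 2.4   # mSv per year, average annual background

# Colorado plateau background runs 2-3x the national average; the
# *extra* dose from a year there is therefore 2.4-4.8 mSv.
for factor in (2, 3):
    extra = factor * avg_background - avg_background
    print(f"{factor}x background: {extra:.1f} mSv extra per year, "
          f"{extra / tmi_max_dose:.1f}x the max TMI dose")
```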
If anti-nukes really believed in the dangers of low-dose radiation, they would demand the evacuation of Denver. Yet another weird irrationality of anti-nuclear propaganda.
Speaking of propaganda, Will... You seem full of it. In more ways than one, I might add. Btw, are you sure you're not on the payroll of someone in the industry?
Posted by: Aric at April 9, 2013 03:20 PM
I ask because Aaron's been asked about the possibility of commenters here disclosing financial conflicts of interest.
Posted by: Aric at April 9, 2013 03:23 PM
Folks, this post and the next take a look at four major papers in the TMI literature that Aaron has not talked about. They give a broader understanding of TMI that undercuts alarmist claims about the cancer effects.
--Hatch has written two studies (http://cipi.com/PDF/hatch1990%20no%20ocr.pdf) and (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1405170/pdf/amjph00206-0049.pdf)
The Wing paper got all its data and much of its analysis from the first Hatch paper. Some differences: Hatch grouped the data into 4 exposure groups instead of Wing’s 9, and of course the guts of the all-important regression models used to estimate expected standardized incidence ratios differ between the two papers. Hatch did find a modest statistically significant risk associated with radiation exposure, with an odds ratio of 1.11 (1.03-1.21, 95% CI), a somewhat higher association for lung cancer, and not much for other cancer types. Hatch’s group did a rather meticulous calculation of radiation doses that matched up nicely with consensus findings that TMI doses were below 1 milliSievert; Wing relied on Hatch’s relative dose estimates while arguing that the absolute doses were much higher (unpersuasively, in my opinion). The takeaway from the first Hatch paper: “Overall, the pattern of results does not provide convincing evidence that radiation releases from the Three Mile Island nuclear facility influenced cancer risk during the limited period of follow-up” (i.e., through 1985, as in Wing’s study).
--The second Hatch paper, based on the same data, annoyed a lot of anti-nukes because it posited a link between TMI cancer effects and “stress” rather than the radiation itself. That’s not quite as silly as it sounds. Hatch looks at stress hormones, but more prosaic effects could flow from stress: if anxiety over the spew led people to drink and smoke more to settle their nerves, that could lead to higher cancer rates. Also, there’s the issue of “heightened monitoring.” Assumptions about increased cancer risk from the spew could cause patients and doctors to become more vigilant about looking for cancers. They might find small, non-aggressive, slow-growing cancers that would normally not have been diagnosed before they spontaneously regressed or the patient died of something else. Heightened monitoring could thus cause cancer diagnosis and incidence rates to climb even if the underlying cancer rate did not. Hatch’s evidence for all this does not seem especially persuasive to me, but they are effects that epidemiologists should watch out for as biasing factors. Wing does not seem to take account of possible increases in smoking and drinking, or heightened monitoring. The takeaway from the paper: it found a modest increase in cancer risk correlated to proximity (not to radiation exposure), with an odds ratio of 1.4 (1.3-1.6, 95% CI), which it attributed to stress (not very convincingly, to my mind).
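For readers wondering what figures like “odds ratio of 1.11 (1.03-1.21, 95% CI)” mean mechanically, here’s a generic sketch of how an odds ratio and its 95% confidence interval come out of a 2x2 exposure-by-outcome table; the counts are invented for illustration and are not from either Hatch paper:

```python
import math

# Hypothetical 2x2 table: a = exposed cases, b = exposed non-cases,
# c = unexposed cases, d = unexposed non-cases.
a, b, c, d = 120, 880, 100, 900

odds_ratio = (a / b) / (c / d)
# Large-sample (Woolf) 95% CI, computed on the log-odds scale.
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# -> OR = 1.23, 95% CI (0.93, 1.63); the CI straddles 1, so this
#    hypothetical association would not be statistically significant.
```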
Folks, two more major TMI papers from a U. of Pittsburgh study. This is a longitudinal cohort study following about 30,000 people living within 5 miles of TMI during the spew, 93% of that population. Several papers have come out of it.
--This one (http://www.ncbi.nlm.nih.gov/pubmed/21855866) looked at cancer incidence from 1982 to 1995, 13 years as opposed to Wing’s 5, a period encompassing much more latency and so likelier to contain genuine TMI cancers. It found no rise in cancer incidence at all in the cohort. The relative risk for maximum estimated TMI gamma radiation was 1.00 (0.97-1.01, 95% CI), and for likely TMI gamma exposure it was 0.99 (0.94-1.03, 95% CI). They looked at a bunch of cancer subtypes and found a TMI risk for leukemia in men, but not in women. Elevated risk was also found for lung and respiratory cancers, but it correlated with background radiation, not TMI dose. The takeaway: “Increased cancer risks from low-level radiation exposure within the TMI cohort were small and mostly statistically non-significant,” but keep an eye on the leukemia in men. (Note that if you follow a lot of cancer subtypes that each have a small number of cases, you are bound to find a few statistically significant correlations just because of random statistical flukes.)
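That parenthetical caveat is easy to demonstrate with a small simulation (my own illustration, not anything from the Pittsburgh papers): if you test 20 subtypes that each have no true effect, the chance that at least one comes up “significant” at p < 0.05 is about 64%.

```python
import random

random.seed(0)
trials, subtypes, alpha = 10_000, 20, 0.05

# Count trials in which at least one of 20 null "subtypes" produces
# a false positive at the 5% significance level.
false_alarms = sum(
    any(random.random() < alpha for _ in range(subtypes))
    for _ in range(trials)
)
print(false_alarms / trials)  # ~0.64, matching 1 - 0.95**20
```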
--This Pittsburgh study (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1241392/pdf/ehp0111-000341.pdf) looked at mortality (not incidence) from cancer, heart disease and all causes from 1979 to 1998, so 19 years, encompassing much more latency than Wing’s five-year study. The Pittsburgh study did find a slightly elevated mortality rate for all cancers in men across the whole cohort, with a standardized mortality ratio (SMR) of 103.7, a difference from 100 that was not statistically significant. There was no rise at all in all-cancer SMR for women. There was a statistically significant rise in respiratory and BTL (bronchus, trachea, and lung) cancer subtypes in men. Both men and women had significantly elevated SMRs for heart disease (111 and 127) and all non-cancer deaths (108 and 116). But none of these elevated SMRs, for cancer or other causes, correlated with TMI radiation dose; getting a bigger TMI dose did not increase your mortality risk. The takeaway: “Although the surveillance within the TMI cohort provides no consistent evidence that radioactivity released during the nuclear accident has had a significant impact on the overall mortality experience of these residents, several elevations persist, and certain potential dose-response relationships cannot be definitively excluded.”
--So what does this pointillist picture of TMI in this and the previous comment add up to? To me it’s a question mark, and a rather small one. There may be some cancer risk from the TMI spew. If it’s there, it’s inconsistent, appearing in one study then disappearing in the next depending on how you design the study and slice and dice the data. Sometimes a cancer subtype pops out in a study, but in the next it’s a different subtype. When effects show up, they are sometimes correlated to radiation exposure, and sometimes not. All of these effects flicker on the border of statistical significance.
When you see effects that are very small and inconsistent as in the TMI studies, it usually means there is nothing there. A genuine effect will show up significantly and consistently, but a zero effect will often look like a small effect because statistical flukes will masquerade as minor, inconsistent correlations—that’s the nature of randomness. Whenever we see a small, inconsistent public health effect from any putative cause, we should be skeptical that it exists at all.
Again, I don’t see much to worry about in the TMI literature as a whole.