March 13, 2020
Nuclear facilities emit very large amounts of tritium (3H), the radioactive isotope of hydrogen. Much evidence from cell and animal studies and from radiation biology theory indicates that tritium is more hazardous than gamma rays and most X-rays. However, the International Commission on Radiological Protection (ICRP) continues to underestimate tritium’s hazard by recommending a radiation weighting factor (wR) of unity for tritium’s beta particle emissions. Tritium’s exceptionally high exchange rate with hydrogen atoms on adjacent molecules makes it extremely mobile in the environment. This, together with the fact that tritium’s most common form is tritiated water, ie radioactive water, means that tritium emitted from nuclear facilities rapidly contaminates all biota in adjacent areas. Tritium also binds with organic matter to form organically bound tritium (OBT), whose long residence times in tissues and organs make it more radiotoxic than tritiated water (HTO). Epidemiology studies indicate increases in cancers and congenital malformations near nuclear facilities. It is recommended that nuclear operators and scientists be properly informed about tritium’s hazards; that tritium’s safety factors be strengthened; and that a hazard scheme for common radionuclides be established.
However, there is a serious problem here. If similarly increased health effects had been observed near, say, a lead smelting factory or an asbestos mine, would they be dismissed by referring to these rationales? I rather doubt it. In other words, what is occurring here is that hidden biases in favour of nuclear power are in play. In my view, such conflicts of bias should be declared at the outset, just as conflicts of interest are nowadays.
The Abuse of Statistical Significance Tests
Many epidemiology studies of cancer near nuclear power plants (NPPs) have found increased risks but dismissed them as not “statistically significant”. This wording often misleads lay readers into thinking that a reported increase is unimportant or irrelevant. But, in statistics, the adjective “significant” is a specialist word used to convey a narrow meaning, ie that the probability of the observed result arising by chance alone, if no real effect existed, is less than 5% (assuming a p = 5% test were used). It does not mean important or relevant.
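To make this concrete, here is a minimal sketch of the kind of test involved: a one-sided Poisson test comparing observed cancer cases in an area against the number expected from national rates. The figures below are hypothetical illustrations, not drawn from any actual study.

```python
# Purely illustrative sketch (hypothetical numbers, not from any actual
# study): suppose 8 cases of a cancer would be expected from national rates,
# but 13 are observed, ie a 60% excess. A one-sided Poisson test asks how
# likely a count of 13 or more would be by chance alone if no excess existed.
import math

expected = 8.0   # hypothetical expected case count
observed = 13    # hypothetical observed case count

def poisson_pmf(k, lam):
    """Probability of exactly k cases when lam cases are expected."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# p-value = P(X >= observed) under the assumption of no real excess
p_value = 1.0 - sum(poisson_pmf(k, expected) for k in range(observed))

print(f"rate ratio = {observed / expected:.2f}")  # 1.62
print(f"p-value    = {p_value:.3f}")              # ~0.064
```

Here the excess is 60%, yet the p-value just misses the conventional 5% threshold; under the practice criticised above, the finding would be labelled “not significant” despite the sizeable excess.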
The misuse of statistical significance is an important issue for four reasons. First, because the use of statistical significance tests has often led to the wrong result, especially in clinical trials, and, in my experience, the same is true in epidemiology studies. Several authors have reported that the rejection of findings for significance reasons can often hide real risks (Axelson, 2004; Whitley and Ball, 2002).
Second, as Nature states, “the rigid focus on statistical significance encourages researchers to choose data and methods that ... yield statistical non-significance for an undesired result, such as potential side effects of drugs — thereby invalidating conclusions.” This damning verdict applies with equal force to the undesired result of observed increases in health effects in an epidemiology study. For decades, some scientists, sadly including those employed at UK government agencies, have dismissed risk findings in epidemiology studies near nuclear facilities by concluding they showed no “significant” raised risks, or that excess risks were “not significant”, or similar phrases.
A third reason, also mentioned in the Nature article, is that we must re-examine past studies which used lack of statistical significance to dismiss observed increases, as these conclusions are now unreliable. This verdict applies, for example, to past studies by the UK Government’s Committee on Medical Aspects of Radiation in the Environment (COMARE) which observed leukemia increases near UK nuclear facilities but dismissed them because they were not statistically significant. These include, for example,
COMARE (2011) Committee on Medical Aspects of Radiation in the Environment Fourteenth Report. Further Consideration of the Incidence of Childhood Leukaemia Around Nuclear Power Plants in Great Britain. HMSO: London.
The fourth reason is the vital factor of size in epidemiological studies, ie the numbers of observed cases of ill effects in a population. This is because the probability (ie p-value) that an observed effect may be a fluke is affected by both the magnitude of the effect and the size of the study (Whitley and Ball, 2002; Sterne and Smith, 2001). If the study size is small, its findings often will not be statistically significant even when the adverse effect is real (Everett et al, 1998).
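The size effect can be sketched numerically. The following illustration, again with hypothetical figures, computes the statistical power of a one-sided Poisson test, ie the chance that a genuinely elevated risk (here a rate ratio of 1.6) will reach p < 0.05, for studies of different sizes.

```python
# Purely illustrative (hypothetical figures): how often will a REAL 60%
# excess (rate ratio 1.6) reach p < 0.05 in a one-sided Poisson test,
# as a function of study size (the expected number of cases)?
import math

def poisson_sf(k, lam):
    """P(X >= k) for a Poisson distribution with mean lam."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

def power(expected, rate_ratio, alpha=0.05):
    """Chance that a genuine excess of size rate_ratio reaches p < alpha."""
    # smallest observed count that would give p < alpha if no excess existed
    c = 0
    while poisson_sf(c, expected) >= alpha:
        c += 1
    # probability of reaching that count when the excess is real
    return poisson_sf(c, expected * rate_ratio)

for expected in (5, 20, 80):
    print(f"{expected:3d} expected cases: power = {power(expected, 1.6):.2f}")
```

With only 5 expected cases, a real 60% excess reaches significance less than a third of the time; the same excess in a study with 80 expected cases is detected almost always. Small studies that report “non-significant” excesses therefore say little about whether the excess is real.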
I have argued that tests for statistical significance have been misused in epidemiological studies of cancers near nuclear facilities. In the past, these studies have often concluded that such effects do not occur, or have downplayed any effects which did occur. In fact, copious evidence exists throughout the world, from over 60 studies, of raised cancer levels near NPPs. This is discussed in my scientific article in 2014 on a hypothesis to explain cancers near NPPs. Most (>75%) of these studies found cancer increases, but because they were small, their findings were often dismissed as not statistically significant. In other words, they were chucked in the bin marked “not significant” without further consideration. I conclude by asking open-minded scientists and observers to reconsider their views about the above 60+ studies and the misleading COMARE reports showing raised cancer levels near NPPs. Just as people were misled about tobacco smoking in previous decades, perhaps we are being misled about raised cancers near NPPs nowadays.