Combining Risk Estimates from Multiple Epidemiologic Studies: A Distributional Approach.* A. I. Shlyakhter, Department of Physics, Harvard University, Cambridge, MA 02138
Results of epidemiologic studies are usually presented as 95 percent confidence intervals (95% CI) for the relative risk, RR. A result is termed a statistically significant positive finding if the lower bound of the 95% CI lies above one; however, the reported 95% CI accounts only for random error arising from the finite number of subjects in the study. Although investigators always try to minimize the uncertainties caused by possible biases (arising from such sources as selection, misclassification, and confounding), the effect of the remaining biases on the results is hard to quantify, and it is usually discussed only qualitatively. These biases can be viewed as analogues of systematic uncertainties in physical measurements. In physical measurements, uncertainties associated with random and systematic errors are combined, and this "combined standard uncertainty" then serves as the basis for calculating intervals corresponding to the required level of confidence. Even so, analysis of several datasets of physical measurements for which the true values have subsequently become known, so that the incidence of unsuspected errors can be derived, demonstrates a strong tendency for researchers to underestimate the uncertainties in their results [1]. The frequent occurrence of contradictory results in epidemiology suggests that residual biases may be even more widespread in observational studies than unsuspected errors are in physical measurements. In this paper I propose a new procedure for presenting the results of multiple epidemiologic studies of the same outcome that may help evaluate how convincing the evidence of elevated risk is. The idea is to assume that the true risk is not elevated and to treat the observed RR values as deviations from this assumed "true" value.
A set of 95% CIs is transformed into a frequency distribution of the deviations of the logarithms of the reported relative risks, ln(RR), from the null value ln(RR) = 0 (RR = 1), each divided by the reported standard deviation of ln(RR). This distribution is then compared with the distributions of errors in physical measurements [1]. Comparison of these distributions can, by analogy, help clarify how convincing the evidence of elevated risk in observational studies really is. The evidence of elevated risk should be considered strong only if the mean of the distribution of RR values is above RR = 1 and the distribution has much longer positive tails than the distribution of errors in physical measurements. I apply this procedure to several sets of studies, in particular studies of the association of leukemia with exposure to electromagnetic fields and studies of the association of lung cancer with environmental tobacco smoke. The distributions of RR values in both sets are similar to the distribution of errors in physical measurements. This suggests that both sets of studies are inconclusive and cannot serve as the sole basis for policy decisions.
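The transformation described above can be sketched as follows. This is an illustrative implementation, not the author's code: it assumes each reported 95% CI is symmetric on the log scale, so the standard deviation of ln(RR) can be recovered as the half-width of the log-interval divided by 1.96, and the study's result (RR = 1.5 with CI 0.9–2.5) is a hypothetical example.

```python
import math


def standardized_deviation(rr, ci_low, ci_high, z95=1.96):
    """Deviation of ln(RR) from the null value ln(RR) = 0, in units
    of the study's own reported standard deviation of ln(RR).

    Assumes the 95% CI is symmetric on the log scale, so
    SD(ln RR) = (ln(ci_high) - ln(ci_low)) / (2 * 1.96).
    """
    sd_ln_rr = (math.log(ci_high) - math.log(ci_low)) / (2 * z95)
    return math.log(rr) / sd_ln_rr


# Hypothetical study: RR = 1.5 with 95% CI (0.9, 2.5).
# The CI includes 1, so the standardized deviation is below 1.96.
z = standardized_deviation(1.5, 0.9, 2.5)
```

Pooling these standardized deviations across a set of studies yields the frequency distribution that the procedure compares with the distribution of errors in physical measurements.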
*Work supported by the Biological Effects Branch of the U.S. Department of the Air Force under contract F33615-92-C-0602.
1. A. I. Shlyakhter, "Improved framework for uncertainty analysis: accounting for unsuspected errors," Risk Analysis, in press.