In epidemiology, Mendelian randomization (commonly abbreviated to MR) is a method using measured variation in genes to examine the causal effect of an exposure on an outcome. Under key assumptions (see below), the design reduces both reverse causation and confounding, which often substantially impede or mislead the interpretation of results from epidemiological studies.[1] The study design was first proposed in 1986 and subsequently described by Gray and Wheatley as a method for obtaining unbiased estimates of the effects of an assumed causal variable without conducting a traditional randomized controlled trial (the standard in epidemiology for establishing causality). These authors also coined the term Mendelian randomization.
One of the predominant aims of epidemiology is to identify modifiable causes of health outcomes and disease, especially those of public health concern. In order to ascertain whether modifying a particular trait (e.g. via an intervention, treatment or policy change) will convey a beneficial effect within a population, firm evidence that this trait causes the outcome of interest is required. However, many observational epidemiological study designs are limited in their ability to discern correlation from causation: specifically, whether a particular trait causes an outcome of interest, is simply related to that outcome (but does not cause it), or is a consequence of the outcome itself. Only the first of these is useful in a public health setting where the aim is to modify that trait to reduce the burden of disease. There are many epidemiological study designs that aim to understand relationships between traits within a population sample, each with shared and unique advantages and limitations in terms of providing causal evidence, with the "gold standard" being randomized controlled trials.[2]
Well-known successful demonstrations of causal evidence consistent across multiple studies with different designs include the identified causal links between smoking and lung cancer, and between blood pressure and stroke. However, there have also been notable failures when exposures hypothesized to be a causal risk factor for a particular outcome were later shown by well-conducted randomized controlled trials not to be causal. For instance, it was previously thought that hormone replacement therapy would prevent cardiovascular disease, but it is now known to have no such benefit.[3] Another notable example is that of selenium and prostate cancer. Some observational studies found an association between higher circulating selenium levels (usually acquired through various foods and dietary supplements) and lower risk of prostate cancer. However, the Selenium and Vitamin E Cancer Prevention Trial (SELECT) showed evidence that dietary selenium supplementation actually increased the risk of prostate cancer, including advanced prostate cancer, and had an additional off-target effect of increasing type 2 diabetes risk.[4] Mendelian randomization methods now support the view that high selenium status may not prevent cancer in the general population, and may even increase the risk of specific types.[5] Such inconsistencies between observational epidemiological studies and randomized controlled trials are likely a function of social, behavioral, or physiological confounding factors in many observational epidemiological designs, which are particularly difficult to measure accurately and difficult to control for. Moreover, randomized controlled trials (RCTs) are usually expensive, time-consuming and laborious, and many epidemiological findings cannot be ethically replicated in clinical trials. Mendelian randomization studies appear capable of resolving questions of potential confounding more efficiently than RCTs.[6]
Mendelian randomization (MR) is fundamentally an instrumental variables estimation method hailing from econometrics. The method uses the properties of germline genetic variation (usually in the form of single nucleotide polymorphisms, or SNPs) strongly associated with a putative exposure as a "proxy" or "instrument" for that exposure, to test for and estimate a causal effect of the exposure on an outcome of interest from observational data. The genetic variation used will have either well-understood effects on exposure patterns (e.g. propensity to smoke heavily) or effects that mimic those produced by modifiable exposures (e.g. raised blood cholesterol). Importantly, the genotype must only affect the disease status indirectly via its effect on the exposure of interest.[7] As genotypes are assigned randomly when passed from parents to offspring during meiosis, groups of individuals defined by genetic variation associated with an exposure at a population level should be largely unrelated to the confounding factors that typically plague observational epidemiology studies. Germline genetic variation (i.e. that which can be inherited) is also temporally fixed at conception and not modified by the onset of any outcome or disease, precluding reverse causation. Additionally, given improvements in modern genotyping technologies, measurement error and systematic misclassification are often low with genetic data. In this regard Mendelian randomization can be thought of as analogous to "nature's randomized controlled trial".
Mendelian randomization requires three core instrumental variable assumptions.[8] Namely that:
1) the genetic variant is associated with the exposure of interest (relevance);
2) there is no confounding of the association between the genetic variant and the outcome (independence, or exchangeability); and
3) the genetic variant affects the outcome only through its effect on the exposure (the exclusion restriction, i.e. no horizontal pleiotropy).
To ensure that the first core assumption is satisfied, Mendelian randomization requires distinct associations between genetic variation and exposures of interest. These are usually obtained from genome-wide association studies, though they can also come from candidate gene studies. The second assumption relies on there being no population substructure (e.g. geographical factors that induce an association between the genotype and outcome), random mating with respect to genotype (i.e. panmixia), and no dynastic effects (i.e. where the expression of the parental genotype in the parental phenotype directly affects the offspring phenotype).
Mendelian randomization is usually applied through the use of instrumental variables estimation with genetic variants acting as instruments for the exposure of interest.[9] This can be implemented using data on the genetic variants, exposure and outcome of interest for a set of individuals in a single dataset or using summary data on the association between the genetic variants and the exposure and the association between the genetic variants and the outcome in separate datasets. The method has also been used in economic research studying the effects of obesity on earnings, and other labor market outcomes.
When a single dataset is used the methods of estimation applied are those frequently used elsewhere in instrumental variable estimation, such as two-stage least squares.[10] If multiple genetic variants are associated with the exposure they can either be used individually as instruments or combined to create an allele score which is used as a single instrument.
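As an illustration, a minimal sketch of the two-stage least squares approach using individual-level data is shown below; the function name, the simulated variants, and the effect sizes are all hypothetical and chosen purely for demonstration, not taken from any real study.

```python
import numpy as np

def two_stage_least_squares(g, x, y):
    """MR estimate from individual-level data via two-stage least squares,
    with the genetic variants g (n individuals x k variants) as instruments."""
    G = np.column_stack([np.ones(len(x)), g])
    x_hat = G @ np.linalg.lstsq(G, x, rcond=None)[0]   # stage 1: fitted exposure
    X = np.column_stack([np.ones(len(y)), x_hat])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # stage 2: outcome on fitted exposure
    return beta[1]                                     # coefficient on the instrumented exposure

# Simulated example (hypothetical data): u is an unmeasured confounder of
# exposure x and outcome y, and the true causal effect of x on y is 0.7.
rng = np.random.default_rng(0)
n = 20000
g = rng.binomial(2, 0.3, size=(n, 3)).astype(float)   # three biallelic SNPs
u = rng.normal(size=n)
x = g @ np.array([0.4, 0.3, 0.5]) + u + rng.normal(size=n)
y = 0.7 * x - u + rng.normal(size=n)
```

In this simulation, naive regression of y on x is badly biased by the confounder u, while the instrumented estimate recovers the causal effect. An allele score could analogously be formed as a weighted sum of the variant columns of g and used as a single instrument.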
Analysis using summary data often applies data from genome-wide association studies. In this case the association between genetic variants and the exposure is taken from the summary results produced by a genome-wide association study for the exposure. The association between the same genetic variants and the outcome is then taken from the summary results produced by a genome-wide association study for the outcome. These two sets of summary results are then used to obtain the MR estimate. Given the following notation:
$\hat{\pi}_g \equiv$ the estimated effect of genetic variant $g$ on the exposure $X$;
$\hat{\Gamma}_g \equiv$ the estimated effect of genetic variant $g$ on the outcome $Y$;
$\hat{\sigma}_{y,g} \equiv$ the standard error of the estimated effect of genetic variant $g$ on the outcome $Y$;
$\hat{\beta}_{MR} \equiv$ the causal effect of the exposure $X$ on the outcome $Y$;
and considering the effect of a single genetic variant, the MR estimate can be obtained from the Wald ratio:
$$\hat{\beta}_{MR} = \frac{\hat{\Gamma}_g}{\hat{\pi}_g}~.$$
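The Wald ratio is a one-line computation from summary statistics; the sketch below uses purely illustrative effect sizes.

```python
def wald_ratio(pi_hat, gamma_hat):
    """Wald ratio MR estimate from a single variant: the variant-outcome
    effect divided by the variant-exposure effect (GWAS summary statistics)."""
    return gamma_hat / pi_hat

# Hypothetical example: a variant that raises the exposure by 0.5 units
# and the outcome by 0.35 units implies a causal effect of 0.7.
estimate = wald_ratio(0.5, 0.35)
```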
When multiple genetic variants are used, the individual ratios for each genetic variant are combined using inverse-variance weighting, where each individual ratio is weighted by the uncertainty in its estimation.[11] This gives the IVW estimate, which can be calculated as:
$$\hat{\beta}_{IVW} = \frac{\sum_g \hat{\pi}_g \hat{\Gamma}_g \hat{\sigma}_{y,g}^{-2}}{\sum_g \hat{\pi}_g^2 \hat{\sigma}_{y,g}^{-2}}~.$$
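A minimal sketch of the IVW calculation from summary statistics follows; the input arrays are hypothetical values, not results from any real GWAS.

```python
import numpy as np

def ivw_estimate(pi_hat, gamma_hat, se_y):
    """Inverse-variance-weighted MR estimate from summary statistics.
    pi_hat: variant-exposure effects; gamma_hat: variant-outcome effects;
    se_y: standard errors of the variant-outcome effects."""
    w = se_y ** -2.0                                   # inverse-variance weights
    return np.sum(w * pi_hat * gamma_hat) / np.sum(w * pi_hat ** 2)

# Hypothetical summary statistics for three variants, all consistent
# with a causal effect of 0.7 on the outcome.
pi = np.array([0.4, 0.3, 0.5])
se = np.array([0.10, 0.20, 0.15])
gamma = 0.7 * pi
```

Algebraically, this is the same as averaging the per-variant Wald ratios gamma/pi with weights pi**2 / se**2, so the single-variant and multi-variant estimators agree when only one variant is used.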
Alternatively, the same estimate can be obtained from a linear regression that uses the genetic variant-outcome associations as the outcome and the genetic variant-exposure associations as the exposure. This linear regression is weighted by the uncertainty in the genetic variant-outcome associations and does not include a constant.
$$\hat{\Gamma}_g = \beta_{IVW}\,\hat{\pi}_g + u_g, \quad \text{weighted by} \quad \frac{1}{\hat{\sigma}_{y,g}^{2}}~.$$
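The equivalence between this weighted no-intercept regression and the ratio formula can be checked numerically; the sketch below absorbs the weights by rescaling both sides by 1/se, and uses randomly generated illustrative inputs.

```python
import numpy as np

def ivw_wls(pi_hat, gamma_hat, se_y):
    """IVW estimate as a weighted least-squares regression through the
    origin of the variant-outcome effects on the variant-exposure effects,
    with weights 1/se_y**2 (applied by dividing both sides by se_y)."""
    A = (pi_hat / se_y).reshape(-1, 1)   # single regressor, no constant term
    b = gamma_hat / se_y
    slope, *_ = np.linalg.lstsq(A, b, rcond=None)
    return slope[0]

# Hypothetical summary statistics for ten variants.
rng = np.random.default_rng(1)
pi = rng.uniform(0.1, 0.5, 10)
gamma = 0.3 * pi + rng.normal(0.0, 0.02, 10)
se = rng.uniform(0.05, 0.2, 10)
```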
These methods only provide reliable estimates of the causal effect of the exposure on the outcome under the core instrumental variable assumptions. Alternative methods are available that are robust to a violation of the third assumption, i.e. that provide reliable results under some types of horizontal pleiotropy.[12] Additionally some biases that arise from violations of the second IV assumption, such as dynastic effects, can be overcome through the use of data which includes siblings or parents and their offspring.[13]
The Mendelian randomization method depends on two principles derived from the original work by Gregor Mendel on genetic inheritance. Its foundations come from Mendel's laws, namely 1) the law of segregation, under which the two allelomorphs of a heterozygote segregate completely into equal numbers of germ cells, and 2) the law of independent assortment, under which separate pairs of allelomorphs segregate independently of one another; the laws were first published in this form in 1906 by Robert Heath Lock. Another progenitor of Mendelian randomization is Sewall Wright, who introduced path analysis, a form of causal diagram used for making causal inference from non-experimental data. The method relies on causal anchors, and in the majority of his examples the anchors were provided by Mendelian inheritance, as is the basis of MR.[14] Another component of the logic of MR is the concept of the instrumental gene, introduced by Thomas Hunt Morgan.[15] This is important because it removed the need to understand the physiology of the gene when making inferences about genetic processes.
Since that time the literature includes examples of research using molecular genetics to make inferences about modifiable risk factors, which is the essence of MR. One example is the work of Gerry Lower and colleagues in 1979, who used the N-acetyltransferase phenotype as an anchor to draw inferences about various exposures, including smoking and amine dyes, as risk factors for bladder cancer.[16] Another example is the work of Martijn Katan (then of Wageningen University & Research, Netherlands), who advocated a study design using Apolipoprotein E alleles as an instrumental variable anchor to study the observed relationship between low blood cholesterol levels and increased risk of cancer. In fact, the term "Mendelian randomization" was first used in print by Richard Gray and Keith Wheatley (both of Radcliffe Infirmary, Oxford, UK) in 1991 in a somewhat different context: a method allowing instrumental variable estimation, but one relying on Mendelian inheritance rather than genotype. In their 2003 paper, Shah Ebrahim and George Davey Smith used the term again to describe the method of using germline genetic variants for understanding causality in an instrumental variable analysis, and it is this methodology that is now widely used and to which the term's current meaning is ascribed. The Mendelian randomization method is now widely adopted in causal epidemiology, and the number of MR studies reported in the scientific literature has grown every year since the 2003 paper. In 2021 the STROBE-MR guidelines were published to assist readers and reviewers of Mendelian randomization studies in evaluating the validity and utility of published studies.[17]