Blinded experiment

In a blind or blinded experiment, information which may influence the participants of the experiment is withheld until after the experiment is complete. Good blinding can reduce or eliminate experimental biases that arise from participants' expectations, the observer's effect on the participants, observer bias, confirmation bias, and other sources. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, while blinding would be useful, it is impossible or unethical. For example, it is not possible to blind a patient to their treatment in a physical therapy intervention. A good clinical protocol ensures that blinding is as effective as possible within ethical and practical constraints.

During the course of an experiment, a participant becomes unblinded if they deduce or otherwise obtain information that has been masked to them. For example, a patient who experiences a side effect may correctly guess their treatment, becoming unblinded. Unblinding is common in blinded experiments, particularly in pharmacological trials. In particular, trials on pain medication and antidepressants are poorly blinded. Unblinding that occurs before the conclusion of a study is a source of experimental error, as the bias that was eliminated by blinding is re-introduced. The CONSORT reporting guidelines recommend that all studies assess and report unblinding. In practice, very few studies do so.[1]

Blinding is an important tool of the scientific method, and is used in many fields of research. In some fields, such as medicine, it is considered essential.[2] In clinical research, a trial that is not a blinded trial is called an open trial.

History

The first known blind experiment was conducted by the French Royal Commission on Animal Magnetism in 1784 to investigate the claims of mesmerism as proposed by Charles d'Eslon, a former associate of Franz Mesmer. In the investigations, the researchers (physically) blindfolded mesmerists and asked them to identify objects that the experimenters had previously filled with "vital fluid". The subjects were unable to do so.

In 1817, the first blind experiment recorded to have occurred outside of a scientific setting compared the musical quality of a Stradivarius violin to one with a guitar-like design. A violinist played each instrument while a committee of scientists and musicians listened from another room so as to avoid prejudice.[3] [4]

An early example of a double-blind protocol was the Nuremberg salt test of 1835 performed by Friedrich Wilhelm von Hoven, Nuremberg's highest-ranking public health official,[5] as well as a close friend of Friedrich Schiller.[6] This trial contested the effectiveness of homeopathic dilution.[5]

In 1865, Claude Bernard published his Introduction to the Study of Experimental Medicine, which advocated for the blinding of researchers.[7] Bernard's recommendation that an experiment's observer should not know the hypothesis being tested contrasted starkly with the prevalent Enlightenment-era attitude that scientific observation can only be objectively valid when undertaken by a well-educated, informed scientist.[8] The first study recorded to have a blinded researcher was conducted in 1907 by W. H. R. Rivers and H. N. Webber to investigate the effects of caffeine.[9] The need to blind researchers became widely recognized in the mid-20th century.[10]

Background

Bias

A number of biases are present when a study is insufficiently blinded. Patient-reported outcomes can be different if the patient is not blinded to their treatment.[11] Likewise, failure to blind researchers results in observer bias.[12] Unblinded data analysts may favor an analysis that supports their existing beliefs (confirmation bias). These biases are typically the result of subconscious influences, and are present even when study participants believe they are not influenced by them.[13]

Terminology

In medical research, the terms single-blind, double-blind and triple-blind are commonly used to describe blinding. These terms describe experiments in which (respectively) one, two, or three parties are blinded to some information. Most often, single-blind studies blind patients to their treatment allocation, double-blind studies blind both patients and researchers to treatment allocations, and triple-blind studies blind patients, researchers, and some other third party (such as a monitoring committee) to treatment allocations. However, the meaning of these terms can vary from study to study.[14]

CONSORT guidelines state that these terms should no longer be used because they are ambiguous. For instance, "double-blind" could mean that the data analysts and patients were blinded; or the patients and outcome assessors were blinded; or the patients and people offering the intervention were blinded, etc. The terms also fail to convey the information that was masked and the amount of unblinding that occurred. It is not sufficient to specify the number of parties that have been blinded. To describe an experiment's blinding, it is necessary to report who has been blinded to what information, and how well each blind succeeded.[15]

Unblinding

"Unblinding" occurs in a blinded experiment when information becomes available to one from whom it has been masked. In clinical studies, unblinding may occur unintentionally when a patient deduces their treatment group. Unblinding that occurs before the conclusion of an experiment is a source of bias. Some degree of premature unblinding is common in blinded experiments.[16] When a blind is imperfect, its success is judged on a spectrum with no blind (or complete failure of blinding) on one end, perfect blinding on the other, and poor or good blinding between. Thus, the common view of studies as blinded or unblinded is an example of a false dichotomy.[17]

Success of blinding is assessed by questioning study participants about information that has been masked to them (e.g. did the participant receive the drug or placebo?). In a perfectly blinded experiment, the responses should be consistent with no knowledge of the masked information. However, if unblinding has occurred, the responses will indicate the degree of unblinding. Since unblinding cannot be measured directly, but must be inferred from participants' responses, its measured value will depend on the nature of the questions asked. As a result, it is not possible to measure unblinding in a way that is completely objective. Nonetheless, it is still possible to make informed judgments about the quality of a blind. Poorly blinded studies rank above unblinded studies and below well-blinded studies in the hierarchy of evidence.[18]
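
Several quantitative blinding indices have been proposed for summarizing such questionnaire data (see the review cited above[18]). As an illustration only, the following Python sketch computes a simplified per-arm index in the spirit of Bang's blinding index; the function name and the guess counts are hypothetical, and a real assessment would follow the published formulas and their confidence intervals.

```python
def blinding_index(correct, incorrect, dont_know):
    """Simplified per-arm index in the spirit of Bang's blinding index.

    Returns a value in [-1, 1]:
       0 -> responses consistent with random guessing (blinding maintained)
      +1 -> every respondent guessed their assignment correctly (complete unblinding)
      -1 -> every respondent guessed the opposite assignment
    """
    total = correct + incorrect + dont_know
    if total == 0:
        raise ValueError("no respondents in this arm")
    return (correct - incorrect) / total


# Hypothetical end-of-trial guesses in a two-arm drug trial.
treatment_arm = blinding_index(correct=70, incorrect=10, dont_know=20)
placebo_arm = blinding_index(correct=35, incorrect=35, dont_know=30)

print(f"treatment arm: {treatment_arm:+.2f}")  # +0.60 -> substantial unblinding
print(f"placebo arm:   {placebo_arm:+.2f}")    # +0.00 -> consistent with blinding
```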

Post-study unblinding

Post-study unblinding is the release of masked data upon completion of a study. In clinical studies, post-study unblinding serves to inform subjects of their treatment allocation. Removing a blind upon completion of a study is never mandatory, but is typically performed as a courtesy to study participants. Unblinding that occurs after the conclusion of a study is not a source of bias, because data collection and analysis are both complete at this time.[19]

Premature unblinding

Premature unblinding is any unblinding that occurs before the conclusion of a study. In contrast with post-study unblinding, premature unblinding is a source of bias. A code-break procedure dictates when a subject may be unblinded prematurely and should allow for unblinding only in cases of emergency. Unblinding that occurs in compliance with a code-break procedure is strictly documented and reported.[20]

Premature unblinding may also occur when a participant infers from experimental conditions information that has been masked to them. A common cause of unblinding is the presence of side effects (or treatment effects) in the treatment group. In pharmacological trials, premature unblinding can be reduced with the use of an active placebo, which conceals treatment allocation by ensuring the presence of side effects in both groups.[21] However, side effects are not the only cause of unblinding; any perceptible difference between the treatment and control groups can contribute to premature unblinding.

A problem arises in the assessment of blinding because asking subjects to guess masked information may prompt them to try to infer that information. Researchers speculate that this may contribute to premature unblinding.[22] Furthermore, it has been reported that some subjects of clinical trials attempt to determine if they have received an active treatment by gathering information on social media and message boards. While researchers counsel patients not to use social media to discuss clinical trials, their accounts are not monitored. This behavior is believed to be a source of unblinding.[23] CONSORT standards and good clinical practice guidelines recommend the reporting of all premature unblinding.[24] [25] In practice, unintentional unblinding is rarely reported.

Significance

Bias due to poor blinding tends to favor the experimental group, resulting in inflated effect size and risk of false positives. Success or failure of blinding is rarely reported or measured; it is implicitly assumed that experiments reported as "blind" are truly blind. Critics have pointed out that without assessment and reporting, there is no way to know if a blind succeeded. This shortcoming is especially concerning given that even a small error in blinding can produce a statistically significant result in the absence of any real difference between test groups when a study is sufficiently powered (i.e. statistical significance is not robust to bias). As such, many statistically significant results in randomized controlled trials may be caused by error in blinding.[26] Some researchers have called for the mandatory assessment of blinding efficacy in clinical trials.
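
The claim that statistical significance is not robust to bias can be illustrated with a short simulation. The following Python sketch (using NumPy and SciPy; the effect size, bias, and sample size are arbitrary illustrative values, not drawn from any real trial) shows a heavily powered comparison reaching a very small p-value even though the true treatment effect is zero and the only difference between groups is a small reporting bias.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_per_arm = 5000     # a heavily powered (large) trial
true_effect = 0.0    # the treatment genuinely does nothing
bias = 0.1           # small shift from unblinded, expectation-driven outcome reporting,
                     # in units of the outcome's standard deviation

control = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
treatment = rng.normal(loc=true_effect + bias, scale=1.0, size=n_per_arm)

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"p = {p_value:.1e}")  # typically far below 0.05 despite a zero true effect
```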

Applications

In medicine

Blinding is considered essential in medicine,[27] but is often difficult to achieve. For example, it is difficult to compare surgical and non-surgical interventions in blind trials. In some cases, sham surgery may be necessary for the blinding process. A good clinical protocol ensures that blinding is as effective as possible within ethical and practical constraints.

Studies of blinded pharmacological trials across widely varying domains find evidence of high levels of unblinding. Unblinding has been shown to affect both patients and clinicians. This evidence challenges the common assumption that blinding is highly effective in pharmacological trials. Unblinding has also been documented in clinical trials outside of pharmacology.[28]

Pain

A 2018 meta-analysis found that assessment of blinding was reported in only 23 out of 408 randomized controlled trials for chronic pain (5.6%). Upon analysis of the pooled data, the study concluded that the overall quality of blinding was poor and that blinding was "not successful". Additionally, both pharmaceutical sponsorship and the presence of side effects were associated with lower rates of reporting assessment of blinding.[29]

Depression

Studies have found evidence of extensive unblinding in antidepressant trials: at least three-quarters of patients were able to correctly guess their treatment assignment.[30] Unblinding also occurs in clinicians.[31] Better blinding of patients and clinicians reduces effect size. Researchers concluded that unblinding inflates effect size in antidepressant trials.[32] [33] [34] Some researchers believe that antidepressants are not effective for the treatment of depression and only outperform placebos due to systematic error. These researchers argue that antidepressants are just active placebos.[35] [36]

Acupuncture

While the possibility of blinded trials on acupuncture is controversial, a 2003 review of 47 randomized controlled trials found no fewer than four methods of blinding patients to acupuncture treatment: 1) superficial needling of true acupuncture points, 2) use of acupuncture points which are not indicated for the condition being treated, 3) insertion of needles outside of true acupuncture points, and 4) the use of placebo needles which are designed not to penetrate the skin. The authors concluded that there was "no clear association between type of sham intervention used and the results of the trials."[37]

A 2015 study on acupuncture which used needles that did not penetrate the skin as a sham treatment found that 68% of patients and 83% of acupuncturists correctly identified their group allocation. The authors concluded that the blinding had failed, but that more advanced placebos may someday offer the possibility of well-blinded studies in acupuncture.[38]

In physics

It is standard practice in physics to perform blinded data analysis. After the analysis procedure is complete, the data may be unblinded. To prevent publication bias, researchers may agree in advance to publish the results of the analysis regardless of the outcome.
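
One blinding technique commonly described in the blind-analysis literature (see the Nature comment cited above[13]) is to add a secret, software-generated offset to the quantity under study, so that analysts cannot see how close their result is to the expected value until the analysis procedure is frozen. The Python sketch below is a minimal illustration with made-up numbers, not a description of any particular experiment's protocol.

```python
import numpy as np

# --- Blinding step: the offset is generated by software and never shown to analysts ---
secret_rng = np.random.default_rng(seed=20240101)  # seed kept out of the analysts' hands
hidden_offset = secret_rng.uniform(-1.0, 1.0)      # arbitrary blinding range

measurements = np.random.default_rng(7).normal(loc=0.42, scale=0.05, size=200)
blinded = measurements + hidden_offset

# --- Analysis step: cuts, fits, and systematic checks are finalized on blinded data ---
blinded_estimate = blinded.mean()
uncertainty = blinded.std(ddof=1) / np.sqrt(len(blinded))

# --- Unblinding step: performed only after the analysis is frozen ---
final_estimate = blinded_estimate - hidden_offset
print(f"result = {final_estimate:.3f} +/- {uncertainty:.3f}")
```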

In social sciences

Social science research is particularly prone to observer bias, so it is important in these fields to properly blind the researchers. In some cases, while blind experiments would be useful, they are impractical or unethical. Blinded data analysis can reduce bias, but is rarely used in social science research.[39]

In forensics

In a police photo lineup, an officer shows a group of photos to a witness and asks the witness to identify the individual who committed the crime. Since the officer is typically aware of who the suspect is, they may (subconsciously or consciously) influence the witness to choose the individual that they believe committed the crime. There is a growing movement in law enforcement to move to a blind procedure in which the officer who shows the photos to the witness does not know who the suspect is.[40] [41]

In music

See main article: Blind audition. Auditions for symphony orchestras take place behind a curtain so that the judges cannot see the performer. Blinding the judges to the gender of the performers has been shown to increase the hiring of women.[42] Blind tests can also be used to compare the quality of musical instruments.[43] [44]

Notes and References

  1. Bello S, Moustgaard H, Hróbjartsson A. "The risk of unblinding was infrequently and incompletely reported in 300 randomized clinical trial publications." Journal of Clinical Epidemiology. 2014;67(10):1059–1069. doi:10.1016/j.jclinepi.2014.05.007. PMID 24973822.
  2. "Oxford Centre for Evidence-based Medicine – Levels of Evidence (March 2009)". cebm.net. 11 June 2009. Archived from the original on 26 October 2017. Retrieved 2 May 2018.
  3. Fétis F-J (1868). Biographie Universelle des Musiciens et Bibliographie Générale de la Musique, Tome 1 (2nd ed.). Paris: Firmin Didot Frères, Fils, et Cie. p. 249.
  4. Dubourg G (1852). The Violin: Some Account of That Leading Instrument and its Most Eminent Professors... (4th ed.). London: Robert Cocks and Co. pp. 356–357.
  5. Stolberg M. "Inventing the randomized double-blind trial: the Nuremberg salt test of 1835." Journal of the Royal Society of Medicine. 2006;99(12):642–643. doi:10.1177/014107680609901216. PMID 17139070. PMC 1676327.
  6. Biographie des Doctor Friedrich Wilhelm von Hoven (1840). ISBN 1104040891.
  7. Bernard C (2008). Introduction à l'étude de la médecine expérimentale. Edited by François Dagognet. Paris: Flammarion (Champs). ISBN 978-2-08-121793-5.
  8. Daston L. "Scientific Error and the Ethos of Belief." Social Research. 2005;72(1):18. doi:10.1353/sor.2005.0016.
  9. Rivers WH, Webber HN. "The action of caffeine on the capacity for muscular work." The Journal of Physiology. 1907;36(1):33–47. doi:10.1113/jphysiol.1907.sp001215. PMID 16992882. PMC 1533733.
  10. Alder K (2006). "The History of Science, Or, an Oxymoronic Theory of Relativistic Objectivity." In Kramer LS, Maza SC (eds.), A Companion to Western Historical Thought. Blackwell Companions to History. Wiley-Blackwell. p. 307. ISBN 978-1-4051-4961-7. "Shortly after the start of the Cold War [...] double-blind reviews became the norm for conducting scientific medical research, as well as the means by which peers evaluated scholarship, both in science and in history."
  11. Hróbjartsson A, Emanuelsson F, Skou Thomsen AS, Hilden J, Brorson S. "Bias due to lack of patient blinding in clinical trials. A systematic review of trials randomizing patients to blind and nonblind sub-studies." International Journal of Epidemiology. 2014;43(4):1272–1283. doi:10.1093/ije/dyu115. PMID 24881045. PMC 4258786.
  12. Bello S, Krogsbøll LT, Gruber J, Zhao ZJ, Fischer D, Hróbjartsson A. "Lack of blinding of outcome assessors in animal model experiments implies risk of observer bias." Journal of Clinical Epidemiology. 2014;67(9):973–983. doi:10.1016/j.jclinepi.2014.04.008. PMID 24972762.
  13. MacCoun R, Perlmutter S. "Blind analysis: Hide results to seek the truth." Nature. 2015;526(7572):187–189. doi:10.1038/526187a. PMID 26450040.
  14. Schulz KF, Chalmers I, Altman DG. "The landscape and lexicon of blinding in randomized trials." Annals of Internal Medicine. 2002;136(3):254–259. doi:10.7326/0003-4819-136-3-200202050-00022. PMID 11827510.
  15. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, Elbourne D, Egger M, Altman DG. "CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials." BMJ. 2010;340:c869. doi:10.1136/bmj.c869. PMID 20332511. PMC 2844943.
  16. Bello S, Moustgaard H, Hróbjartsson A. "Unreported formal assessment of unblinding occurred in 4 of 10 randomized clinical trials, unreported loss of blinding in 1 of 10 trials." Journal of Clinical Epidemiology. 2017;81:42–50. doi:10.1016/j.jclinepi.2016.08.002. PMID 27555081.
  17. Schulz KF, Grimes DA. "Blinding in randomised trials: hiding who got what." The Lancet. 2002;359(9307):696–700. doi:10.1016/S0140-6736(02)07816-9. PMID 11879884.
  18. Kolahi J, Bang H, Park J. "Towards a proposal for assessment of blinding success in clinical trials: up-to-date review." Community Dentistry and Oral Epidemiology. 2009;37(6):477–484. doi:10.1111/j.1600-0528.2009.00494.x. PMID 19758415. PMC 3044082.
  19. Dinnett EM, Mungall MM, Kent JA, Ronald ES, McIntyre KE, Anderson E, Gaw A. "Unblinding of trial participants to their treatment allocation: lessons from the Prospective Study of Pravastatin in the Elderly at Risk (PROSPER)." Clinical Trials. 2005;2(3):254–259. doi:10.1191/1740774505cn089oa. PMID 16279148.
  20. Quittell LM. "The Scientific and Social Implications of Unblinding a Study Subject." The American Journal of Bioethics. 2018;18(10):71–73. doi:10.1080/15265161.2018.1513589. PMID 30339067.
  21. Double DB. "Placebo mania. Placebo controlled trials are needed to provide data on effectiveness of active treatment." BMJ. 1996;313(7063):1008–1009. doi:10.1136/bmj.313.7063.1008b. PMID 8892442. PMC 2352320.
  22. Rees JR, Wade TJ, Levy DA, Colford JM, Hilton JF. "Changes in beliefs identify unblinding in randomized controlled trials: a method to meet CONSORT guidelines." Contemporary Clinical Trials. 2005;26(1):25–37. doi:10.1016/j.cct.2004.11.020. PMID 15837450.
  23. Ledford H. "A question of control." Nature. Retrieved 24 April 2019.
  24. Moher D, Altman DG, Schulz KF. "CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials." BMJ. 2010;340:c332. doi:10.1136/bmj.c332. PMID 20332509. PMC 2844940.
  25. "E6(R2) Good Clinical Practice: Integrated Addendum to ICH E6(R1) Guidance for Industry". fda.gov. 5 April 2019. Retrieved 21 April 2019.
  26. Siegfried T. "Odds are, it's wrong: Science fails to face the shortcomings of statistics." Science News. 2010;177(7):26–29. doi:10.1002/scin.5591770721.
  27. "Oxford Centre for Evidence-based Medicine – Levels of Evidence (March 2009)". cebm.net. 11 June 2009. Archived from the original on 26 October 2017. Retrieved 2 May 2018.
  28. "An example of problems that arise from clinical trials and how to avoid them." The Pharmaceutical Journal. 31 July 2009;283:129–130. Retrieved 24 April 2019.
  29. Colagiuri B, Sharpe L, Scott A. "The Blind Leading the Not-So-Blind: A Meta-Analysis of Blinding in Pharmacological Trials for Chronic Pain." The Journal of Pain. September 2018;20(5):489–500. doi:10.1016/j.jpain.2018.09.002. PMID 30248448.
  30. Perlis RH, Ostacher M, Fava M, Nierenberg AA, Sachs GS, Rosenbaum JF. "Assuring that double-blind is blind." The American Journal of Psychiatry. 2010;167(3):250–252. doi:10.1176/appi.ajp.2009.09060820. PMID 20194487.
  31. White K, Kando J, Park T, Waternaux C, Brown WA. "Side effects and the 'blindability' of clinical drug trials." The American Journal of Psychiatry. 1992;149(12):1730–1731. doi:10.1176/ajp.149.12.1730. PMID 1443253.
  32. Moncrieff J, Wessely S, Hardy R. "Meta-analysis of trials comparing antidepressants with active placebos." British Journal of Psychiatry. 1998;172(3):227–231. doi:10.1192/bjp.172.3.227. PMID 9614471.
  33. Greenberg RP, Bornstein RF, Greenberg MD, Fisher S. "A meta-analysis of antidepressant outcome under 'blinder' conditions." Journal of Consulting and Clinical Psychology. 1992;60(5):664–669; discussion 670–677. doi:10.1037/0022-006X.60.5.664. PMID 1401382.
  34. Moncrieff J, Wessely S, Hardy R. "Active placebos versus antidepressants for depression." Cochrane Database of Systematic Reviews. 2004;(1):CD003012. doi:10.1002/14651858.CD003012.pub2. PMID 14974002.
  35. Ioannidis JP. "Effectiveness of antidepressants: an evidence myth constructed from a thousand randomized trials?" Philosophy, Ethics, and Humanities in Medicine. 2008;3:14. doi:10.1186/1747-5341-3-14. PMID 18505564. PMC 2412901.
  36. Kirsch I. "Antidepressants and the Placebo Effect." Zeitschrift für Psychologie. 2014;222(3):128–134. doi:10.1027/2151-2604/a000176. PMID 25279271. PMC 4172306.
  37. Dincer F, Linde K. "Sham interventions in randomized clinical trials of acupuncture—a review." Complementary Therapies in Medicine. 2003;11(4):235–242. doi:10.1016/S0965-2299(03)00124-9. PMID 15022656.
  38. Vase L, Baram S, Takakura N, Takayama M, Yajima H, Kawase A, Schuster L, Kaptchuk TJ, Schou S, Jensen TS, Zachariae R, Svensson P. "Can acupuncture treatment be double-blinded? An evaluation of double-blind acupuncture treatment of postoperative pain." PLOS ONE. 2015;10(3):e0119612. doi:10.1371/journal.pone.0119612. PMID 25747157. PMC 4352029.
  39. Sanders R. "'Blind analysis' could reduce bias in social science research." Berkeley News. 8 October 2015. Retrieved 29 August 2019.
  40. Dittmann M. "Accuracy and the accused: Psychologists work with law enforcement on research-based improvements to crime-suspect identification." Monitor on Psychology. July–August 2004;35(7):74.
  41. Koerner BI. "Under the Microscope." Legal Affairs. July–August 2002. Retrieved 2 May 2018.
  42. Miller CC. "Is Blind Hiring the Best Hiring?" The New York Times. 25 February 2016. Retrieved 26 April 2019.
  43. "Violinists can't tell the difference between Stradivarius violins and new ones." Discover Magazine. https://www.discovermagazine.com/the-sciences/violinists-cant-tell-the-difference-between-stradivarius-violins-and-new-ones
  44. Fritz C, et al. "Player preferences among new and old violins." Proceedings of the National Academy of Sciences. 2012;109(3):760–763. doi:10.1073/pnas.1114999109. PMID 22215592. PMC 3271912.