Stanford Research Institute (now SRI International) in Menlo Park, California, carried out research on various phenomena characterized by the term parapsychology from 1972 until 1991. Early studies suggesting that phenomena such as remote viewing and psychokinesis could be scientifically studied were published in mainstream journals such as Proceedings of the IEEE and Nature. This attracted the sponsorship of groups such as NASA (by way of the Jet Propulsion Laboratory) and the Central Intelligence Agency.
In 1991, the research program was transferred to SAIC as part of the Stargate Project.[1] While the SRI projects were classified at the time, the research materials were subsequently made public in 1995, and a summary of the early history of SRI and the origins of Stargate was published the following year.[2] Scientists and skeptical writers would later find serious flaws in the methodology used at SRI, leaving the work largely discredited.
Harold Puthoff and Russell Targ were the most widely known of the researchers involved with SRI. Originally known for their work with lasers,[3] the two centered their parapsychology research on the phenomena of remote viewing and psychokinesis.
Early experiments in remote viewing involved one subject in the laboratory, the "percipient", attempting to draw or describe a scene, known as the "target", observed by a "sender" from a remote location outside of the laboratory. Protocol stipulated that an independent panel of judges was to determine how close each percipient's description or drawing was to the actual target. Reports that the Soviet Union had successfully used technological devices to augment such psychic communication[4] moved the United States Central Intelligence Agency to attempt to keep pace with its Cold War rival.[5] Puthoff and Targ went on to explore the effect that devices such as a Faraday cage and a magnetometer would have on the accuracy of the images received by their percipients.[6]
Support for the study of psychokinesis came from NASA on a contract administered by JPL in the early 1970s. The protocol called for an automated system that would display images of random objects to an individual. Before the image was displayed, the subject would predict which object would appear and then attempt to bias the outcome in favor of the chosen object. However, these protocols were not consistently observed, and support was discontinued.[7] Other experiments with psychokinesis included attempts by Ingo Swann to psychically influence the readouts of a magnetometer.[8][9]
New York artist Ingo Swann met with Puthoff and Targ in 1972 and participated in their remote viewing experiments.[10] In June of that same year, Puthoff and Targ took Swann to a large magnetometer to see what changes Swann could make in the readouts of the machine. While the readouts did show some fluctuations, there was no evidence that these were due to any effort on Swann's part.[11] Nevertheless, Puthoff and Targ announced to a gathering in Geneva, Switzerland, that they had definitively established psychokinesis as a real phenomenon.[12] The builder of the machine, who had been present during Swann's visit, would later report that, while there had been fluctuations, these were in no way unexpected or outside normal parameters.[13]
Uri Geller began work with SRI in the early 1970s and was the primary focus of Puthoff and Targ's 1974 article in the journal Nature. This article described numerous remote viewing trials undertaken by Geller and the extraordinary results obtained during the six weeks he spent at the laboratories.[14][15] Geller quickly gained notoriety for his apparent gifts of remote viewing and psychokinesis, the latter usually taking the form of bending or otherwise deforming metal objects.
This attracted other scientists to SRI to see whether his abilities lived up to the claims. One of these researchers was skeptic Ray Hyman, who has written extensively on both Geller and the field of parapsychological research. He summed up his observations in an address given at Simon Fraser University in 2007: "It turned out that no one, none of the scientists there had ever witnessed Geller bending anything without getting his hands on it first. And typically, from what I could figure out, he not only had his hands on it but he was able to disappear into the bathroom for a while ... They simply took his word for it."[16] It seemed that no one except Puthoff and Targ had actually witnessed Geller successfully exhibiting his abilities under controlled conditions; Geller's reputation rested largely on hearsay rather than direct observation.[17] Notably, Geller also refused to attempt many of the tests of clairvoyance, and on others he performed no better than chance.[18] Magician and paranormal investigator James Randi, who viewed some of the video recordings of Geller's time in the lab, attributed Geller's successes to "simple tricks". Even Edgar Mitchell, who was present for the experiments and was a supporter of Geller, noted that Puthoff and Targ were sloppy in their research.
Geller would go on to worldwide fame for performing acts such as spoon bending and apportation for both television and live audiences,[19] at times offering Puthoff and Targ's evaluation as substantiation of his abilities.[20]
Published in 1974 in the journal Nature, the article "Information transmission under conditions of sensory shielding"[15] had been circulating among scientific journals since 1972.[21] Although it had been rejected multiple times by other journals, the editor of Nature accepted the paper simply as an example of the type of work then being done in the field of parapsychology. Far from endorsing the conclusions reached by the SRI researchers, the editor stated plainly in a lengthy opinion piece that ran at the beginning of the same issue: "Publishing in a scientific journal is not a process of receiving a seal of approval from the establishment".[22][23] The editor then enumerated the objections to publication voiced by the referees, which included the lack of substantive evidence, problematic data collection, and weak statistical calculations and relationships, among others.[22][23] As to why the paper was published, several reasons were provided, including the fact that "the paper is presented as a scientific document by two qualified scientists, writing from a major research establishment apparently with the unqualified backing of the research institute itself" and that the paper "would allow parapsychologists, and all other scientists interested in researching this arguable field, to gauge the quality of the Stanford research and assess how much it is contributing to parapsychology", as well as noting that readers of Nature would expect the publication of "high-risk" papers.
The paper was problematic even among Puthoff and Targ's colleagues at SRI. Two other scientists also worked on tests that involved Geller and other remote viewing subjects. Charles Rebert, an expert on electroencephalography (EEG), and Leon Otis, a psychologist, adhered much more strictly to rigorous scientific methods during the tests with which they were associated. Rebert and Otis went so far as to document their objections to what they termed "fraudulent and slipshod" work and to demand that any experiments they had been involved in be stricken from the paper before publication.[24][25]
Despite the editorial disclaimers published in the same issue as Puthoff and Targ's paper, their most famous test subject, Uri Geller, continues to tout the publication of these experiments in the respected journal as evidence of his claims of psychic powers.[26]
Various attempts to replicate the remote viewing findings were carried out from the mid-1970s until the 1990s. Several of these follow-up studies, which involved viewing in group settings, reported some limited success. They included the use of face-to-face groups[27][28] and remotely linked groups using computer conferencing.[29]
One well-documented attempt at replication of the remote viewing experiments was conducted by Ray Hyman and James McClenon in 1980. Hyman and McClenon were interested not only in remote viewing as a phenomenon in itself but also in how the methods used by the researchers could affect the outcomes of the trials. This study found no evidence for the efficacy of remote viewing. It did, however, highlight methods and practices, both incidental and by design, that had the potential to produce false positives.[30]
The various debates in the mainstream scientific literature prompted the editors of Proceedings of the IEEE to invite Robert Jahn, then Dean of the School of Engineering at Princeton University, to write a comprehensive review of psychic phenomena from an engineering perspective. His paper,[31] published in February 1982, includes numerous references to remote viewing replication studies at the time. Subsequently, flaws and mistakes in Jahn's reasoning were exposed by Ray Hyman in a critical appraisal published several years later in the same journal.[32]
Descriptions of a large number of psychic studies and their results were published in March 1976 in the journal Proceedings of the IEEE.[33] Together with the earlier papers, this provoked intense scrutiny in the mainstream scientific literature. Numerous problems were identified in the overall design of the remote viewing studies, with flaws noted in all three of the remote viewing steps (target selection, target viewing, and results judging). A particular problem was the failure to follow the standard procedures used in experimental psychology.[34]
Several external researchers expressed concerns about the reliability of the judging process. Independent examination of some of the sketches and transcripts from the viewing process revealed flaws in the original procedures and analyses; in particular, it was noted that sensory cues had been available to the judges.[35] A lengthy exchange ensued, with the external researchers finally concluding that the failure of Puthoff and Targ to address their concerns meant that the claim of remote viewing "can no longer be regarded as falling within the scientific domain".[36][37] Procedural problems and researcher conflicts of interest in the psychokinesis experiments were noted by science writer Martin Gardner in a detailed analysis of the NASA final report.[38] Sloppy procedures in the conduct of the EEG study were also reported by a visiting observer during another series of exchanges in the scientific literature.[39]
In his book Flim-Flam!, James Randi presented a detailed criticism of the methods employed by Puthoff and Targ,[12] describing peepholes through walls, overly helpful laboratory assistants, and incautious conversations between researchers as common occurrences in their laboratories. Randi also contacted the builder of the magnetometer used in the Swann experiments and established that the phenomena claimed as psychokinetic were no more than the normal fluctuations of the machine.[40]
Ray Hyman and James McClenon's 1980 replication study identified many of the same methodological problems as James Randi had, particularly researchers giving subjects in remote viewing trials verbal cues that hinted at what the target images were. Although this was a small study with only eight participants, Hyman was particularly interested in how cuing from researchers affected both the subjects' answers during the trial and their attitudes toward psychic phenomena at the end of the trial. After reviewing the literature generated by researchers at SRI and conducting his own replication study, Hyman summed up his findings: "The bottom line here is that there is no scientifically convincing case for remote viewing."[41]
Publication in scientific journals is often viewed by both the scientific community and the public at large as a mark of legitimacy for researchers. Proponents of Puthoff and Targ claim 28 published papers, 15 of which showed positive results. An in-depth review of these papers showed that only 13 of the 28 were published under commonly accepted standards of peer review. Of these 13, nine showed positive results. Three of these nine, however, were "retrospective experiments", meaning that they were "experiments not specifically planned in advance, but apparently reconstructed from separate trials".[42] These retrospective experiments appeared to suffer from the sharpshooter fallacy, the selection of the target after the answers have been given. Of the remaining six studies, once inappropriate statistical analyses were accounted for, only two were found to show actual statistical significance. Those two studies have yet to be fully replicated.[43]
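The statistical risk posed by such retrospective reconstruction can be illustrated with a brief simulation (a hypothetical sketch for illustration only; the target pool, number of candidate pairings, and hit criterion are invented and are not drawn from the SRI data): when the target is fixed before the response is given, matches occur only at chance, but when a response can be paired after the fact with whichever candidate target fits best, the apparent hit rate rises well above chance.

```python
import random

# Illustrative simulation of the "sharpshooter" problem described above.
# All parameters here are hypothetical and serve only to show how
# post-hoc target selection inflates apparent hit rates.

random.seed(0)
POOL = ["windmill", "bridge", "pool", "church", "pier", "garden"]
TRIALS = 10_000

prospective_hits = 0    # target fixed before the response is given
retrospective_hits = 0  # "target" chosen from candidates after seeing the response

for _ in range(TRIALS):
    response = random.choice(POOL)  # the percipient's guess

    # Prospective protocol: the target was selected in advance.
    target = random.choice(POOL)
    if response == target:
        prospective_hits += 1

    # Retrospective reconstruction: the analyst may pair the response
    # with whichever of several candidate targets happens to match it.
    candidates = [random.choice(POOL) for _ in range(3)]
    if response in candidates:
        retrospective_hits += 1

print(f"chance level:           {1 / len(POOL):.3f}")
print(f"prospective hit rate:   {prospective_hits / TRIALS:.3f}")
print(f"retrospective hit rate: {retrospective_hits / TRIALS:.3f}")
```

In this toy setup the prospective hit rate stays near the chance level of one in six, while the retrospective pairing pushes the apparent hit rate to roughly forty percent, even though nothing but random guessing is taking place.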