Total Information Awareness (TIA) was a mass detection program by the United States Information Awareness Office. It operated under this title from February to May 2003 before being renamed Terrorism Information Awareness.[1]
Based on the concept of predictive policing, TIA was meant to correlate detailed information about people in order to anticipate and prevent terrorist incidents before execution.[2] The program modeled specific information sets in the hunt for terrorists around the globe.[3] Admiral John Poindexter called it a "Manhattan Project for counter-terrorism".[4] According to Senator Ron Wyden, TIA was the "biggest surveillance program in the history of the United States".[5]
Congress defunded the Information Awareness Office in late 2003 after media reports criticized the government for attempting to establish "Total Information Awareness" over all citizens.[6] [7]
Although the program was formally suspended, other government agencies later adopted some of its software with only superficial changes. TIA's core architecture continued development under the code name "Basketball". According to a 2012 New York Times article, TIA's legacy was "quietly thriving" at the National Security Agency (NSA).[8]
TIA was intended to be a five-year research project by the Defense Advanced Research Projects Agency (DARPA). The goal was to integrate components from previous and new government intelligence and surveillance programs, including Genoa, Genoa II, Genisys, SSNA, EELD, WAE, TIDES, Communicator, HumanID and Bio-Surveillance, with data mining knowledge gleaned from the private sector to create a resource for the intelligence, counterintelligence, and law enforcement communities.[9] [10] These components consisted of information analysis, collaboration, decision-support tools, language translation, data-searching, pattern recognition, and privacy-protection technologies.[11]
TIA research included or planned to include the participation of nine government entities: INSCOM, NSA, DIA, CIA, CIFA, STRATCOM, SOCOM, JFCOM, and JWAC.[11] They were to be able to access TIA's programs through a series of dedicated nodes. INSCOM was to house TIA's hardware in Fort Belvoir, Virginia.[12] Companies contracted to work on TIA included the Science Applications International Corporation, Booz Allen Hamilton, Lockheed Martin Corporation, Schafer Corporation, SRS Technologies, Adroit Systems, CACI Dynamic Systems, ASI Systems International, and Syntek Technologies.
Universities enlisted to assist with research and development included Berkeley, Colorado State, Carnegie Mellon, Columbia, Cornell, Dallas, Georgia Tech, Maryland, MIT, and Southampton.
TIA's goal was to revolutionize the United States' ability to detect, classify and identify foreign terrorists and decipher their plans, thereby enabling the U.S. to take timely action to preempt and disrupt terrorist activity.
To that end, TIA was to create a counter-terrorism information system that:[13]
See main article: Project Genoa. Unlike the other program components, Genoa predated TIA and provided a basis for it.[14] Genoa's primary function was intelligence analysis to assist human analysts. It was designed to support both top-down and bottom-up approaches: a policymaker could hypothesize an attack and use Genoa to look for supporting evidence of it, or analysts could compile pieces of intelligence into a diagram from which the system would suggest possible outcomes. Human analysts could then modify the diagram to test various cases.
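The kind of evidence diagram described above can be illustrated with a toy sketch in Python. The hypothesis, evidence items, and weights below are entirely invented; Genoa's actual structured-argumentation tools were far richer, but the sketch shows how weighted evidence for and against a hypothesized event can be combined into a single measure of support that changes as analysts edit the diagram.

    # Toy sketch of hypothesis/evidence scoring in the spirit of Genoa's
    # structured-argumentation diagrams. All names, weights, and evidence
    # items are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class Evidence:
        description: str
        weight: float      # analyst-assigned relevance, 0..1
        supports: bool     # True if it supports the hypothesis

    @dataclass
    class Hypothesis:
        statement: str
        evidence: list = field(default_factory=list)

        def score(self) -> float:
            """Net support: weighted evidence for, minus weighted evidence against."""
            total = sum(e.weight if e.supports else -e.weight for e in self.evidence)
            return total / max(len(self.evidence), 1)

    h = Hypothesis("Group X is planning an attack on facility Y")
    h.evidence.append(Evidence("increase in encrypted traffic", 0.6, True))
    h.evidence.append(Evidence("purchase of dual-use equipment", 0.8, True))
    h.evidence.append(Evidence("key member reported abroad", 0.5, False))

    print(f"{h.statement!r}: net support {h.score():+.2f}")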
Genoa was independently commissioned in 1996 and completed in 2002 as scheduled.
See main article: Project Genoa II. While Genoa primarily focused on intelligence analysis, Genoa II aimed to provide means by which computers, software agents, policymakers, and field operatives could collaborate.[15]
Genisys aimed to develop technologies that would enable "ultra-large, all-source information repositories".[16] Vast amounts of information were to be collected and analyzed, and the database technology available at the time was insufficient for storing and organizing such enormous quantities of data. Genisys therefore aimed to develop techniques for virtual data aggregation that would support effective analysis across heterogeneous databases, as well as unstructured public data sources such as the World Wide Web. "Effective analysis across heterogeneous databases" means the ability to draw on databases designed to store different types of data, such as a database of criminal records, a phone-call database, and a foreign intelligence database. The Web is considered an "unstructured public data source" because it is publicly accessible and contains many different types of data (blogs, emails, records of visits to websites, etc.), all of which need to be analyzed and stored efficiently.[16]
Another goal was to develop "a large, distributed system architecture for managing the huge volume of raw data input, analysis results, and feedback, that will result in a simpler, more flexible data store that performs well and allows us to retain important data indefinitely".[16]
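As a rough illustration of what virtual data aggregation across heterogeneous sources means in practice, the sketch below federates a single lookup over three hypothetical sources (a relational criminal-records table, a list of call records, and unstructured web text) without copying them into one physical database. The schemas and records are invented; the published Genisys material does not describe its design at this level of detail.

    # Minimal sketch of virtual aggregation over heterogeneous sources.
    # The schemas and records are hypothetical; the point is a single query
    # interface over a relational table, a call-record list, and raw text.

    import sqlite3

    # Source 1: relational criminal-records database
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE records (name TEXT, offense TEXT)")
    db.execute("INSERT INTO records VALUES ('J. Doe', 'fraud')")

    # Source 2: structured, but differently shaped, call records
    calls = [{"caller": "J. Doe", "callee": "A. Smith", "minutes": 12}]

    # Source 3: unstructured public text (e.g., a web page)
    web = ["J. Doe posted travel plans to city Z last week."]

    def virtual_query(name: str) -> dict:
        """Federate one lookup across all three sources without merging them."""
        return {
            "criminal_records": db.execute(
                "SELECT offense FROM records WHERE name = ?", (name,)).fetchall(),
            "call_records": [c for c in calls if name in (c["caller"], c["callee"])],
            "web_mentions": [t for t in web if name in t],
        }

    print(virtual_query("J. Doe"))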
Scalable social network analysis (SSNA) aimed to develop techniques based on social network analysis to model the key characteristics of terrorist groups and discriminate them from other societal groups.[17]
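SSNA's actual methods were not published in detail, but the flavor of social network analysis can be shown with a small sketch using the networkx library: simple structural statistics such as density, clustering, and betweenness centrality are the kind of features by which one communication graph might be discriminated from another. The graphs below are hypothetical.

    # Illustrative social-network statistics on two hypothetical communication
    # graphs; edges represent observed contacts between (anonymized) actors.

    import networkx as nx

    # A small, tightly knit cell-like graph and a looser social circle.
    group_1 = nx.Graph([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d"), ("d", "a")])
    group_2 = nx.Graph([("p", "q"), ("q", "r"), ("r", "s"), ("s", "t"), ("t", "p"),
                        ("p", "u"), ("u", "v")])

    for name, g in [("graph 1", group_1), ("graph 2", group_2)]:
        print(name,
              f"density={nx.density(g):.2f}",
              f"avg_clustering={nx.average_clustering(g):.2f}",
              f"max_betweenness={max(nx.betweenness_centrality(g).values()):.2f}")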
Evidence extraction and link discovery (EELD) developed technologies and tools for automated discovery, extraction and linking of sparse evidence contained in large amounts of classified and unclassified data sources (such as phone call records from the NSA call database, internet histories, or bank records).[18]
EELD was designed to develop systems with the ability to extract data from multiple sources (e.g., text messages, social networking sites, financial records, and web pages). It was to develop the ability to detect patterns comprising multiple types of links between data items or communications (e.g., financial transactions, communications, travel, etc.).[18] It was to link items relating to potential "terrorist" groups and scenarios, and to learn patterns of different groups or scenarios in order to identify new organizations and emerging threats.[18]
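A toy sketch of link discovery is shown below: people appearing in hypothetical records are linked whenever they share an identifier such as a phone number or an account, and the resulting connected groups are reported for an analyst's attention. EELD's actual algorithms for extracting sparse evidence were far more sophisticated; the records and fields here are invented.

    # Toy link-discovery sketch: link people who share identifiers across
    # hypothetical record types, then report connected groups.

    from collections import defaultdict
    from itertools import combinations

    records = [
        {"person": "A", "phone": "555-0101", "account": "X1"},
        {"person": "B", "phone": "555-0101", "account": "X2"},
        {"person": "C", "phone": "555-0199", "account": "X2"},
        {"person": "D", "phone": "555-0404", "account": "X9"},
    ]

    # Two people are linked if any identifier field matches.
    links = defaultdict(set)
    for r1, r2 in combinations(records, 2):
        if r1["phone"] == r2["phone"] or r1["account"] == r2["account"]:
            links[r1["person"]].add(r2["person"])
            links[r2["person"]].add(r1["person"])

    def component(start):
        """Collect everyone reachable from `start` through shared identifiers."""
        stack, group = [start], set()
        while stack:
            p = stack.pop()
            if p in group:
                continue
            group.add(p)
            stack.extend(links[p] - group)
        return group

    seen, groups = set(), []
    for r in records:
        if r["person"] not in seen:
            g = component(r["person"])
            seen |= g
            groups.append(sorted(g))

    print(groups)   # [['A', 'B', 'C'], ['D']]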
Wargaming the asymmetric environment (WAE) focused on developing automated technology that could identify predictive indicators of terrorist activity or impending attacks by examining individual and group behavior in broad environmental context and the motivation of specific terrorists.[19]
Translingual information detection, extraction and summarization (TIDES) developed advanced language processing technology to enable English speakers to find and interpret critical information in multiple languages without requiring knowledge of those languages.[20]
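The following toy sketch conveys the cross-lingual retrieval idea only: an English query is mapped through a tiny, invented bilingual dictionary and used to rank foreign-language snippets that the searcher cannot read. TIDES itself relied on statistical translation and retrieval techniques well beyond this.

    # Toy cross-lingual retrieval: translate English query terms via a
    # (hypothetical) bilingual dictionary, then rank foreign-language snippets
    # by how many translated terms they contain.

    en_to_es = {"weapons": "armas", "shipment": "envío", "port": "puerto"}

    documents = [
        "el envío de armas llega al puerto el martes",
        "el mercado de frutas abre el lunes",
    ]

    def rank(query_en: str):
        terms = [en_to_es[w] for w in query_en.lower().split() if w in en_to_es]
        scored = [(sum(t in doc for t in terms), doc) for doc in documents]
        return sorted(scored, reverse=True)

    for score, doc in rank("weapons shipment port"):
        print(score, doc)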
Outside groups (such as universities, corporations, etc.) were invited to participate in the annual information retrieval, topic detection and tracking, automatic content extraction, and machine translation evaluations run by NIST.[20] Cornell University, Columbia University, and the University of California, Berkeley were given grants to work on TIDES.
Communicator was to develop "dialogue interaction" technology to enable warfighters to talk to computers, such that information would be accessible on the battlefield or in command centers without a keyboard-based interface. Communicator was to be wireless, mobile, and to function in a networked environment.[21]
The dialogue interaction software was to interpret the context of a dialogue to improve performance, and to automatically adapt to new topics so that conversation could be natural and efficient. Communicator emphasized task knowledge to compensate for natural-language effects and noisy environments. Unlike automated translation of natural language speech, which is much more complex because of an essentially unlimited vocabulary and grammar, Communicator dealt with task-specific problems so that constrained vocabularies could be used (the system needed to understand only language related to war). Research was also started on foreign-language computer interaction for use in coalition operations.[21]
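The advantage of a constrained, task-specific vocabulary can be seen in a minimal slot-filling sketch. The logistics "grammar" below is invented for illustration and is not Communicator's actual design, but it shows why a parser that only has to map a handful of task words to slots (and ask a follow-up question when one is missing) is far more tractable than open-domain speech understanding.

    # Minimal slot-filling parser for a constrained, task-specific vocabulary
    # (an invented logistics domain).

    ACTIONS = {"request", "cancel", "confirm"}
    ITEMS = {"fuel", "ammunition", "rations", "medevac"}
    UNITS = {"alpha", "bravo", "charlie"}

    def parse(utterance: str) -> dict:
        words = utterance.lower().replace(",", " ").split()
        slots = {
            "action": next((w for w in words if w in ACTIONS), None),
            "item": next((w for w in words if w in ITEMS), None),
            "unit": next((w for w in words if w in UNITS), None),
        }
        missing = [k for k, v in slots.items() if v is None]
        # A dialogue manager would ask a follow-up question for missing slots.
        slots["follow_up"] = f"please specify: {', '.join(missing)}" if missing else None
        return slots

    print(parse("Request rations for bravo"))
    print(parse("cancel the fuel order"))   # triggers a follow-up about the unit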
Live exercises were conducted involving small unit logistics operations with the United States Marines to test the technology in extreme environments.[21]
The human identification at a distance (HumanID) project developed automated biometric identification technologies to detect, recognize and identify humans at great distances for "force protection", crime prevention, and "homeland security/defense" purposes.[22]
The goals of HumanID were to:[22]
A number of universities assisted in designing HumanID. The Georgia Institute of Technology's College of Computing focused on gait recognition. Gait recognition was a key component of HumanID, because it could be employed on low-resolution video feeds and therefore help identify subjects at a distance.[23] The researchers planned to develop a system that recovered static body and stride parameters of subjects as they walked, and also investigated whether time-normalized joint-angle trajectories in the walking plane could be used to recognize gait. The university also worked on finding and tracking faces by expressions and speech.[24]
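The idea of time-normalized joint-angle trajectories can be sketched as follows: gait cycles of different lengths (synthetic knee-angle sequences here, not Georgia Tech's data or pipeline) are resampled onto a common time base and matched to enrolled subjects by distance.

    # Sketch of gait matching with time-normalized joint-angle trajectories:
    # each gait cycle is resampled to a fixed length and compared by distance.

    import numpy as np

    def normalize(cycle, samples=50):
        """Resample a joint-angle sequence onto a fixed time base (0..1)."""
        cycle = np.asarray(cycle, dtype=float)
        t_old = np.linspace(0.0, 1.0, len(cycle))
        t_new = np.linspace(0.0, 1.0, samples)
        return np.interp(t_new, t_old, cycle)

    def distance(a, b):
        return float(np.linalg.norm(normalize(a) - normalize(b)))

    # Synthetic knee-angle cycles (degrees) for two enrolled subjects.
    t = np.linspace(0, 2 * np.pi, 60)
    subject_a = 30 + 25 * np.sin(t)
    subject_b = 30 + 15 * np.sin(t + 0.6)

    # A new observation at a different frame rate (different sequence length).
    probe = 30 + 25 * np.sin(np.linspace(0, 2 * np.pi, 42) + 0.05)

    scores = {"A": distance(probe, subject_a), "B": distance(probe, subject_b)}
    print(min(scores, key=scores.get), scores)   # closest enrolled subject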
Carnegie Mellon University's Robotics Institute (part of the School of Computer Science) worked on dynamic face recognition. The research focused primarily on the extraction of body biometric features from video and identifying subjects from those features. To conduct its studies, the university created databases of synchronized multi-camera video sequences of body motion, human faces under a wide range of imaging conditions, AU-coded expression videos, and hyperspectral and polarimetric images of faces.[25] The video sequences of body motion data consisted of six separate viewpoints of 25 subjects walking on a treadmill. Four separate 11-second gaits were tested for each: slow walk, fast walk, inclined, and carrying a ball.[23]
The University of Maryland's Institute for Advanced Computer Studies' research focused on recognizing people at a distance by gait and face. Also to be used were infrared and five-degree-of-freedom cameras.[26] Tests included filming 38 male and 6 female subjects of different ethnicities and physical features walking along a T-shaped path from various angles.[27]
The University of Southampton's Department of Electronics and Computer Science was developing an "automatic gait recognition" system and was in charge of compiling a database to test it.[28] The University of Texas at Dallas was compiling a database to test facial systems. The data included a set of nine static pictures taken from different viewpoints, a video of each subject looking around a room, a video of the subject speaking, and one or more videos of the subject showing facial expressions.[29] Colorado State University developed multiple systems for identification via facial recognition.[30] Columbia University participated in implementing HumanID in poor weather.[25]
The bio-surveillance project was designed to predict and respond to bioterrorism by monitoring non-traditional data sources such as animal sentinels, behavioral indicators, and pre-diagnostic medical data. It would leverage existing disease models, identify early indicators of abnormal health, and mine existing databases to determine the most valuable early indicators of abnormal health conditions.
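TIA's bio-surveillance methods are not publicly documented, but the basic idea of flagging early indicators can be illustrated with a simple aberration detector over daily syndromic counts, in the spirit of elementary public-health surveillance techniques: a day is flagged when its count exceeds the mean of a sliding baseline window by several standard deviations. The counts below are synthetic.

    # Simple aberration detection over daily syndromic counts (synthetic data):
    # flag a day whose count exceeds the sliding-baseline mean by K std devs.

    import statistics

    daily_counts = [12, 9, 11, 10, 13, 12, 11, 10, 12, 27, 31, 14]  # hypothetical
    BASELINE, K = 7, 3.0

    for day in range(BASELINE, len(daily_counts)):
        window = daily_counts[day - BASELINE:day]
        mean = statistics.mean(window)
        std = statistics.stdev(window) or 1.0   # guard against a flat baseline
        threshold = mean + K * std
        if daily_counts[day] > threshold:
            print(f"day {day}: count {daily_counts[day]} exceeds threshold {threshold:.1f}")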
As a "virtual, centralized, grand database",[31] the scope of surveillance included credit card purchases, magazine subscriptions, web browsing histories, phone records, academic grades, bank deposits, gambling histories, passport applications, airline and railway tickets, driver's licenses, gun licenses, toll records, judicial records, and divorce records.[10]
Health and biological information TIA collected included drug prescriptions, medical records,[32] fingerprints, gait, face and iris data,[10] and DNA.[33]
TIA's Genisys component, in addition to integrating and organizing separate databases, was to run an internal "privacy protection program". This was intended to restrict analysts' access to irrelevant information on private U.S. citizens, enforce privacy laws and policies, and report misuses of data.[34] There were also plans for TIA to have an application that could "anonymize" data, so that information could be linked to an individual only by court order (especially for medical records gathered by the bio-surveillance project).[35] A set of audit logs was to be kept to track whether innocent Americans' communications were getting caught up in relevant data.[8]
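The two mechanisms described above, data that can be re-linked to an individual only by a separate key holder and an audit trail of analyst accesses, can be sketched as follows. This is an illustration of the concepts only, not the actual Genisys privacy-protection design, which was later abandoned; the key handling and log format are invented.

    # Sketch of (1) pseudonymizing identifiers with a keyed hash so records can
    # be re-linked to a person only by whoever holds the key (e.g., under court
    # order), and (2) logging every analyst access in an audit trail.

    import hmac, hashlib, json, time

    SECRET_KEY = b"held-by-a-separate-authority"   # hypothetical key escrow
    audit_log = []

    def pseudonymize(identifier: str) -> str:
        """Stable pseudonym; cannot be reversed without SECRET_KEY."""
        return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

    def query(analyst: str, pseudonym: str, reason: str) -> None:
        """Every lookup is recorded in an append-only audit trail."""
        audit_log.append({"time": time.time(), "analyst": analyst,
                          "pseudonym": pseudonym, "reason": reason})

    pid = pseudonymize("john.doe@example.org")
    query("analyst42", pid, "case 2003-117")
    print(pid)
    print(json.dumps(audit_log, indent=2))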
The term total information awareness was first coined at the 1999 annual DARPAtech conference in a presentation by the deputy director of the Office of Information Systems Management, Brian Sharkey. Sharkey applied the phrase to a conceptual method by which the government could sift through massive amounts of data becoming available via digitization and draw important conclusions.[36]
TIA was proposed as a program shortly after the September 11 attacks in 2001 by Rear Admiral John Poindexter.[37] A former national security adviser to President Ronald Reagan and a key player in the Iran–Contra affair, Poindexter was then working with Syntek Technologies, a company frequently contracted by the government for work on defense projects. TIA was officially commissioned during the 2002 fiscal year.[38] In January 2002 Poindexter was appointed Director of the newly created Information Awareness Office division of DARPA, which managed TIA's development.[39] The office temporarily operated out of the fourth floor of DARPA's headquarters while Poindexter looked for a place to permanently house TIA's researchers.[12] Soon afterward Project Genoa was completed and its research moved on to Genoa II.[40] [41]
Late that year, the Information Awareness Office awarded the Science Applications International Corporation (SAIC) a $19 million contract to develop the "Information Awareness Prototype System", the core architecture to integrate all of TIA's information extraction, analysis, and dissemination tools. The work was carried out through SAIC's consulting arm, Hicks & Associates, which employed many former Defense Department and military officials.
TIA's earliest version employed software called "Groove", which had been developed in 2000 by Ray Ozzie. Groove made it possible for analysts from many different agencies to share intelligence data instantly, and linked specialized programs that were designed to look for patterns of suspicious behavior.[42]
On 24 January 2003, the United States Senate voted to limit TIA by restricting its ability to gather information from emails and the commercial databases of health, financial and travel companies.[43] According to the Consolidated Appropriations Resolution, 2003, Pub. L. No. 108-7, Division M, § 111(b) passed in February, the Defense Department was given 90 days to compile a report laying out a schedule of TIA's development and the intended use of allotted funds or face a cutoff of support.[44]
The report arrived on May 20. It disclosed that the program's computer tools were still in their preliminary testing phase. Concerning the pattern recognition of transaction information, only synthetic data created by researchers was being processed. The report also conceded that a full prototype of TIA would not be ready until the 2007 fiscal year.[11] Also in May, Total Information Awareness was renamed Terrorism Information Awareness in an attempt to stem the flow of criticism on its information-gathering practices on average citizens.[45]
At some point in early 2003, the National Security Agency began installing access nodes on TIA's classified network.[4] The NSA then started running stacks of emails and intercepted communications through TIA's various programs.
Following a scandal in the Department of Defense involving a proposal to reward investors who predicted terrorist attacks, Poindexter resigned from office on 29 August.[46]
On September 30, 2003, Congress officially cut off funding for TIA and the Information Awareness Office, with the Senate voting unanimously against them,[47] because of the program's unpopularity with the general public and the media.[7] [48] Senators Ron Wyden and Byron Dorgan led the effort.[49]
Reports began to emerge in February 2006 that TIA's components had been transferred to the authority of the NSA. The funding was provided in a classified annex to the Department of Defense appropriations bill for the 2004 fiscal year. It was stipulated that the technologies were to be used only for military or foreign intelligence purposes against non-U.S. citizens.[50] Most of the original project goals and research findings were preserved, but the privacy-protection mechanisms were abandoned.[4] [8]
Genoa II, which focused on collaboration between machines and humans, was renamed "Topsail" and handed over to the NSA's Advanced Research and Development Activity, or ARDA (ARDA was later moved to the Director of National Intelligence's control as the Disruptive Technologies Office). Tools from the program were used in the war in Afghanistan and other parts of the War on Terror. In October 2005, SAIC signed a $3.7 million contract for work on Topsail.[36] In early 2006 a spokesman for the Air Force Research Laboratory said that Topsail was "in the process of being canceled due to lack of funds". When asked about Topsail in a Senate Intelligence Committee hearing that February, both National Intelligence Director John Negroponte and FBI Director Robert Mueller said they did not know the program's status. Negroponte's deputy, former NSA director Michael V. Hayden, said, "I'd like to answer in closed session."
The Information Awareness Prototype System was reclassified as "Basketball" and work on it continued by SAIC, supervised by ARDA. As late as September 2004, Basketball was fully funded by the government and being tested in a research center jointly run by ARDA and SAIC.[51]
Critics allege that the program could be abused by government authorities as part of their practice of mass surveillance in the United States. In an op-ed for The New York Times, William Safire called it "the supersnoop's dream: a Total Information Awareness about every U.S. citizen".[52]
Hans Mark, a former director of defense research and engineering who was then a professor at the University of Texas, called it a "dishonest misuse of DARPA".[53]
The American Civil Liberties Union launched a campaign to terminate TIA's implementation, claiming that it would "kill privacy in America" because "every aspect of our lives would be catalogued".[54] The San Francisco Chronicle criticized the program for "Fighting terror by terrifying U.S. citizens".[55]
Still, in 2013, then-Director of National Intelligence James Clapper lied to Congress about the government's massive data collection on US citizens and others.[56] Edward Snowden said that Clapper's lie made him lose hope of changing things through formal channels.[56]
In the 2008 British television series The Last Enemy, TIA is portrayed as a UK-based surveillance database that can be used to track and monitor anybody by putting all available government information in one place.