The first smart antennas were developed for military communications and intelligence gathering. The growth of cellular telephony in the 1980s attracted interest in commercial applications. The upgrade to digital radio technology in the mobile phone, indoor wireless network, and satellite broadcasting industries created new opportunities for smart antennas in the 1990s, culminating in the development of the MIMO (multiple-input multiple-output) technology used in 4G wireless networks.
The earliest success at tracking and controlling wireless signals relied on the antennas’ physical configuration and motion. The German inventor and physicist Karl F. Braun demonstrated beamforming for the first time in 1905. Braun created a phased array by positioning three antennas to reinforce radiation in one direction and diminish radiation in other directions.[1] Guglielmo Marconi experimented with directional antennas in 1906.[2] Directional antennas were rotated to detect and track enemy forces during World War I. The British admiralty used goniometers (radio compasses) to track the German fleet.[3] Edwin H. Armstrong invented the superheterodyne receiver to detect the high frequency noise generated by German warplanes’ ignition systems. The war ended before Armstrong's creation was ready to help direct antiaircraft fire.[4] Multiple elements (a fed dipole, a director, and reflectors) were assembled in the 1920s to create narrow transmit and receive antenna patterns. The Yagi-Uda array, better known as the Yagi antenna, is still widely used. Edmond Bruce and Harald T. Friis developed directional antennas for shortwave and microwave frequencies during the 1930s.
AT&T's decision to use microwave to carry inter-city telephone traffic led to the first large-scale commercial deployment of directional antennas (based on Friis’ horn reflector design[5]) in 1947. Directional antennas with alternating polarization enabled a single pair of frequencies to be reused over many consecutive hops. Microwave links are less expensive to deploy and maintain than coaxial cable links.[6]
The first mechanically scanned phased array radar (using a rotating Yagi antenna) was demonstrated in the 1930s.[7] The first electronically scanned radars used electromechanical devices (such as mechanical tuners or switches) to steer the antenna's beam.
Germany built the Wullenweber circular array for direction finding during the early years of World War II.[8] The Wullenweber could electronically scan the horizon 360° and determine the direction of any signal with reasonably good accuracy. Circular arrays were enhanced during the Cold War for eavesdropping purposes.[9] The American physicist Luis Walter Alvarez developed the first ground-controlled approach (GCA) system for landing aircraft in bad weather based on an electronically steered microwave phased array antenna. Alvarez tested and deployed the system in England in 1943.[10] Near the end of the war, Germany's GEMA built an early warning phased array radar system (the PESA Mammut 1) to detect targets up to 300 km away.[11] The polyrod fire control antenna was developed by Bell Laboratories in 1947 using cascaded phase shifters controlled by a rotary switch (spinning at ten revolutions per second) to create a continuous scanning beam.
A major push to meet national security response time and coverage requirements called for the development of an all-electronic steerable planar phased array radar.[12] The USSR's launch of Sputnik in 1957 suggested the need for ground-based satellite surveillance systems. Bendix Corporation responded by building its Electronically Steerable Array Radar (ESAR) in 1960. Enhanced beamforming techniques, such as multiple-beam Butler matrices, were developed for detecting and tracking objects in space.
The launch of Explorer 1 by the United States in 1958 suggested another application: space-based radar systems for detecting and tracking aircraft, ships, armored vehicles, ballistic missiles, and cruise missiles. These systems required the development of special techniques for canceling the radar clutter seen from space, nulling ground-based jammers, and compensating for Doppler shifts experienced by fast-moving satellites.
Space-based radar systems spurred the development of smaller, lighter, and less costly components: monolithic microwave integrated circuits (MMICs) for operation at frequencies in the 1 GHz to 30 GHz (microwave) and 30 GHz to 300 GHz (millimeter wave) ranges. The high power levels needed for detection are easier to achieve at microwave frequencies, while the narrow beams required for high resolution target tracking are best achieved at millimeter wave frequencies. Companies such as Texas Instruments, Raytheon, RCA, Westinghouse, General Electric, and Hughes Electronics participated in the early development of MMICs.
The first all-solid state radar was built for the United States Marines in 1972 by General Electric. It was a mobile 3-D radar system with its array mounted on a rotating platform for scanning the horizon. The first all-solid state phased array radar was the PAVE PAWS (precision acquisition vehicle entry - phased array warning system) UHF radar built in 1978 for the United States Air Force.[13] Phased array antennas are also used in radio astronomy. Karl Jansky, discoverer of the radio waves emanating from the Milky Way galaxy, used a Bruce array for experiments he conducted in 1931.[14] Modern phased array radio telescopes typically consist of a number of small, interconnected antennas such as the Murchison Widefield Array in Australia, constructed in 2012.[15]
In his 1959 patent, L. C. van Atta was the first to describe a retrodirective antenna, the van Atta array, which redirects (rather than reflects) a signal back in the direction from which it came.[16] The signal can be modulated by the redirecting host for purposes such as radio-frequency identification and traffic control (radar target echo enhancement).[17] The first adaptive array, the side-lobe canceller, was developed by Paul Howells and Sid Applebaum at General Electric in 1959 to suppress radar jamming signals.[18] Building on Norbert Wiener's work with analog filters, in 1960 Stanford University professor Bernard Widrow and PhD student Ted Hoff developed the least mean squares (LMS) algorithm, which automatically adjusts an antenna's directivity pattern to reinforce desired signals.[19] Ted Compton at Ohio State University developed an adaptive antenna technique for recovering direct sequence spread spectrum signals in the presence of narrowband co-channel interference. Compton's method, reported in 1974, requires knowledge only of the desired signal's pseudorandom noise (PN) code, not its direction of arrival.[20] In the late 1970s, Kesh Bakhru and Don Torrieri developed the maximin algorithm for recovering frequency hopping signals in the presence of narrowband co-channel interference.[21] A 1977 paper by Bell Labs researchers Douglas O. Reudink and Yu S. Yeh described the advantages of scanning spot beams for satellites. The authors estimated that scanning spot beams could save 20 dB in the link budget, which in turn could be used to reduce transmit power, increase communication capacity, and decrease the size of earth-station antennas.[22] Satellite spot beams are used today by direct broadcast satellite systems such as DirecTV and Dish Network.
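Widrow and Hoff's LMS update is simple enough to sketch in a few lines. The following is a minimal illustration of a complex LMS adaptive array in Python, assuming snapshots from the array elements and a known reference copy of the desired signal; the function name, step size, and data layout are illustrative assumptions, not details from the original work.

```python
import numpy as np

def lms_beamformer(X, d, mu=0.01):
    """Minimal complex LMS adaptive-array sketch.

    X  : (num_snapshots, num_elements) complex array snapshots
    d  : (num_snapshots,) known reference (desired) signal
    mu : step size trading convergence speed against stability
    """
    n_snap, n_elem = X.shape
    w = np.zeros(n_elem, dtype=complex)   # initial element weights
    for k in range(n_snap):
        x = X[k]
        y = np.vdot(w, x)                 # array output y = w^H x
        e = d[k] - y                      # error against the reference
        w += mu * np.conj(e) * x          # LMS stochastic-gradient update
    return w
```

Each update nudges the weights so the array output tracks the reference, which steers gain toward the desired signal and nulls toward interference.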
The Strategic Defense Initiative (SDI), proposed in 1983, became a major source of funding for technology research in several areas. The algorithms developed to track intercontinental ballistic missiles and direct x-ray laser weapons were particularly relevant to smart antennas.
See main article: Digital antenna array. These are antenna arrays with multi-channel digital beamforming, usually implemented using the fast Fourier transform (FFT).
The theory of digital antenna arrays (DAAs) emerged as a theory of multichannel estimation. Its origins go back to methods developed in the 1920s for determining the direction of arrival of radio signals with a pair of antennas, based on the phase difference or amplitudes of their output voltages. The direction of arrival of a single signal was thus assessed from pointer-type indicator readings or from Lissajous curves drawn by the beam on an oscilloscope screen.[23]
In the late 1940s this approach led to the theory of three-channel antenna analyzers, which separated the signal of an air target from its "antipode" reflected off the underlying surface by solving a system of equations formed from the complex voltages of the three-channel signal mix.[23]
By the end of the 1950s, the growing complexity of such radar problems, and the need for effective signal processing, made the use of electronic computers in this field inevitable. For example, in 1957 Ben S. Melton and Leslie F. Bailey published an influential article[24] proposing ways to implement the algebraic operations needed for signal processing with electronic circuits or their equivalents, with the aim of building a signal correlator on the basis of an analog computer.[23]
Three years later, in 1960, analog computing gave way to digital technology with the idea of using high-speed computers to solve direction-finding problems, initially to locate earthquake epicenters. B. A. Bolt was among the first to implement this idea in practice,[25] developing a program for the IBM 704 that performed seismic direction finding by the method of least squares.[23] Almost simultaneously, a similar approach was used by Flinn, a research fellow at the Australian National University.[23] [26]
Although in these experiments the interface between the sensors and the computer consisted of punched data-input cards, the approach was a decisive step toward the appearance of the DAA. What remained was to feed digital data from the sensing elements directly into the computer, eliminating the punch-card preparation stage and the operator as a superfluous link.[23]
In the former USSR, it was apparently B. I. Polikarpov who first drew attention to the potential of multichannel analyzers.[27] Polikarpov showed that, in principle, signal sources separated by an angular distance smaller than the aperture angle of the antenna system could be resolved.[23]
A specific solution to the problem of super-Rayleigh resolution of emission sources, however, was proposed only in 1962 by V. A. Varyukhin and M. A. Zablotskiy, who invented a corresponding method of measuring the directions to sources of an electromagnetic field.[28] The method processed the information contained in the distribution of complex voltage amplitudes at the outputs of amplitude, phase, and phase-amplitude multichannel analyzers, and made it possible to determine the angular coordinates of sources lying within the width of the main lobe of the receiving antenna system.
Varyukhin went on to develop a general theory of multichannel analyzers based on processing the information contained in the distribution of complex voltage amplitudes at the outputs of a digital antenna array. An important milestone in the recognition of his scientific results was the defense of his doctor of science dissertation in 1967.[23]
A distinctive feature of the theoretical foundations he developed is the maximum automation of the estimation of signal coordinates and parameters, whereas the alternative approach, based on generating the response function of a seismic multichannel analyzer and assessing its resolution visually, was only just emerging at that time.[23] That line of work includes the Capon method[29] and the later multiple signal classification (MUSIC), estimation of signal parameters via rotational invariance techniques (ESPRIT), and other projection methods of spectral estimation. It is difficult to judge the priority and relative importance of these alternative approaches in the development of a general theory of the DAA, given the classified nature of most of the work and the limited access to the scientific record of that time, even with the Internet. This brief historical survey only lifts the veil slightly on the actual course of the research; its main aim is to indicate the general niche and time frame in which the theory of multichannel analysis arose. A detailed presentation of the historical stages of development of DAA theory deserves standalone consideration.
A 1979 paper by Ralph O. Schmidt of Electromagnetic Systems Laboratory (ESL, a supplier of strategic reconnaissance systems) described the multiple signal classification (MUSIC) algorithm for estimating signals' angle of arrival.[30] Schmidt used a signal subspace method based on geometric modeling to derive a solution assuming the absence of noise, and then extended the method to provide a good approximation in the presence of noise.[31] Schmidt's paper became one of the most cited in the field, and his signal subspace method became the focus of ongoing research.
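In outline, MUSIC forms the sample covariance of the array snapshots, splits its eigenvectors into signal and noise subspaces, and scans candidate steering vectors: true arrival angles appear as sharp peaks where the projection onto the noise subspace vanishes. A minimal sketch for a uniform linear array, assuming half-wavelength element spacing and a known number of sources (both assumptions for illustration):

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum sketch for a uniform linear array.

    X          : (num_elements, num_snapshots) complex snapshots
    n_sources  : assumed number of impinging signals
    angles_deg : candidate arrival angles to scan (degrees)
    d          : element spacing in wavelengths
    """
    n_elem = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
    _, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = eigvecs[:, : n_elem - n_sources]      # noise-subspace eigenvectors
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * np.arange(n_elem) * np.sin(theta))
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum.append(1.0 / denom)           # peaks at source directions
    return np.array(spectrum)
```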
Jack Winters showed in 1984 that received signals from multiple antennas can be combined (using the optimum combining technique) to reduce co-channel interference in digital mobile networks.[32] Up to this time, antenna diversity had only been used to mitigate multipath fading. However, digital mobile networks would not become common for another ten years.
Richard Roy developed the Estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm in 1987. ESPRIT is a more efficient and higher resolution algorithm than MUSIC for estimating signals’ angle of arrival.[33] Brian Agee and John Treichler developed the constant modulus algorithm (CMA) for blind equalization of analog FM and telephone signals in 1983.[34] CMA relies on knowledge of the signal's waveform rather than channel state information or training signals. Agee extended the CMA to adaptive antenna arrays over the next few years.[35] [36]
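CMA exploits the fact that an undistorted FM or phase-modulated signal has a constant envelope: the filter taps are adjusted to penalize deviations of the output modulus from a target value, with no training sequence required. A minimal sketch of the common CMA(2,2) variant, assuming a unit target modulus and a simple tap-delay-line equalizer (the structure and parameters here are illustrative, not taken from Agee and Treichler's paper):

```python
import numpy as np

def cma_equalizer(x, n_taps=11, mu=1e-3):
    """Constant modulus algorithm sketch for blind equalization.

    x : received complex baseband samples of a constant-envelope signal
    """
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                        # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]               # tap-delay-line regressor
        y[k] = np.vdot(w, u)                    # equalizer output w^H u
        e = y[k] * (np.abs(y[k]) ** 2 - 1.0)    # CMA(2,2) error term
        w -= mu * np.conj(e) * u                # stochastic gradient step
    return w, y
```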
During the 1990s companies such as Applied Signal Technology (AST) developed airborne systems to intercept digital cellular phone calls and text messages for law enforcement and national security purposes. While an airborne system can eavesdrop on a mobile user anywhere in a cellular network, it will receive all mobile stations reusing the same user and control frequencies at roughly the same power level. Adaptive antenna beamforming and interference cancellation techniques are used to focus on the target user.[37] AST was acquired by Raytheon in 2011.[38]
In 1947, Douglas H. Ring wrote a Bell Laboratories internal memorandum describing a new way to increase the capacity of metropolitan radio networks.[39] Ring proposed dividing a city into geographic cells, using low power transmitters with omnidirectional antennas, and reusing frequencies in non-adjacent cells. Ring's cellular radio scheme did not become practical until the arrival of integrated circuits in the 1970s.
As the number of mobile phone subscribers grew in the 1980s and 1990s researchers investigated new ways to increase mobile phone network capacity. Directional antennas were used to divide cells into sectors. In 1989, Simon Swales at Bristol University in the United Kingdom proposed methods for increasing the number of simultaneous users on the same frequency. Receive signals can be distinguished based on differences in their direction-of-arrival at the cell site antenna array. Transmit signals can be aimed at the intended recipient using beamforming.[40] Soren Anderson in Sweden presented a similar scheme based on computer simulations the following year.[41] Richard Roy and Björn Ottersten at ArrayComm patented a space-division multiple access method for wireless communication systems in the early 1990s. This technology was employed in ArrayComm's IntelliCell product line.[42]
Richard Roy and French entrepreneur Arnaud Saffari founded ArrayComm in 1992 and recruited Marty Cooper, who led the Motorola group that developed the first portable cell phone, to head the company. ArrayComm's smart antennas were designed to increase the capacity of wireless networks employing time-division duplex (TDD) such as the PHS (Personal Handy-phone System) networks that were deployed throughout Asia.[43] Bell Labs researcher Douglas O. Reudink founded Metawave Communications, a maker of switched beam antennas for cellular telephone networks, in 1995. Metawave claimed that by focusing capacity on areas with the highest traffic it could boost cell capacity up to 75%. Though Metawave managed to sell switched beam antennas to at least one major carrier, the company went out of business in 2004.[44] In 1997, AT&T Wireless Group announced plans to offer fixed wireless service at speeds up to 512 kbit/s. Project Angel promised non-line of sight (NLOS) coverage using beamforming and orthogonal frequency-division multiplexing (OFDM). Service was launched in ten cities in 2000. However, by 2002 AT&T sold its fixed wireless service business to Netro Corp.[45]
Smart antenna research led to the development of 4G MIMO. Conventional smart antenna techniques (such as diversity and beamforming) deliver incremental gains in spectral efficiency. 4G MIMO exploits natural multipath propagation to multiply spectral efficiency.
Researchers studying the transmission of multiple signals over different wires in the same cable bundle helped create a theoretical foundation for 4G MIMO. Specifically, techniques for cancelling the effects of crosstalk using knowledge of the source signals were investigated. The “wireline MIMO” researchers included Lane H. Brandenburg and Aaron D. Wyner (1974),[46] Wim van Etten (1970s),[47] Jack Salz (1985),[48] and Alexandra Duel-Hallen (1992).[49] Though optimizing the transmission of multiple data streams over different wire pairs in the same bundle requires compensating for crosstalk, the transmission of multiple data streams over different wireless paths due to multipath propagation is a far greater challenge because the signals become mixed up in time, space, and frequency.
Greg Raleigh's 1996 paper was the first to propose a method for multiplying the capacity of point-to-point wireless links using multiple co-located antennas at each end of a link in the presence of multipath propagation. The paper provided a rigorous mathematical proof of MIMO capacity based on a precise channel model and identified OFDM as the most efficient air interface for use with MIMO. The paper was submitted to the IEEE in April 1996 and presented in November at the 1996 Global Communications Conference in London.[50] Raleigh also filed two patent applications for MIMO in August of the same year.
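The capacity result can be illustrated with the standard log-determinant expression for a point-to-point MIMO channel under equal power allocation across transmit antennas; this is the textbook simplification, not the precise channel model used in the paper:

```python
import numpy as np

def mimo_capacity(H, snr_linear):
    """MIMO capacity sketch: log2 det(I + (SNR / n_tx) * H H^H).

    H          : (n_rx, n_tx) complex channel matrix
    snr_linear : total receive SNR on a linear scale (not dB)
    Returns capacity in bit/s/Hz.
    """
    n_rx, n_tx = H.shape
    G = np.eye(n_rx) + (snr_linear / n_tx) * (H @ H.conj().T)
    _, logdet = np.linalg.slogdet(G)   # numerically stable log-determinant
    return logdet / np.log(2)          # convert natural log to bits
```

With a rich-scattering (full-rank) H, capacity grows roughly linearly with min(n_rx, n_tx), which is the multiplying effect MIMO exploits.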
Raleigh discovered that multipath propagation could be exploited to multiply link capacity after developing an improved channel model that showed how multipath propagation affects signal waveforms. The model took into account factors including radio propagation geometry (natural and man-made objects serving as “local reflectors” and “dominant reflectors”), antenna array steering, angle of arrival, and delay spread.[51] Bell Labs researcher Gerard J. Foschini’s paper submitted in September 1996 and published in October of the same year also theorized that MIMO could be used to significantly increase the capacity of point-to-point wireless links.[52] Bell Labs demonstrated a prototype MIMO system based on its BLAST (Bell Laboratories Layered Space-Time) technology in late 1998.[53] Space–time block code (also known as the Alamouti code) was developed by Siavash Alamouti and is widely used in MIMO-OFDM systems. Alamouti's 1998 paper showed that the benefits of receive diversity can also be achieved using a combination of transmit diversity and space-time block codes.[54] A key advantage of transmit diversity is that it does not require multiple antennas and RF chains in handsets.
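The Alamouti scheme is compact enough to show directly: two symbols are transmitted from two antennas over two symbol periods, and a single receive antenna recovers both by linear combining, assuming the channel gains are known at the receiver and constant over the two periods:

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Alamouti space-time block code: rows = antennas, columns = periods."""
    return np.array([[s1, -np.conj(s2)],    # antenna 1: s1, then -s2*
                     [s2,  np.conj(s1)]])   # antenna 2: s2, then  s1*

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at one receive antenna.

    r1, r2 : samples received in periods 1 and 2
    h1, h2 : flat-fading gains from the two transmit antennas
    """
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    gain = abs(h1) ** 2 + abs(h2) ** 2          # two-branch diversity gain
    return s1_hat / gain, s2_hat / gain
```

The combiner output for each symbol carries the factor |h1|² + |h2|², the same diversity order as two-branch receive combining, which is the paper's central point.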
See main article: OFDM. OFDM emerged in the 1950s when engineers at Collins Radio Company found that a series of non-contiguous sub-channels is less vulnerable to inter-symbol interference (ISI).[55] OFDM was studied more systematically by Robert W. Chang in 1966.[56] Chang used Fourier transforms to ensure orthogonality. Sidney Darlington proposed use of the discrete Fourier transform (DFT) in 1970.[55] Stephen B. Weinstein and Paul M. Ebert used the DFT to perform baseband modulation and demodulation in 1971.[56] Dial-up modems developed by Gandalf Technologies and Telebit in the 1970s and 1980s used OFDM to achieve higher speeds.[57] Amati Communications Corp. used its discrete multi-tone (DMT) form of OFDM to transmit data at higher speeds over phone lines also carrying phone calls in digital subscriber line (DSL) applications.[58] OFDM is part of the digital audio broadcasting (DAB)[59] and digital video broadcasting (DVB)[60] standards developed in Europe. OFDM is also used in the 802.11a[61] and 802.11g[62] wireless LAN standards.
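The DFT-based scheme Weinstein and Ebert described amounts to an inverse FFT at the transmitter and an FFT at the receiver. A minimal sketch, including the cyclic prefix that later systems added to absorb inter-symbol interference (the prefix and its length here are illustrative, and postdate the 1971 paper):

```python
import numpy as np

def ofdm_modulate(symbols, cp_len=16):
    """OFDM baseband modulation: inverse DFT plus cyclic prefix.

    symbols : one complex (e.g. QAM) symbol per subcarrier
    cp_len  : cyclic-prefix length, assumed to exceed the channel delay spread
    """
    time_signal = np.fft.ifft(symbols)                  # orthogonal subcarriers
    return np.concatenate([time_signal[-cp_len:], time_signal])

def ofdm_demodulate(samples, n_subcarriers, cp_len=16):
    """Discard the cyclic prefix and recover subcarrier symbols with a DFT."""
    return np.fft.fft(samples[cp_len:cp_len + n_subcarriers])
```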
Greg Raleigh, V. K. Jones, and Michael Pollack founded Clarity Wireless in 1996. The company built a prototype MIMO-OFDM fixed wireless link running 100 Mbit/s in 20 MHz of spectrum in the 5.8 GHz band, and demonstrated error-free operation over six miles with one watt of transmit power.[63] Cisco Systems acquired Clarity Wireless in 1998 for its non-line of sight, vector OFDM (VOFDM) technology.[64] The Broadband Wireless Industry Forum (BWIF) was created in 1999 to develop a VOFDM standard.[65] Arogyaswami Paulraj founded Iospan Wireless in late 1998 to develop MIMO-OFDM products. Iospan was acquired by Intel in 2003. Neither Clarity Wireless nor Iospan Wireless shipped MIMO-OFDM products before being acquired.[66]
Greg Raleigh and V. K. Jones founded Airgo Networks in 2001 to develop MIMO-OFDM chipsets for wireless LANs. In 2004, Airgo became the first company to ship MIMO-OFDM products.[67] Qualcomm acquired Airgo Networks in late 2006.[68] Surendra Babu Mandava and Arogyaswami Paulraj founded Beceem Communications in 2004 to produce MIMO-OFDM chipsets for WiMAX. The company was acquired by Broadcom in 2010.[69]

The Institute of Electrical and Electronics Engineers (IEEE) created a task group in late 2003 to develop a wireless LAN standard delivering at least 100 Mbit/s of user data throughput. There were two major competing proposals: TGn Sync was backed by companies including Intel and Philips, and WWiSE was supported by companies including Airgo Networks, Broadcom, and Texas Instruments. Both groups agreed that the 802.11n standard would be based on MIMO-OFDM with 20 MHz and 40 MHz channel options.[70] TGn Sync, WWiSE, and a third proposal (MITMOT, backed by Motorola and Mitsubishi) were merged to create what was called the Joint Proposal.[71] The final 802.11n standard supported speeds up to 600 Mbit/s (using four simultaneous data streams) and was published in late 2009.[72]

WiMAX was developed as an alternative to cellular standards, is based on the 802.16e standard, and uses MIMO-OFDM to deliver speeds up to 138 Mbit/s. The more advanced 802.16m standard enabled download speeds up to 1 Gbit/s.[73] A nationwide WiMAX network was built in the United States by Clearwire, a subsidiary of Sprint-Nextel, covering 130 million pops by mid-2012.[74] Clearwire subsequently announced plans to deploy LTE (the cellular 4G standard) covering 31 cities by mid-2013.[75]

The first 4G cellular standard was proposed by NTT DoCoMo in 2004.[76] Long term evolution (LTE) is based on MIMO-OFDM and continues to be developed by the 3rd Generation Partnership Project (3GPP). LTE specifies downlink rates up to 300 Mbit/s, uplink rates up to 75 Mbit/s, and quality of service parameters such as low latency.[77] LTE Advanced adds support for picocells, femtocells, and multi-carrier channels up to 100 MHz wide. LTE has been embraced by both GSM/UMTS and CDMA operators.[78]
The first LTE services were launched in Oslo and Stockholm by TeliaSonera in 2009.[79] Deployment is most advanced in the United States, where all four Tier 1 operators have built or are constructing nationwide LTE networks. More than 222 LTE networks are currently operational in 83 countries, with approximately 126 million connections (devices).[80]
The 802.11ac wireless LAN standard was proposed to deliver speeds of 1 Gbit/s and faster. Development of the specification began in 2011 and is expected to be completed by 2014. 802.11ac uses the 5 GHz band, defines channels up to 160 MHz wide, supports up to 8 simultaneous MIMO data streams, and delivers raw data rates up to nearly 7 Gbit/s.[81] A number of products based on 802.11ac draft specifications are now available.
Fifth generation (5G) mobile network concepts are in the exploratory stage. Commercialization is expected by the early 2020s. In March 2013, NTT DoCoMo tested a 10 Gbit/s uplink using 400 MHz in the 11 GHz band. In May 2013, Samsung announced that it is experimenting in the 28 GHz band using base stations with up to 64 antennas and has achieved 1 Gbit/s at distances up to 2 kilometers.[82] Samsung claims the technology could deliver tens of Gbit/s under favorable conditions.[83] Research papers suggest that 5G networks are likely to consist of small distributed cells operating at frequencies up to 90 GHz using “massive MIMO.” According to Jakob Hoydis of Bell Laboratories, Alcatel-Lucent, Germany, “Network densification is the only solution to the capacity crunch.” This could involve two-tier networks (“HetNets”) using existing cellular base stations to ensure broad coverage and high mobility and interspersed small cells for capacity and indoor service. Massive MIMO would also be employed in high-speed backhaul links.[84]