Time interleaved (TI) ADCs are Analog-to-Digital Converters (ADCs) that involve M converters working in parallel.[1] Each of the M converters is referred to as a sub-ADC, channel, or slice in the literature. The time interleaving technique, akin to multithreading in computing, uses multiple converters in parallel to sample the input signal at staggered intervals, increasing the overall sampling rate and improving performance without overburdening any individual converter.
The concept of time interleaving can be traced back to the 1960s. One of the earliest mentions of using multiple ADCs to increase sampling rates appeared in the work of Bernard M. Oliver and Claude E. Shannon. Their pioneering work on communication theory and sampling laid the groundwork for the theoretical basis of time interleaving. However, practical implementations were limited by the technology of the time.
In the 1980s, significant advancements were made: W. C. Black and D. A. Hodges from the University of California, Berkeley successfully implemented the first prototype of a time interleaved ADC. In particular, they designed a 4-way interleaved converter working at 2.5 MSample/s. Each slice of the converter was a 7-stage SAR pipeline ADC running at 625 kSample/s. An ENOB of 6.2 was measured for the proposed converter with a 100 kHz input signal. The work was presented at ISSCC 1980, and the paper focused on the practical challenges of implementing TI ADCs, including the synchronization and calibration of multiple channels to reduce mismatches.
In 1987, Ken Poulton and other researchers at HP Labs developed the first product based on time interleaved ADCs: the HP 54111D digital oscilloscope.
In the 1990s, the TI ADC technology saw further advancements driven by the increasing demand for high-speed data conversion in telecommunications and other fields. A notable project during this period was the development of high-speed ADCs for digital oscilloscopes by Tektronix. Engineers at Tektronix implemented TI ADCs to achieve the high sampling rates necessary for capturing fast transient signals in test and measurement equipment. As a result of this work, the Tektronix TDS350, a two-channel, 200 MHz, 1 GSample/s digital storage scope, was commercialized in 1991.[2]
By the late 1990s, TI ADCs had become commercially viable. One of the key projects that showcased the potential of TI ADCs was the development of the GSM (Global System for Mobile Communications) standard, where high-speed ADCs were essential for digital signal processing in mobile phones. Companies like Analog Devices and Texas Instruments began to offer TI ADCs as standard products, enabling widespread adoption in various applications.
The 21st century has seen continued innovation in TI ADC technology. Researchers and engineers have focused on further improving the performance and integration of TI ADCs to meet the growing demands of digital systems. Key figures in this era include Boris Murmann and his colleagues at Stanford University, who have contributed to the development of advanced calibration techniques and low-power design methods for TI ADCs.
Today, TI ADCs are used in a wide range of applications, from 5G telecommunications to high-resolution medical imaging. The future of TI ADCs looks promising, with ongoing research focusing on further improving their performance and expanding their application areas. Emerging technologies such as autonomous vehicles, advanced radar systems, and artificial intelligence-driven signal processing will continue to drive the demand for high-speed, high-resolution ADCs.
In a time interleaved system, the conversion time required by each sub-ADC is equal to $T_c$, while the overall converter delivers a new output every sampling period $T_s = T_c/M$.
To illustrate this concept, let us delve into the conversion process of a TI ADC with reference to the first figure of this paragraph. The figure shows the timing diagram of a data converter that employs four interleaved channels. The input signal $V_{in}$ is sampled at the clock frequency $f_{clk} = 1/T_s$, so a new sample is acquired every sampling period $T_s$. In a TI ADC, each channel is allotted a conversion time $T_c = M \cdot T_s$: the first channel converts the first sample, the second channel the second sample, and so on, until the $M$-th channel converts the $M$-th sample, after which the cycle restarts from the first channel.
At any given moment, each channel is engaged in converting a different sample. Consequently, the aggregate data rate of the system is faster than the data rate of a single sub-ADC by a factor of $M$. To conclude, the time-interleaving method effectively multiplies the conversion speed of a single sub-ADC by $M$, since the system processes $M$ samples in the time each channel needs to complete one conversion.
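The round-robin scheduling described above can be sketched in a few lines of Python. All numbers here are illustrative, not taken from any specific device:

```python
# Minimal sketch of round-robin sample assignment in a TI ADC.
# All numbers are illustrative, not from any specific device.
M = 4                        # number of interleaved channels (assumed)
f_s = 1_000_000_000          # aggregate sampling rate in Hz, 1 GS/s (assumed)
sub_adc_rate = f_s // M      # each sub-ADC only runs at f_s / M

# Sample n is converted by channel n mod M, so a channel that has just
# started a conversion gets M sampling periods before its next sample.
assignment = [n % M for n in range(8)]
print(assignment)            # [0, 1, 2, 3, 0, 1, 2, 3]
print(sub_adc_rate)          # 250000000
```

The key observation is that each channel sees only every $M$-th sample, so its conversion time budget is $M$ times longer than the aggregate sampling period.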
Two architectures are possible to implement a time interleaved ADC.[4] The first architecture is depicted in the first figure of the paragraph and is characterized by the presence of a single Sample and Hold (S&H) circuit for the entire structure. The sampler operates at the full sampling frequency $f_s = 1/T_s$ and distributes the held samples to the $M$ channels in sequence.

In contrast, the second architecture, illustrated in the second figure of the paragraph, employs a separate S&H circuit for each channel, each operating at the reduced frequency $f_s/M = 1/(M \cdot T_s)$.
The choice between these two architectures depends on the specific requirements and constraints of the application. The single S&H circuit architecture offers a compact and potentially lower-power solution, as it eliminates the redundancy of multiple S&H circuits. The centralized sampling can also reduce mismatches between channels, as all samples are derived from a single source. However, the high-speed requirement of the single S&H circuit can be a significant challenge, particularly at very high sampling rates where achieving the necessary performance may require more advanced and costly technology.
On the other hand, the multiple S&H circuit architecture distributes the sampling load, allowing each S&H circuit to operate at a lower speed. This can be advantageous in applications where high-speed circuits are difficult or expensive to implement. Additionally, this architecture can offer improved flexibility in managing timing and gain mismatches between channels. Each S&H circuit can be independently optimized for its specific operating conditions, potentially leading to better overall performance. The trade-offs include a larger footprint on the integrated circuit and increased power consumption, which may be critical factors in power-sensitive or space-constrained applications.
In practical implementations, the choice between these architectures is influenced by several factors, including the required sampling rate, power budget, available silicon area, and the acceptable level of complexity in calibration and error correction. For instance, in high-speed communication systems the single S&H circuit architecture might be preferred despite its stringent speed requirements, due to its compact design and potentially lower power consumption. Conversely, in applications where power is less of a concern but achieving ultra-high speeds is challenging, the multiple S&H circuit architecture might be more suitable.
Ideally, all the sub-ADCs are identical. In practice, however, they end up being slightly different due to process, voltage and temperature (PVT) variations. If not taken care of, sub-ADC mismatches can jeopardize the performance of TI ADCs since they show up in the output spectrum as spectral tones.[5]
Offset mismatches (i.e., different input-referred offsets for each sub-ADC) are superimposed on the converted signal as a sequence of period $M \cdot T_s$. The resulting spurious tones are located at the frequencies $f = (k/M) \cdot f_s$, with $k$ ranging from $0$ to $M-1$.
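The location of the offset-mismatch tones can be verified numerically. The following Python sketch, with arbitrary, assumed offset values, models an M-way interleaved output with a grounded input and checks that all spectral energy falls at multiples of $f_s/M$:

```python
import numpy as np

# Minimal model of offset mismatch in an M-way TI ADC (all values
# are illustrative): each output sample picks up the offset of the
# channel that produced it, i.e. a sequence of period M samples.
M = 4
N = 1024                                           # FFT length (multiple of M)
offsets = np.array([0.01, -0.02, 0.015, -0.005])   # assumed per-channel offsets

n = np.arange(N)
y = np.zeros(N) + offsets[n % M]                   # grounded input + mismatch

spectrum = np.abs(np.fft.fft(y)) / N
tone_bins = [k * N // M for k in range(M)]         # bins of f = (k/M) * fs
others = np.delete(spectrum, tone_bins)
print(np.allclose(others, 0.0, atol=1e-12))        # True: energy only at k*fs/M
```

Because the offset sequence repeats every $M$ samples, its DFT is nonzero only at bins that are multiples of $N/M$, which map exactly to the frequencies $(k/M) \cdot f_s$.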
Gain errors affect the amplitude of the converted signal and are transferred to the output as an amplitude modulation (AM) of the input signal with a sequence of period $M \cdot T_s$. The corresponding spurs appear at the frequencies $f = \pm f_{in} + (k/M) \cdot f_s$.
Finally, skew mismatches are due to the channels being timed by different phases of the same clock signal. If one timing signal is skewed with respect to the others, spurious harmonics are generated in the output spectrum. It can be demonstrated that these spurs are located at the frequencies $f = \pm f_{in} + (k/M) \cdot f_s$.
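These spur locations can likewise be checked numerically. The Python sketch below models gain mismatch (skew produces spurs at the same frequencies) using arbitrary, assumed per-channel gains and a test tone placed on an exact FFT bin:

```python
import numpy as np

# Illustrative model of gain mismatch in an M-way TI ADC (all values
# assumed): the mismatch amplitude-modulates the input, creating
# spurs at f = +/- f_in + k * f_s / M.
M, N = 4, 1024
gains = np.array([1.00, 1.01, 0.99, 1.005])    # assumed per-channel gains

n = np.arange(N)
k_in = 100                                     # input tone at bin 100 (f_in = 100*fs/N)
x = np.sin(2 * np.pi * k_in * n / N)
y = gains[n % M] * x                           # interleaved output

spectrum = np.abs(np.fft.fft(y)) / N
# Expected spur locations: (+/- k_in + k*N/M) mod N, for k = 0..M-1
spur_bins = sorted({(s * k_in + k * N // M) % N for s in (+1, -1) for k in range(M)})
others = np.delete(spectrum, spur_bins)
print(np.allclose(others, 0.0, atol=1e-10))    # True: energy only at the predicted bins
```

Multiplying the input by a sequence of period $M$ convolves its spectrum with tones at $k \cdot f_s/M$, which is why the spurs sit at the input frequency shifted by multiples of $f_s/M$.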
Channel mismatches in a TI ADC can seriously degrade its Spurious-Free Dynamic Range (SFDR) and its Signal-to-Noise-and-Distortion Ratio (SNDR). To recover the spectral purity of the converter, the established solution is to compensate these non-idealities with digitally implemented corrections. Although digital calibrations can restore the overall spectral purity by suppressing the mismatch spurs, they can significantly increase the overall power consumption of the receiver and may be less effective when the input signal is broadband. For this reason, calibration methods offering higher robustness and usability in real-world conditions remain an active research area.
As cellular communication systems evolve, the requirements on receivers become more and more demanding. For example, the channel bandwidth offered by the 4G network can be as high as 20 MHz, whereas it can range from 400 MHz up to 1 GHz in the current 5G NR network.[6] On top of that, the complexity of the signal modulation has also increased, from 64-QAM in 4G to 256-QAM in 5G NR.
The tighter requirements impose new design challenges on modern receivers, whose performance relies on the analog-to-digital interface provided by ADCs. In 4G receivers, the data conversion is performed by Delta-Sigma Modulators (DSMs),[7] as they are easily reconfigurable: it is sufficient to modify the oversampling ratio (OSR), the loop order or the quantizer resolution to adjust the bandwidth of the data converter as needed. This is a desirable feature of an ADC in receivers supporting multiple wireless communication standards.
In 5G receivers, instead, DSMs are not the preferred choice: the bandwidth of the receiver has to be higher than a few hundred MHz, whereas the signal bandwidth $f_b$ of a DSM is limited by its sampling frequency $f_s$ and its oversampling ratio, since $f_b \le f_s/(2 \cdot \mathrm{OSR})$. Achieving a bandwidth of several hundred MHz would therefore require an impractically high $f_s$, with a corresponding penalty in power consumption.
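A quick back-of-the-envelope calculation makes the limitation concrete. The clock rates and OSR values below are assumed for illustration only:

```python
# Back-of-the-envelope check (assumed numbers): the usable bandwidth of a
# Delta-Sigma Modulator is f_b = f_s / (2 * OSR), so reaching hundreds of
# MHz of bandwidth forces impractically high sampling rates.
def dsm_bandwidth_hz(f_s_hz: float, osr: int) -> float:
    """Signal bandwidth of a DSM given its sampling rate and oversampling ratio."""
    return f_s_hz / (2 * osr)

# A 4G-style DSM: 640 MHz clock, OSR = 16 -> 20 MHz channel bandwidth.
print(dsm_bandwidth_hz(640e6, 16) / 1e6)   # 20.0

# To cover a 400 MHz 5G NR channel at the same OSR, the DSM would need
# a 12.8 GHz clock, which is impractical in terms of power.
required_fs = 400e6 * 2 * 16
print(required_fs / 1e9)                   # 12.8
```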
The ADCs employed in 5G receivers not only require a high sampling rate to deal with large channel bandwidths, but also a reasonable number of bits. A high resolution is necessary for the data converter to enable the use of high-order modulation schemes, which are fundamental to achieving high throughputs with efficient bandwidth utilization. The resolution of a data converter is defined as the minimum voltage value that it can resolve, i.e., its Least Significant Bit (LSB). The latter depends on the number of physical bits ($N$) of the converter as $LSB = FSR/2^N$, where $FSR$ is the Full-Scale Range of the converter.
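As a quick numerical check of the LSB relation, with assumed values for the full-scale range and resolution:

```python
# LSB of an ideal converter: LSB = FSR / 2**N (illustrative values only).
def lsb_volts(fsr_volts: float, n_bits: int) -> float:
    """Smallest voltage step an ideal N-bit converter can resolve."""
    return fsr_volts / (2 ** n_bits)

# A 12-bit ADC with a 1 V full-scale range resolves steps of ~244 uV.
print(lsb_volts(1.0, 12) * 1e6)   # 244.140625
```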
Usually, for 5G receivers, ADCs with an ENOB of 12 bits and a bandwidth up to the GHz range are the favorable choice. Time interleaved ADCs are frequently employed for this application since they are capable of meeting the above-mentioned requirements. In fact, TI ADCs utilize multiple ADC channels operating in parallel, and this technique effectively increases the overall sampling rate, allowing the receiver to handle the wide bandwidths required by the 5G network.
A receiver is one of the essential components of a communication system. In particular, a receiver is responsible for the conversion of radio signals into digital words, allowing the signal to be further processed by electronic devices. Typically, a receiver includes an antenna, a pre-selector filter, a low-noise amplifier (LNA), a mixer, a local oscillator, an intermediate frequency (IF) filter, a demodulator and an analog-to-digital converter.
The antenna is the first component in a receiver system; it captures electromagnetic waves from the air and converts these radio waves into electrical signals. These signals are then filtered by the pre-selector, which guarantees that only the desired frequency range from the signals captured by the antenna is passed to the next stages of the receiver. The signal is then amplified by an LNA. The amplification ensures that the signal is strong enough to be processed effectively by the subsequent stages of the system. The amplified signal is then mixed with a stable signal from the local oscillator to produce an intermediate frequency (IF) signal. This process, known as heterodyning, shifts the frequency of the received signal to a lower, more manageable IF. The IF signal undergoes further filtering to remove any remaining unwanted signals and noise. Finally, a demodulator extracts the original information signal from the modulated carrier wave. Precisely, the demodulator converts the IF signal back into the baseband signal, which contains the transmitted information. Different demodulation techniques can be used depending on the type of modulation employed (e.g., amplitude modulation (AM), frequency modulation (FM), or phase modulation (PM)). As a last step, an ADC converts the continuous analog signal into a discrete digital signal, which can be processed by digital signal processors (DSPs) or microcontrollers. This step is crucial for enabling advanced digital signal processing techniques.
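The frequency translation performed by the mixer can be illustrated with a toy numerical model. In the sketch below, all frequencies are arbitrary and chosen to fall on exact FFT bins; it multiplies an RF tone by a local-oscillator tone and verifies that the product contains only the difference (IF) and sum frequencies:

```python
import numpy as np

# Toy illustration of heterodyning (assumed, illustrative frequencies):
# mixing an RF tone with a local oscillator produces components at
# f_rf - f_lo (the IF) and f_rf + f_lo.
N = 1024
n = np.arange(N)
k_rf, k_lo = 200, 180                 # RF and LO bins -> IF at bin 20
rf = np.cos(2 * np.pi * k_rf * n / N)
lo = np.cos(2 * np.pi * k_lo * n / N)

mixed = rf * lo                       # ideal multiplying mixer
spectrum = np.abs(np.fft.fft(mixed)) / N

# Product-to-sum identity: cos(a)cos(b) = 0.5*cos(a-b) + 0.5*cos(a+b)
# -> energy only at bins |k_rf - k_lo| = 20 and k_rf + k_lo = 380 (and mirrors).
peaks = sorted({20, N - 20, 380, N - 380})
others = np.delete(spectrum, peaks)
print(np.allclose(others, 0.0, atol=1e-10))   # True
```

In a real receiver, the IF filter that follows the mixer keeps only the difference component and rejects the sum frequency.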
To further improve the power efficiency and cost of a receiver, the paradigm of Direct RF Sampling is emerging. According to this technique, the analog signal at radio frequency is simply fed to the ADC, avoiding the downconversion to an intermediate frequency altogether.[8]
Direct RF Sampling has significant advantages in terms of system design and performance. By removing the downconversion stage, the design complexity is reduced, leading to lower power consumption and cost. Additionally, the absence of the mixer and local oscillator means there are fewer components that can introduce noise and distortion, potentially improving the signal-to-noise ratio (SNR) and linearity of the receiver.
However, directly sampling the radio-frequency signal imposes stringent requirements on the performance of the ADC. The signal bandwidth of the ADC in the receiver must be a few GHz to handle the high-frequency signals directly. Achieving such high values with a single ADC is challenging due to limitations in speed, power consumption and resolution.
To meet these demanding requirements, time interleaved ADC systems are typically adopted. In fact, TI ADCs utilize multiple slower sub-ADCs operating in parallel, each sampling the input signal at a different time instant. By interleaving the sampling process, the effective sampling rate of the overall system is increased, allowing it to handle the high bandwidths required for direct RF sampling.