In electronic instrumentation and signal processing, a time-to-digital converter (TDC) is a device that recognizes events and provides a digital representation of the time at which they occurred. For example, a TDC might output the time of arrival of each incoming pulse. Other applications measure the time interval between two events rather than an absolute time of arrival.
In electronics, time-to-digital converters (TDCs) or time digitizers are devices commonly used to measure a time interval and convert it into a digital (binary) output. In some cases, interpolating TDCs are also called time counters (TCs).
TDCs are used to determine the time interval between two signal pulses (known as the start and stop pulses). Measurement starts and stops when the rising or falling edge of a pulse crosses a set threshold. This requirement arises in many physical experiments, such as time-of-flight and lifetime measurements in atomic and high-energy physics, laser ranging, and electronics research involving the testing of integrated circuits and high-speed data transfer.
TDCs are used to timestamp events and to measure time differences between events, especially where picosecond precision and high accuracy are required, such as in high-energy physics experiments in which particles (e.g. electrons, photons, and ions) are detected.
Another application is cost-effective and non-mechanical water flow metering by measuring the time difference between ultrasound pulses that travel through the flow and arrive at different times depending on the flow speed and direction.[1][2]
In an all-digital phase-locked loop (ADPLL), a TDC measures the phase shift and its result is used to adjust the digitally controlled oscillator (DCO).[3]
If the required time resolution is not high, then counters can be used to make the conversion.
In its simplest implementation, a TDC is simply a high-frequency counter that increments every clock cycle. The current contents of the counter represent the current time. When an event occurs, the counter's value is captured in an output register.
In that approach, the measurement is an integer number of clock cycles, so the measurement is quantized to a clock period. To get finer resolution, a faster clock is needed. The accuracy of the measurement depends upon the stability of the clock frequency.
Typically a TDC uses a crystal oscillator reference frequency for good long-term stability. High-stability crystal oscillators usually run at a relatively low frequency such as 10 MHz (for 100 ns resolution).[4] To get better resolution, a phase-locked loop frequency multiplier can be used to generate a faster clock. One might, for example, multiply the crystal reference oscillator by 100 to get a clock rate of 1 GHz (1 ns resolution).
High clock rates impose additional design constraints on the counter: if the clock period is short, it is difficult to update the count. Binary counters, for example, need a fast carry architecture because they essentially add one to the previous counter value. A solution is a hybrid counter architecture. A Johnson counter, for example, is a fast non-binary counter. It can be used to count the low-order bits very quickly, while a more conventional binary counter accumulates the high-order count. The fast counter is sometimes called a prescaler.
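The division of labor between the fast prescaler and the slower high-order counter can be pictured with a short Python sketch; the prescaler length of 4 and the tick count are purely illustrative assumptions, not values from the text.

```python
# Sketch of a hybrid counter: a short, fast prescaler cycle resolves the
# low-order count, while a slower binary counter accumulates how many times
# the prescaler has wrapped around (illustrative values only).
PRESCALE = 4                          # length of the fast prescaler cycle (assumption)

def split_count(total_clock_ticks: int) -> tuple[int, int]:
    """Return (high-order binary count, low-order prescaler state)."""
    return total_clock_ticks // PRESCALE, total_clock_ticks % PRESCALE

print(split_count(1_000_003))         # -> (250000, 3)
```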
The speed of counters fabricated in CMOS technology is limited by the capacitance between the gate and the channel and by the resistance of the channel and the signal traces. The product of both determines the cut-off frequency. Modern chip technology allows multiple metal layers and therefore coils with a large number of windings to be inserted into the chip. This allows designers to peak the device for a specific frequency, which may lie above the cut-off frequency of the original transistor.
A peaked variant of the Johnson counter is the traveling-wave counter which also achieves sub-cycle resolution. Other methods to achieve sub-cycle resolution include analog-to-digital converters and vernier Johnson counters.
In most situations, the user does not want simply to capture the arbitrary time at which an event occurs, but to measure a time interval, the time between a start event and a stop event.
That can be done by measuring the arbitrary times of both the start and stop events and subtracting. The measurement can be off by two counts.
The subtraction can be avoided if the counter is held at zero until the start event, counts during the interval, and then stops counting after the stop event.
A simple coarse counter counts the pulses of a reference clock with frequency f0 during the interval. The measured time interval T is then

T = n ⋅ T0

with n the number of counted clock pulses and T0 = 1/f0 the clock period.
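As a rough illustration of this quantization, the following Python sketch models a coarse counter; the 100 MHz reference clock and the interval value are assumptions chosen for the example, not values from the text.

```python
# Minimal sketch of coarse-counter quantization (assumed 100 MHz reference clock).
F0 = 100e6            # reference clock frequency f0 in Hz (assumption)
T0 = 1.0 / F0         # clock period T0 = 1/f0

def coarse_measure(interval_s: float) -> float:
    """Return the interval as a whole number of clock periods, T = n * T0."""
    n = int(interval_s // T0)     # clock pulses counted during the interval
    return n * T0

true_interval = 123.7e-9          # 123.7 ns, chosen arbitrarily
print(coarse_measure(true_interval))   # -> ~1.2e-07 (120 ns), quantized to T0 = 10 ns
```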
Since the start, stop and clock signals are asynchronous, the start and stop times are uniformly distributed between two subsequent clock pulses. This offset of the start and stop signals from the clock pulses is called the quantization error.
For a series of measurements on the same constant and asynchronous time interval, one measures two different numbers of counted clock pulses, n1 and n2, which occur with probabilities

p(n1) = 1 − c and q(n2) = c

with c = Frc(T/T0) the fractional part of T/T0. Averaging over many such measurements then yields

T = (p ⋅ n1 + q ⋅ n2) ⋅ T0
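A small Python simulation of this averaging effect is sketched below; the 100 MHz clock, the 123.7 ns interval, and the number of repetitions are illustrative assumptions.

```python
import random

T0 = 10e-9                 # coarse clock period (assumed 100 MHz clock)
T_TRUE = 123.7e-9          # constant, asynchronous interval to be measured (assumption)

def one_shot() -> int:
    """Count clock edges inside the interval for a random (asynchronous) start phase."""
    phase = random.uniform(0.0, T0)
    return int((phase + T_TRUE) // T0) - int(phase // T0)

counts = [one_shot() for _ in range(100_000)]
# The average of the observed counts n1 and n2 = n1 + 1, weighted by their relative
# frequencies p and q, recovers the fractional part c = Frc(T/T0):
print(sum(counts) / len(counts) * T0)   # close to 123.7 ns, far finer than the 10 ns step
```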
Measuring a time interval using a coarse counter with the averaging method described above is relatively time consuming because of the many repetitions that are needed to determine the probabilities p and q.
In contrast to the coarse counter of the previous section, the fine measurement methods presented here offer much better accuracy but a far smaller measuring range. Analogue methods such as time interval stretching or double conversion are examined, as well as digital methods such as tapped delay lines and the Vernier method. Though the analogue methods still achieve better accuracy, digital time interval measurement is often preferred due to its flexibility in integrated-circuit technology and its robustness against external perturbations such as temperature changes.
The counter implementation's accuracy is limited by the clock frequency. If time is measured by whole counts, then the resolution is limited to the clock period. For example, a 10 MHz clock has a resolution of 100 ns. To get resolution finer than a clock period, there are time interpolation circuits.[5] These circuits measure the fraction of a clock period: that is, the time between a clock event and the event being measured. The interpolation circuits often require a significant amount of time to perform their function; consequently, the TDC needs a quiet interval before the next measurement.
When counting is not feasible because the clock rate would be too high, analog methods can be used. Analog methods are often used to measure intervals that are between 10 and 200 ns. These methods often use a capacitor that is charged during the interval being measured. Initially, the capacitor is discharged to zero volts. When the start event occurs, the capacitor is charged with a constant current I1; the constant current causes the voltage v on the capacitor to increase linearly with time. The rising voltage is called the fast ramp. When the stop event occurs, the charging current is stopped. The voltage on the capacitor v is directly proportional to the time interval T and can be measured with an analog-to-digital converter (ADC). The resolution of such a system is in the range of 1 to 10 ps.[6]
Although a separate ADC can be used, the ADC step is often integrated into the interpolator. A second constant current I2 is used to discharge the capacitor at a constant but much slower rate (the slow ramp). The slow ramp might be 1/1000 of the fast ramp. This discharge effectively "stretches" the time interval;[7] it will take 1000 times as long for the capacitor to discharge to zero volts. The stretched interval can be measured with a counter. The measurement is similar to a dual-slope analog converter.
The dual-slope conversion can take a long time: a thousand or so clock ticks in the scheme described above. That limits how often a measurement can be made (dead time). Resolution of 1 ps with a 100 MHz (10 ns) clock requires a stretch ratio of 10,000 and implies a conversion time of 150 μs.[7] To decrease the conversion time, the interpolator circuit can be used twice in a residual interpolator technique.[7] The fast ramp is used initially as above to determine the time. The slow ramp is only 1/100 of the fast ramp. The slow ramp will cross zero at some time during a clock period. When the ramp crosses zero, the fast ramp is turned on again to measure the crossing time (tresidual). Consequently, the time can be determined to 1 part in 10,000.
Interpolators are often used with a stable system clock. The start event is asynchronous, but the stop event is a following clock. For convenience, imagine that the fast ramp rises exactly 1 volt during a 100 ns clock period. Assume the start event occurs 67.3 ns after a clock pulse; the fast ramp integrator is triggered and starts rising. The asynchronous start event is also routed through a synchronizer that takes at least two clock pulses. By the next clock pulse, the ramp has risen to 0.327 V. By the second clock pulse, the ramp has risen to 1.327 V, and the synchronizer reports that the start event has been seen. The fast ramp is stopped and the slow ramp starts. The synchronizer output can be used to capture system time from a counter. After 1327 clocks, the slow ramp returns to its starting point, and the interpolator knows that the event occurred 132.7 ns before the synchronizer reported it.
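The arithmetic of this worked example can be sketched in a few lines of Python; the 100 ns clock period, 1 V ramp, and 1/1000 stretch ratio are taken from the example above, while the function name and structure are merely illustrative.

```python
# Sketch of the time-stretching arithmetic: a fast ramp (1 V per 100 ns clock
# period) runs from the start event to the synchronizer's clock edge, then a
# slow ramp at 1/1000 of that rate is counted back down to zero.
CLOCK_PERIOD_NS = 100.0
STRETCH = 1000

def interpolate(start_offset_ns: float) -> float:
    """Return the time (ns) between the start event and the synchronizer report."""
    # Fast ramp runs from the start event until the second following clock edge.
    fast_time_ns = (CLOCK_PERIOD_NS - start_offset_ns) + CLOCK_PERIOD_NS
    ramp_volts = fast_time_ns / CLOCK_PERIOD_NS        # 1 V per clock period
    # The slow ramp takes STRETCH times longer to return to zero; the system
    # clock counts those periods (1327 of them in the example above).
    slow_clock_counts = round(ramp_volts * STRETCH)
    return slow_clock_counts * CLOCK_PERIOD_NS / STRETCH

print(interpolate(67.3))    # -> ~132.7 ns, as in the worked example
```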
The interpolator is actually more involved because there are synchronizer issues and current switching is not instantaneous. Also, the interpolator must calibrate the height of the ramp to a clock period.[8]
The vernier method is more involved. It uses a triggerable oscillator and a coincidence circuit. At the event, the integer clock count is stored and the oscillator is started. The triggered oscillator has a slightly different frequency than the clock oscillator. For the sake of argument, say the triggered oscillator has a period that is 1 ns shorter than the clock's. If the event happened 67 ns after the last clock, then the triggered oscillator transition will slide by −1 ns after each subsequent clock pulse: it will be at 66 ns after the next clock, at 65 ns after the second clock, and so forth. A coincidence detector looks for when the triggered oscillator and the clock transition at the same time; that indicates the fractional time that needs to be added.
The vernier interpolator design is more involved: the triggerable oscillator must be calibrated to the clock, and it must also start quickly and cleanly.
With two triggerable oscillators of slightly different frequencies f1 and f2, started by the start and stop events respectively, and with n1 and n2 the numbers of their periods counted until their transitions coincide, the measured time interval T is

T = (n1 − 1)/f1 − (n2 − 1)/f2
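A small Python sketch of this dual-oscillator vernier arithmetic follows; the oscillator frequencies and the edge counts at coincidence are illustrative assumptions, not values from the text.

```python
# Sketch of the dual-oscillator Vernier principle: the start event triggers an
# oscillator of frequency f1 and the stop event a slightly faster oscillator of
# frequency f2; both run until their edges coincide (illustrative numbers).
f1 = 100e6            # start oscillator, 10 ns period (assumption)
f2 = 1 / 9.9e-9       # stop oscillator, 9.9 ns period, slightly faster (assumption)

def vernier_interval(n1: int, n2: int) -> float:
    """T = (n1 - 1)/f1 - (n2 - 1)/f2, as in the formula above."""
    return (n1 - 1) / f1 - (n2 - 1) / f2

# If the coincidence detector fires on the 53rd start-oscillator edge and the
# 51st stop-oscillator edge, the interval resolves to 25 ns, with a resolution
# set by the period difference 1/f1 - 1/f2 = 0.1 ns.
print(vernier_interval(53, 51))   # -> ~2.5e-08 s
```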
Since highly reliable oscillators with stable and accurate frequencies are still quite a challenge, the vernier method can also be realized with two tapped delay lines using two slightly different cell delay times τL and τB. In the example presented here, the first delay line, affiliated with the start signal, contains cells of D flip-flops with delay τL, while the stop signal propagates along a second delay line whose cells have the slightly smaller delay τB < τL. The measured time interval T is then

T = n ⋅ (τL − τB)

with n the number of cells marked as transparent.
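A behavioral Python sketch of this vernier delay line follows; the cell delays (105 ps and 100 ps) and the test interval are assumptions chosen to illustrate the catch-up mechanism, not values from the text.

```python
# Behavioral sketch of a vernier delay line: the start signal is delayed by
# tau_L per cell and the stop signal by the slightly smaller tau_B, so the stop
# signal catches up by (tau_L - tau_B) per cell; the number of cells passed
# before it overtakes the start encodes the interval (illustrative values).
TAU_L = 105e-12       # start-line cell delay (assumption)
TAU_B = 100e-12       # stop-line cell delay, tau_B < tau_L (assumption)

def vernier_delay_line(interval_s: float) -> float:
    """Propagate start and stop signals cell by cell until the stop catches up."""
    start_arrival, stop_arrival, n = 0.0, interval_s, 0
    while stop_arrival > start_arrival:   # stop has not yet caught the start
        start_arrival += TAU_L
        stop_arrival += TAU_B
        n += 1
    return n * (TAU_L - TAU_B)            # T = n * (tau_L - tau_B)

print(vernier_delay_line(1.2345e-9))      # -> ~1.235e-09 s, in 5 ps steps
```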
In general, a digital delay-line-based TDC,[10] also known as a tapped delay line, contains a chain of cells (e.g. using D-latches in the figure) with well-defined delay times τ, so that the measured time interval is resolved in multiples of τ.
Counters can measure long intervals but have limited resolution. Interpolators have high resolution but cannot measure long intervals. A hybrid approach can achieve both long intervals and high resolution. The long interval is measured with a counter, and the counter information is supplemented by two time interpolators: one interpolator measures the (short) interval between the start event and a following clock event, and the second interpolator measures the interval between the stop event and a following clock event. The basic idea has some complications: the start and stop events are asynchronous, and one or both might happen close to a clock pulse. The counter and interpolators must agree on matching the start and end clock events; synchronizers are used to accomplish that.
The common hybrid approach is the Nutt method. In this example, the fine measurement circuit measures the time between the start or stop pulse and the respective second nearest clock pulse of the coarse counter (Tstart, Tstop), as detected by the synchronizer (see figure). Thus the wanted time interval is
T=nT0+Tstart-Tstop
with n the number of counter clock pulses and T0 the period of the coarse counter.
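A short Python sketch of this combination follows; the 10 ns clock period and the interpolator readings are illustrative assumptions, not values from the text.

```python
# Sketch of the Nutt-method combination: a coarse counter supplies n clock
# periods, while two fine interpolators supply Tstart and Tstop, the times from
# the start and stop pulses to their reference clock edges (illustrative values).
T0 = 10e-9            # coarse counter period (assumed 100 MHz clock)

def nutt_combine(n: int, t_start: float, t_stop: float) -> float:
    """T = n * T0 + Tstart - Tstop, as in the formula above."""
    return n * T0 + t_start - t_stop

# Coarse counter saw 12 clock pulses; the interpolators measured 7.3 ns and
# 3.6 ns from the start and stop pulses to their clock edges, respectively.
print(nutt_combine(12, 7.3e-9, 3.6e-9))   # -> ~1.237e-07 s (123.7 ns)
```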
Time measurement has played a crucial role in the understanding of nature from the earliest times. Starting with sun-, sand- or water-driven clocks, we can today use clocks based on the most precise caesium resonators.
The first direct predecessor of the TDC was invented in 1942 by Bruno Rossi for the measurement of muon lifetimes.[11] It was designed as a time-to-amplitude converter, constantly charging a capacitor during the measured time interval; the resulting voltage is directly proportional to the time interval under examination.
While the basic concepts of dividing time into measurable intervals, such as the Vernier method (Pierre Vernier, 1580–1637) and time stretching, are still up to date, their implementation has changed greatly over the past 50 years. Starting with vacuum tubes and ferrite pot-core transformers, those ideas are today implemented in complementary metal–oxide–semiconductor (CMOS) designs.[12]
Even with the fine measuring methods presented, there are still errors that one may wish to remove or at least take into account. Non-linearities of the time-to-digital conversion, for example, can be identified by taking a large number of measurements of a Poissonian-distributed source (statistical code density test). Small deviations from the uniform distribution reveal the non-linearities. Inconveniently, the statistical code density method is quite sensitive to external temperature changes, so stabilizing delay-locked or phase-locked loop (DLL or PLL) circuits are recommended.
In a similar way, offset errors (non-zero readouts at T = 0) can be removed.
For long time intervals, the error due to instabilities in the reference clock (jitter) plays a major role. Thus clocks of superior quality are needed for such TDCs.
Furthermore, external noise sources can be eliminated in postprocessing by robust estimation methods.
TDCs are currently built as stand-alone measuring devices in physical experiments or as system components like PCI cards. They can be made up of either discrete or integrated circuits.
Circuit design changes with the purpose of the TDC: the design can either be a very good solution for single-shot TDCs that tolerate long dead times, or a trade-off between dead time and resolution for multi-shot TDCs.
See main article: Digital delay generator.
The time-to-digital converter measures the time between a start event and a stop event. The complementary device is a digital-to-time converter or delay generator, which converts a number into a time delay: when the delay generator receives a start pulse at its input, it outputs a stop pulse after the specified delay. The architectures of TDCs and delay generators are similar. Both use counters for long, stable delays, and both must consider the problem of clock quantization errors.
For example, the Tektronix 7D11 Digital Delay uses a counter architecture. A digital delay may be set from 100 ns to 1 s in 100 ns increments, and an analog circuit provides an additional fine delay of 0 to 100 ns. A 5 MHz reference clock drives a phase-locked loop to produce a stable 500 MHz clock; it is this fast clock that is gated by the (fine-delayed) start event and that determines the main quantization error. The fast clock is divided down to 10 MHz and fed to the main counter.[13] The instrument quantization error depends primarily on the 500 MHz clock (2 ns steps), but other errors also enter; the instrument is specified to have 2.2 ns of jitter. The recycle time is 575 ns.
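The coarse-plus-fine split used by such counter-based delay generators can be sketched in Python; the 100 ns clock period and the requested delay are assumptions for illustration, not the specifications of any particular instrument.

```python
# Sketch of how a counter-based delay generator splits a requested delay into
# whole reference-clock counts plus a small analog fine delay (illustrative
# values; not modeled on any specific instrument).
CLOCK_PERIOD_NS = 100.0        # 10 MHz counter clock (assumption)

def program_delay(requested_ns: float) -> tuple[int, float]:
    """Split a requested delay into whole clock counts and a fine remainder."""
    counts = int(requested_ns // CLOCK_PERIOD_NS)        # counter-generated part
    fine_ns = requested_ns - counts * CLOCK_PERIOD_NS    # 0-100 ns analog part
    return counts, fine_ns

print(program_delay(1234.5))   # -> (12, 34.5): 12 clock periods plus 34.5 ns fine delay
```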
Just as a TDC may use interpolation to get finer than one clock period resolution, a delay generator may use similar techniques. The Hewlett-Packard 5359A High Resolution Time Synthesizer provides delays of 0 to 160 ms, has an accuracy of 1 ns, and achieves a typical jitter of 100 ps. The design uses a triggered phase-locked oscillator that runs at 200 MHz. Interpolation is done with a ramp, an 8-bit digital-to-analog converter, and a comparator. The resolution is about 45 ps.
When the start pulse is received, the counter counts down and outputs a stop pulse when it reaches zero. For low jitter, the synchronous counter has to feed a zero flag from the most significant bit down to the least significant bit and then combine it with the output from the Johnson counter.
A digital-to-analog converter (DAC) could be used to achieve sub-cycle resolution, but it is easier to either use vernier Johnson counters or traveling-wave Johnson counters.
The delay generator can be used for pulse-width modulation, e.g. to drive a MOSFET to load a Pockels cell within 8 ns with a specific charge.
The output of a delay generator can gate a digital-to-analog converter and so pulses of a variable height can be generated. This allows matching to low levels needed by analog electronics, higher levels for ECL and even higher levels for TTL. If a series of DACs is gated in sequence, variable pulse shapes can be generated to account for any transfer function.