In statistical signal processing, the goal of spectral density estimation (SDE) or simply spectral estimation is to estimate the spectral density (also known as the power spectral density) of a signal from a sequence of time samples of the signal.[1] Intuitively speaking, the spectral density characterizes the frequency content of the signal. One purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.
Some SDE techniques assume that a signal is composed of a limited (usually small) number of generating frequencies plus noise and seek to find the location and intensity of the generated frequencies. Others make no assumption on the number of components and seek to estimate the whole generating spectrum.
Spectrum analysis, also referred to as frequency domain analysis or spectral density estimation, is the technical process of decomposing a complex signal into simpler parts. As described above, many physical processes are best described as a sum of many individual frequency components. Any process that quantifies the various amounts (e.g. amplitudes, powers, intensities) versus frequency (or phase) can be called spectrum analysis.
Spectrum analysis can be performed on the entire signal. Alternatively, a signal can be broken into short segments (sometimes called frames), and spectrum analysis may be applied to these individual segments. Periodic functions (such as \sin(t)) are particularly well-suited for this sub-division. General mathematical techniques for analyzing non-periodic functions fall into the category of Fourier analysis.
The Fourier transform of a function produces a frequency spectrum which contains all of the information about the original signal, but in a different form. This means that the original function can be completely reconstructed (synthesized) by an inverse Fourier transform. For perfect reconstruction, the spectrum analyzer must preserve both the amplitude and phase of each frequency component. These two pieces of information can be represented as a 2-dimensional vector, as a complex number, or as magnitude (amplitude) and phase in polar coordinates (i.e., as a phasor). A common technique in signal processing is to consider the squared amplitude, or power; in this case the resulting plot is referred to as a power spectrum.
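To make the amplitude-and-phase picture concrete, here is a minimal NumPy sketch (the sampling rate, tone frequency, amplitude, and phase are arbitrary illustrative choices): the DFT stores each frequency component as a complex number, and because both magnitude and phase are preserved, the inverse transform recovers the original samples.

```python
import numpy as np

# A short test signal: one sinusoid with a phase offset.
fs = 1000                     # sampling rate in Hz (illustrative choice)
t = np.arange(0, 1, 1 / fs)   # 1 second of samples
x = 1.5 * np.sin(2 * np.pi * 50 * t + 0.3)

# The DFT represents each frequency component as a complex number
# (a phasor), preserving both amplitude and phase.
X = np.fft.fft(x)
magnitude = np.abs(X)
phase = np.angle(X)

# Squared magnitude gives a power spectrum.
power = magnitude ** 2

# Because amplitude and phase are both kept, the inverse transform
# reconstructs the original signal (up to floating-point error).
x_rec = np.fft.ifft(X).real
assert np.allclose(x, x_rec)
```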
Because of reversibility, the Fourier transform is called a representation of the function, in terms of frequency instead of time; thus, it is a frequency domain representation. Linear operations that could be performed in the time domain have counterparts that can often be performed more easily in the frequency domain. Frequency analysis also simplifies the understanding and interpretation of the effects of various time-domain operations, both linear and non-linear. For instance, only non-linear or time-variant operations can create new frequencies in the frequency spectrum.
In practice, nearly all software and electronic devices that generate frequency spectra use a discrete Fourier transform (DFT), which operates on samples of the signal and provides a mathematical approximation to the full integral solution. The DFT is almost invariably implemented by an efficient algorithm called the fast Fourier transform (FFT). The array of squared-magnitude components of a DFT is a type of power spectrum called a periodogram, which is widely used for examining the frequency characteristics of noise-free functions such as filter impulse responses and window functions. But the periodogram does not provide processing gain when applied to noiselike signals or even sinusoids at low signal-to-noise ratios: the variance of its spectral estimate at a given frequency does not decrease as the number of samples used in the computation increases. This can be mitigated by averaging over time (Welch's method) or over frequency (smoothing). Welch's method is widely used for spectral density estimation (SDE). However, periodogram-based techniques introduce small biases that are unacceptable in some applications, so other alternatives are presented in the next section.
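As a sketch of the averaging idea, the following compares a raw periodogram with Welch's method using SciPy (the signal parameters and segment length are arbitrary illustrative choices): the raw estimate stays noisy no matter how long the record is, while segment averaging smooths it at the cost of frequency resolution.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 8, 1 / fs)
# A weak 100 Hz sinusoid buried in white noise.
x = 0.5 * np.sin(2 * np.pi * 100 * t) + rng.standard_normal(t.size)

# Raw periodogram: its variance at each frequency does not shrink
# as the record length grows.
f_per, P_per = signal.periodogram(x, fs=fs)

# Welch's method: average the periodograms of overlapping segments,
# trading frequency resolution for lower variance.
f_w, P_w = signal.welch(x, fs=fs, nperseg=1024)
```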
Many other techniques for spectral estimation have been developed to mitigate the disadvantages of the basic periodogram. These techniques can generally be divided into non-parametric, parametric, and more recently semi-parametric (also called sparse) methods.[2] The non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure. Some of the most common estimators in use for basic applications (e.g. Welch's method) are non-parametric estimators closely related to the periodogram. By contrast, the parametric approaches assume that the underlying stationary stochastic process has a certain structure that can be described using a small number of parameters (for example, using an auto-regressive or moving-average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. When using the semi-parametric methods, the underlying process is modeled using a non-parametric framework, with the additional assumption that the number of non-zero components of the model is small (i.e., the model is sparse). Similar approaches may also be used for missing data recovery[3] as well as signal reconstruction.
Following is a partial list of spectral density estimation techniques:
- The periodogram, and averaged-periodogram refinements such as Welch's method
- Parametric fits to autoregressive (AR), moving-average (MA), and autoregressive moving-average (ARMA) models
- Subspace (noise-subspace) methods such as Pisarenko harmonic decomposition, multiple signal classification (MUSIC), the eigenvector method, and the minimum norm method
In parametric spectral estimation, one assumes that the signal is modeled by a stationary process which has a spectral density function (SDF) S(f; a_1, \ldots, a_p) that is a function of the frequency f and p parameters a_1, \ldots, a_p. The estimation problem then becomes one of estimating these parameters.

The most common form of parametric SDF estimate uses as a model an autoregressive model AR(p) of order p. A signal sequence \{Y_t\} obeying a zero-mean AR(p) process satisfies the equation

Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + \epsilon_t,

where the \phi_1, \ldots, \phi_p are fixed coefficients and \epsilon_t is a white-noise process with zero mean and innovation variance \sigma_p^2. The SDF for this process is

S(f; \phi_1, \ldots, \phi_p, \sigma_p^2) = \frac{\sigma_p^2 \Delta t}{\left| 1 - \sum_{k=1}^{p} \phi_k e^{-2\pi i f k \Delta t} \right|^2}, \qquad |f| < f_N,

with \Delta t the sampling time interval and f_N the Nyquist frequency.
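As an illustration, the SDF above can be evaluated directly once the coefficients are known. The following sketch (the function name, AR(2) coefficients, and frequency grid are illustrative assumptions, not from the source) computes S(f) on a grid up to the Nyquist frequency.

```python
import numpy as np

def ar_sdf(f, phi, sigma2, dt):
    """Evaluate S(f; phi_1..phi_p, sigma_p^2) =
    sigma2 * dt / |1 - sum_k phi_k exp(-2j*pi*f*k*dt)|^2."""
    k = np.arange(1, len(phi) + 1)
    denom = np.abs(1 - np.exp(-2j * np.pi * np.outer(f, k) * dt) @ phi) ** 2
    return sigma2 * dt / denom

dt = 1.0
f = np.linspace(0, 0.5 / dt, 512, endpoint=False)  # up to f_N = 1/(2*dt)
S = ar_sdf(f, phi=np.array([1.3, -0.75]), sigma2=1.0, dt=dt)
```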
There are a number of approaches to estimating the parameters \phi_1, \ldots, \phi_p, \sigma_p^2 of the AR(p) process and thus the spectral density:
- The Yule–Walker estimators are found by recursively solving the Yule–Walker equations for an AR(p) process (sketched below)
- The Burg estimators are found by treating the Yule–Walker equations as a form of ordinary least squares problem
- The forward-backward least-squares estimators treat the AR(p) process as a regression problem and solve it using the forward-backward method
- The maximum likelihood estimators estimate the parameters using a maximum likelihood approach
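As a sketch of the first approach, the Yule–Walker estimators can be obtained by solving a Toeplitz linear system built from sample autocovariances. The helper below is an illustrative implementation, not code from the source; its output can be plugged into the SDF formula above.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker(x, p):
    """Estimate AR(p) coefficients phi_1..phi_p and the innovation
    variance sigma_p^2 from the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    N = len(x)
    # Biased sample autocovariances r[0], ..., r[p].
    r = np.array([x[: N - k] @ x[k:] for k in range(p + 1)]) / N
    # Solve the Toeplitz system R phi = (r_1, ..., r_p).
    phi = solve_toeplitz(r[:p], r[1 : p + 1])
    sigma2 = r[0] - phi @ r[1 : p + 1]
    return phi, sigma2
```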
Alternative parametric methods include fitting to a moving-average model (MA) and to a full autoregressive moving-average model (ARMA).
Frequency estimation is the process of estimating the frequency, amplitude, and phase-shift of a signal in the presence of noise, given assumptions about the number of components.[9] This contrasts with the general methods above, which do not make prior assumptions about the components.
See also: Sinusoidal model.
If one only wants to estimate the frequency of the single loudest pure-tone signal, one can use a pitch detection algorithm.
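A crude version of this idea, sketched below under the assumption of a single dominant tone, simply locates the largest peak of the windowed magnitude spectrum; real pitch detection algorithms are considerably more robust (handling harmonics, vibrato, and noise), so this is only an illustration.

```python
import numpy as np

def loudest_tone_hz(x, fs):
    """Estimate the frequency of the single loudest pure tone by
    peak-picking the magnitude spectrum (not a full pitch detector)."""
    X = np.fft.rfft(x * np.hanning(len(x)))   # window to reduce leakage
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return freqs[np.argmax(np.abs(X))]
```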
If the dominant frequency changes over time, then the problem becomes the estimation of the instantaneous frequency as defined in the time–frequency representation. Methods for instantaneous frequency estimation include those based on the Wigner–Ville distribution and higher order ambiguity functions.[10]
If one wants to know all the (possibly complex) frequency components of a received signal (including transmitted signal and noise), one uses a multiple-tone approach.
A typical model for a signal x(n) consists of a sum of p complex exponentials in the presence of white noise w(n):

x(n) = \sum_{i=1}^{p} A_i e^{j n \omega_i} + w(n).

The power spectral density of x(n) is composed of p impulse functions in addition to the spectral density function due to the noise.
The most common methods for frequency estimation involve identifying the noise subspace to extract these components. These methods are based on eigen decomposition of the autocorrelation matrix into a signal subspace and a noise subspace. After these subspaces are identified, a frequency estimation function is used to find the component frequencies from the noise subspace. The most popular methods of noise subspace based frequency estimation are Pisarenko's method, the multiple signal classification (MUSIC) method, the eigenvector method, and the minimum norm method.
Pisarenko's method:

\hat{P}_{PHD}(e^{j\omega}) = \frac{1}{\left| \mathbf{e}^H \mathbf{v}_{\min} \right|^2}

MUSIC:

\hat{P}_{MU}(e^{j\omega}) = \frac{1}{\sum_{i=p+1}^{M} \left| \mathbf{e}^H \mathbf{v}_i \right|^2}

Eigenvector method:

\hat{P}_{EV}(e^{j\omega}) = \frac{1}{\sum_{i=p+1}^{M} \frac{1}{\lambda_i} \left| \mathbf{e}^H \mathbf{v}_i \right|^2}

Minimum norm method:

\hat{P}_{MN}(e^{j\omega}) = \frac{1}{\left| \mathbf{e}^H \mathbf{a} \right|^2}, \qquad \mathbf{a} = \lambda \mathbf{P}_n \mathbf{u}_1

where \mathbf{e} is the vector of complex exponentials at frequency \omega, the \mathbf{v}_i are the noise-subspace eigenvectors with eigenvalues \lambda_i, and \mathbf{v}_{\min} is the eigenvector associated with the smallest eigenvalue.
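The MUSIC estimator, for example, can be sketched as follows (the snapshot length M, grid size, and function name are illustrative assumptions): eigendecompose an estimated autocorrelation matrix, take the M - p eigenvectors with the smallest eigenvalues as the noise subspace, and evaluate the pseudospectrum on a frequency grid; its peaks indicate the component frequencies.

```python
import numpy as np

def music_pseudospectrum(x, p, M=32, n_freqs=1024):
    """MUSIC sketch: 1 / sum_{i>p} |e(omega)^H v_i|^2 over a grid."""
    N = len(x)
    # Estimate an M x M autocorrelation matrix from sliding snapshots.
    snapshots = np.array([x[i : i + M] for i in range(N - M + 1)])
    R = snapshots.conj().T @ snapshots / snapshots.shape[0]
    # eigh returns eigenvalues in ascending order, so the first
    # M - p eigenvectors span the (estimated) noise subspace.
    eigvals, eigvecs = np.linalg.eigh(R)
    noise = eigvecs[:, : M - p]
    omegas = np.linspace(0, 2 * np.pi, n_freqs, endpoint=False)
    # Steering vectors e(omega) = [1, e^{j*omega}, ..., e^{j*(M-1)*omega}].
    E = np.exp(1j * np.outer(np.arange(M), omegas))
    denom = np.sum(np.abs(noise.conj().T @ E) ** 2, axis=0)
    return omegas, 1.0 / denom
```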
Suppose x_n, from n = 0 to N - 1, is a time series (discrete time) with zero mean. Suppose that it is a sum of a finite number of periodic components (all frequencies are positive):

\begin{align}
x_n &= \sum_k A_k \sin(2\pi \nu_k n + \phi_k) \\
&= \sum_k A_k \left( \sin(\phi_k)\cos(2\pi \nu_k n) + \cos(\phi_k)\sin(2\pi \nu_k n) \right) \\
&= \sum_k \left( \overbrace{A_k \sin(\phi_k)}^{a_k} \cos(2\pi \nu_k n) + \overbrace{A_k \cos(\phi_k)}^{b_k} \sin(2\pi \nu_k n) \right)
\end{align}
The variance of x_n is, for a zero-mean function as above, given by

\frac{1}{N} \sum_{n=0}^{N-1} x_n^2.
If these data were samples taken from an electrical signal, this would be its average power (power is energy per unit time, so it is analogous to variance if energy is analogous to the amplitude squared).
Now, for simplicity, suppose the signal extends infinitely in time, so we pass to the limit as N \to \infty. If the average power is bounded, which is almost always the case in reality, then the following limit exists and is the variance of the data:

\lim_{N\to\infty} \frac{1}{N} \sum_{n=0}^{N-1} x_n^2.
Again, for simplicity, we will pass to continuous time, and assume that the signal extends infinitely in time in both directions. Then these two formulas become
x(t) = \sum_k A_k \sin(2\pi \nu_k t + \phi_k)

and

\lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} x(t)^2 \, dt.
The root mean square of \sin is 1/\sqrt{2}, so the variance of A_k \sin(2\pi \nu_k t + \phi_k) is \tfrac{1}{2} A_k^2. Hence, the power contributed to x(t) by the component with frequency \nu_k is \tfrac{1}{2} A_k^2. All these contributions add up to the total power of x(t).
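This claim is easy to check numerically. The sketch below (amplitudes, frequencies, and phases are arbitrary illustrative choices) synthesizes a sum of sinusoids in discrete time and compares its average power with \sum_k \tfrac{1}{2} A_k^2.

```python
import numpy as np

A = np.array([1.0, 2.0, 0.5])          # amplitudes
nu = np.array([0.011, 0.083, 0.27])    # frequencies in cycles per sample
phi = np.array([0.4, 1.1, 2.0])        # phases

n = np.arange(200_000)
x = sum(a * np.sin(2 * np.pi * f * n + p) for a, f, p in zip(A, nu, phi))

print(np.mean(x ** 2))     # average power, approximately 2.625
print(np.sum(A ** 2) / 2)  # sum of A_k^2 / 2, exactly 2.625
```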
Then the power as a function of frequency is \tfrac{1}{2} A_k^2, and its statistical cumulative distribution function S(\nu) will be

S(\nu) = \sum_{k : \nu_k < \nu} \tfrac{1}{2} A_k^2.

S is a step function, monotonically non-decreasing. Its jumps occur at the frequencies of the periodic components of x, and the value of each jump is the power or variance of that component.
The variance is the covariance of the data with itself. If we now consider the same data but with a lag of \tau, we can take the covariance of x(t) with x(t + \tau), and define this to be the autocorrelation function c of the signal (or data) x:

c(\tau) = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, x(t + \tau) \, dt.

If it exists, it is an even function of \tau. If the average power is bounded, then c exists everywhere, is finite, and is bounded by c(0), which is the average power or variance of the data.
It can be shown that c can be decomposed into periodic components with the same periods as x:

c(\tau) = \sum_k \tfrac{1}{2} A_k^2 \cos(2\pi \nu_k \tau).

This is in fact the spectral decomposition of c over the different frequencies, and is related to the distribution of power of x over the frequencies: the amplitude of a frequency component of c is its contribution to the power of the signal.
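The decomposition can likewise be checked numerically. Continuing in discrete time with arbitrary illustrative parameters, the lagged product average agrees with \sum_k \tfrac{1}{2} A_k^2 \cos(2\pi \nu_k \tau):

```python
import numpy as np

A = np.array([1.0, 2.0])
nu = np.array([0.05, 0.13])   # cycles per sample
phi = np.array([0.7, 1.9])

n = np.arange(500_000)
x = sum(a * np.sin(2 * np.pi * f * n + p) for a, f, p in zip(A, nu, phi))

tau = 7
c_empirical = np.mean(x[:-tau] * x[tau:])   # sample autocovariance at lag 7
c_theory = np.sum(0.5 * A ** 2 * np.cos(2 * np.pi * nu * tau))
print(c_empirical, c_theory)                # the two agree closely
```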
The power spectrum of this example is not continuous and therefore has no derivative, so this signal does not have a power spectral density function. In general, the power spectrum will usually be the sum of two parts: a line spectrum such as in this example, which is not continuous and does not have a density function, and a residue, which is absolutely continuous and does have a density function.