In signal processing, white noise is a random signal having equal intensity at different frequencies, giving it a constant power spectral density.[1] The term is used with this or similar meanings in many scientific and technical disciplines, including physics, acoustical engineering, telecommunications, and statistical forecasting. White noise refers to a statistical model for signals and signal sources, rather than to any specific signal. White noise draws its name from white light,[2] although light that appears white generally does not have a flat power spectral density over the visible band.
In discrete time, white noise is a discrete signal whose samples are regarded as a sequence of serially uncorrelated random variables with zero mean and finite variance; a single realization of white noise is a random shock. Depending on the context, one may also require that the samples be independent and have identical probability distribution (in other words, independent and identically distributed random variables are the simplest representation of white noise).[3] In particular, if each sample has a normal distribution with zero mean, the signal is said to be additive white Gaussian noise.[4]
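The discrete-time definition can be illustrated with a short Python sketch; the helper name white_noise and its parameters are illustrative, not from any particular library. It draws i.i.d. Gaussian samples and checks that the empirical mean is near zero and the empirical variance near σ²:

```python
import random
import statistics

def white_noise(n, sigma=1.0, seed=0):
    """Return n i.i.d. zero-mean Gaussian samples: discrete-time white noise."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, sigma) for _ in range(n)]

samples = white_noise(100_000)
print(round(statistics.mean(samples), 2))       # close to 0 (zero mean)
print(round(statistics.pvariance(samples), 2))  # close to sigma^2 = 1
```

Because each sample here is Gaussian as well as independent, this particular sketch generates additive white Gaussian noise; replacing rng.gauss with another zero-mean distribution would still give white noise.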
The samples of a white noise signal may be sequential in time, or arranged along one or more spatial dimensions. In digital image processing, the pixels of a white noise image are typically arranged in a rectangular grid, and are assumed to be independent random variables with uniform probability distribution over some interval. The concept can be defined also for signals spread over more complicated domains, such as a sphere or a torus.
An infinite-bandwidth white noise signal is a purely theoretical construction. The bandwidth of white noise is limited in practice by the mechanism of noise generation, by the transmission medium and by finite observation capabilities. Thus, random signals are considered "white noise" if they are observed to have a flat spectrum over the range of frequencies that are relevant to the context. For an audio signal, the relevant range is the band of audible sound frequencies (between 20 and 20,000 Hz). Such a signal is heard by the human ear as a hissing sound, resembling the /h/ sound in a sustained aspiration. On the other hand, the "sh" sound pronounced as //ʃ// in "ash" is a colored noise because it has a formant structure. In music and acoustics, the term "white noise" may be used for any signal that has a similar hissing sound.
The term white noise is sometimes used in the context of phylogenetically based statistical methods to refer to a lack of phylogenetic pattern in comparative data.[5] It is sometimes used analogously in nontechnical contexts to mean "random talk without meaningful contents".[6][7]
Any distribution of values is possible (although it must have zero DC component). Even a binary signal which can only take on the values 1 or -1 will be white if the sequence is statistically uncorrelated. Noise having a continuous distribution, such as a normal distribution, can of course be white.
It is often incorrectly assumed that Gaussian noise (i.e., noise with a Gaussian amplitude distribution; see normal distribution) necessarily refers to white noise, yet neither property implies the other. Gaussianity refers to the probability distribution with respect to the value, in this context the probability of the signal falling within any particular range of amplitudes, while the term 'white' refers to the way the signal power is distributed (i.e., independently) over time or among frequencies.
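The point that whiteness and Gaussianity are separate properties can be checked numerically; the following sketch (with arbitrary sample size and seed) builds a binary ±1 signal, which is as far from Gaussian as a distribution can be, and verifies that it is nevertheless white in the weak sense of near-zero mean and near-zero lag-1 autocorrelation:

```python
import random

rng = random.Random(1)
n = 200_000
# A signal restricted to the two values +1 and -1: decidedly non-Gaussian.
x = [rng.choice((-1.0, 1.0)) for _ in range(n)]

mean = sum(x) / n
# Lag-1 sample autocorrelation; near zero means successive samples are uncorrelated.
lag1 = sum(x[i] * x[i + 1] for i in range(n - 1)) / (n - 1)
print(abs(mean) < 0.01 and abs(lag1) < 0.01)  # True: white, despite being binary
```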
One form of white noise is the generalized mean-square derivative of the Wiener process or Brownian motion.
A generalization to random elements on infinite dimensional spaces, such as random fields, is the white noise measure.
White noise is commonly used in the production of electronic music, usually either directly or as an input for a filter to create other types of noise signal. It is used extensively in audio synthesis, typically to recreate percussive instruments such as cymbals or snare drums which have high noise content in their frequency domain.[8] A simple example of white noise is a nonexistent radio station (static).
White noise is also used to obtain the impulse response of an electrical circuit, in particular of amplifiers and other audio equipment. It is not used for testing loudspeakers as its spectrum contains too great an amount of high-frequency content. Pink noise, which differs from white noise in that it has equal energy in each octave, is used for testing transducers such as loudspeakers and microphones.
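The idea of deriving colored noise from white noise by filtering can be sketched as follows. Note the hedge: a one-pole lowpass is only an illustration of "coloring"; true pink noise requires a filter with a -3 dB/octave response, which this simple recursion does not provide. The filter coefficient alpha is an arbitrary illustrative choice:

```python
import random

rng = random.Random(6)
n = 100_000
white = [rng.gauss(0.0, 1.0) for _ in range(n)]

# One-pole lowpass filter: a minimal "coloring" of white noise.
# (Not a pink-noise filter; purely illustrative.)
alpha = 0.9
colored = [white[0]]
for t in range(1, n):
    colored.append(alpha * colored[-1] + (1 - alpha) * white[t])

def lag1_autocorr(x):
    """Sample autocorrelation at lag 1."""
    m = sum(x) / len(x)
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

# The white input has near-zero lag-1 autocorrelation; the filtered
# (colored) signal has strongly correlated successive samples.
print(round(lag1_autocorr(white), 2), round(lag1_autocorr(colored), 2))
```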
White noise is used as the basis of some random number generators. For example, Random.org uses a system of atmospheric antennas to generate random digit patterns from sources that can be well-modeled by white noise.[9]
White noise is a common synthetic noise source used for sound masking by a tinnitus masker.[10] White noise machines and other white noise sources are sold as privacy enhancers and sleep aids (see music and sleep) and to mask tinnitus.[11] The Marpac Sleep-Mate, built in 1962 by traveling salesman Jim Buckwalter, was the first domestic-use white noise machine.[12] Alternatively, an AM radio tuned to unused frequencies ("static") is a simpler and more cost-effective source of white noise.[13] However, white noise generated from a common commercial radio receiver tuned to an unused frequency is extremely vulnerable to contamination with spurious signals, such as adjacent radio stations, harmonics from non-adjacent radio stations, interference from electrical equipment in the vicinity of the receiving antenna, or even atmospheric events such as solar flares and especially lightning.
The effects of white noise upon cognitive function are mixed. One small study found that white noise background stimulation improves cognitive functioning among secondary students with attention deficit hyperactivity disorder (ADHD), while decreasing performance of non-ADHD students.[14][15] Other work indicates it is effective in improving the mood and performance of workers by masking background office noise,[16] but decreases cognitive performance in complex card sorting tasks.[17]
Similarly, an experiment was carried out on sixty-six healthy participants to observe the benefits of using white noise in a learning environment. The experiment involved the participants identifying different images whilst hearing different sounds in the background. Overall, the experiment showed that white noise has some benefits for learning: it slightly improved the participants' learning abilities and their recognition memory.[18]
A random vector (that is, a random variable with values in Rn) is said to be a white noise vector or white random vector if its components each have a probability distribution with zero mean and finite variance, and are statistically independent: that is, their joint probability distribution must be the product of the distributions of the individual components.[19]
A necessary (but, in general, not sufficient) condition for statistical independence of two variables is that they be statistically uncorrelated; that is, their covariance is zero. Therefore, the covariance matrix R of the components of a white noise vector w with n elements must be an n by n diagonal matrix, where each diagonal element Rii is the variance of component wi; and the correlation matrix must be the n by n identity matrix.
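The diagonal structure of the covariance matrix can be checked empirically. In this sketch, the dimensions, per-component standard deviations, and sample counts are arbitrary illustrative choices; each component is drawn independently, so the sample covariance matrix should come out close to diag(σ1², σ2², σ3²):

```python
import random

rng = random.Random(7)
n_dim, n_draws = 3, 50_000
sigmas = [1.0, 2.0, 0.5]  # per-component standard deviations (illustrative)

draws = [[rng.gauss(0.0, sigmas[i]) for i in range(n_dim)] for _ in range(n_draws)]

# Sample covariance matrix; for a white random vector it should be close to
# the diagonal matrix with entries sigma_i^2, with near-zero off-diagonals.
cov = [[sum(d[i] * d[j] for d in draws) / n_draws for j in range(n_dim)]
       for i in range(n_dim)]
print([[round(c, 2) for c in row] for row in cov])  # near diag(1.0, 4.0, 0.25)
```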
If, in addition to being independent, every variable in w also has a normal distribution with zero mean and the same variance σ², then w is said to be a Gaussian white noise vector. In that case, the joint distribution of w is a multivariate normal distribution; the independence between the variables then implies that the distribution has spherical symmetry in n-dimensional space. Therefore, any orthogonal transformation of the vector will result in a Gaussian white random vector. In particular, under most types of discrete Fourier transform the transform W of w will be a Gaussian white noise vector, too; that is, the n Fourier coefficients of w will be independent Gaussian variables with zero mean and the same variance σ².
The power spectrum P of a random vector w can be defined as the expected value of the squared modulus of each coefficient of its Fourier transform W, that is, Pi = E(|Wi|²). Under that definition, a Gaussian white noise vector will have a perfectly flat power spectrum, with Pi = σ² for all i.

If w is a white random vector, but not a Gaussian one, its Fourier coefficients Wi will not be completely independent of each other; although for large n and common probability distributions the dependencies are very subtle, and their pairwise correlations can be assumed to be zero.
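The flat power spectrum can be observed numerically. The following sketch uses an unnormalized DFT (so each bin's expected power is n·σ², here divided by n to recover σ²) and averages |Wi|² over many realizations; the vector length, trial count, and seed are arbitrary illustrative choices:

```python
import cmath
import random

rng = random.Random(3)
n, trials = 16, 2000

def dft(x):
    """Unnormalized discrete Fourier transform of a length-n sequence."""
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

# Average |W_i|^2 / n over many white-noise realizations; for unit-variance
# white noise every bin should come out near sigma^2 = 1 (a flat spectrum).
power = [0.0] * n
for _ in range(trials):
    w = [rng.gauss(0.0, 1.0) for _ in range(n)]
    W = dft(w)
    for i in range(n):
        power[i] += abs(W[i]) ** 2 / n
power = [p / trials for p in power]
print(round(min(power), 2), round(max(power), 2))  # both close to 1
```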
Often the weaker condition "statistically uncorrelated" is used in the definition of white noise, instead of "statistically independent". However, some of the commonly expected properties of white noise (such as flat power spectrum) may not hold for this weaker version. Under this assumption, the stricter version can be referred to explicitly as independent white noise vector.[20] Other authors use strongly white and weakly white instead.[21]
An example of a random vector that is "Gaussian white noise" in the weak but not in the strong sense is x = [x1, x2], where x1 is a normal random variable with zero mean, and x2 is equal to +x1 or to -x1, with equal probability. These two variables are uncorrelated and individually normally distributed, but they are not jointly normally distributed and are not independent. If x is rotated by 45 degrees, its two components will still be uncorrelated, but their distribution will no longer be normal.
In some situations, one may relax the definition by allowing each component of a white random vector w to have a non-zero expected value μ. In image processing especially, where samples are typically restricted to positive values, one often takes μ to be one half of the maximum sample value. In that case, the Fourier coefficient W0 corresponding to the zero-frequency component (essentially, the average of the wi) will also have a non-zero expected value μ√n; and the power spectrum P will be flat only over the non-zero frequencies.

A discrete-time stochastic process W(n) is a generalization of a random vector with a finite number of components to infinitely many components. A discrete-time stochastic process W(n) is called white noise if its mean is equal to zero for all n, that is, E[W(n)] = 0, and if the autocorrelation function R_W(n) = E[W(k+n)W(k)] has a nonzero value only for n = 0, that is, R_W(n) = σ²δ(n).
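The defining property R_W(n) = σ²δ(n) can be estimated from a finite realization; in this sketch the sample count, σ, and lags are arbitrary illustrative choices:

```python
import random

rng = random.Random(5)
sigma, n = 2.0, 100_000
w = [rng.gauss(0.0, sigma) for _ in range(n)]

def autocorr(x, lag):
    """Sample estimate of the autocorrelation R_W(lag) = E[W(k + lag) W(k)]."""
    m = len(x) - lag
    return sum(x[k + lag] * x[k] for k in range(m)) / m

print(round(autocorr(w, 0), 1))  # close to sigma^2 = 4 (the lag-0 spike)
print(round(autocorr(w, 1), 1))  # close to 0
print(round(autocorr(w, 5), 1))  # close to 0
```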
In order to define the notion of "white noise" in the theory of continuous-time signals, one must replace the concept of a "random vector" by a continuous-time random signal; that is, a random process that generates a function w of a real-valued parameter t.

Such a process is said to be white noise in the strongest sense if the value w(t) for any time t is a random variable that is statistically independent of its entire history before t. A weaker definition requires independence only between the values w(t1) and w(t2) at every pair of distinct times t1 and t2. An even weaker definition requires only that such pairs w(t1) and w(t2) be uncorrelated.[22] As in the discrete case, some authors adopt the weaker definition for "white noise", and use the qualifier independent to refer to either of the stronger definitions; others use weakly white and strongly white to distinguish between them.

However, a precise definition of these concepts is not trivial, because some quantities that are finite sums in the finite discrete case must be replaced by integrals that may not converge. Indeed, the set of all possible instances of a signal w is no longer a finite-dimensional space Rn, but an infinite-dimensional function space. Moreover, by any definition a white noise signal w would have to be essentially discontinuous at every point; therefore even the simplest operations on w, like integration over a finite interval, require advanced mathematical machinery.
Some authors require each value w(t) to be a real-valued random variable with expectation μ and some finite variance σ². Then the covariance E(w(t1)⋅w(t2)) between the values at two times t1 and t2 is well-defined: it is zero if the times are distinct, and σ² if they are equal. However, by this definition, the integral

W[a,a+r] = ∫_a^{a+r} w(t) dt

over any interval with positive width r would be simply the width times the expectation, rμ. This property would render the concept inadequate as a model of physical "white noise" signals.
Therefore, most authors define the signal w indirectly by specifying random values for the integrals of w(t) and |w(t)|² over each interval [a, a+r]. In this approach, however, the value of w(t) at an isolated time cannot be defined as a real-valued random variable. Also, the covariance E(w(t1)⋅w(t2)) becomes infinite when t1 = t2; and the autocorrelation function R(t1, t2) must be defined as Nδ(t1−t2), where N is some real constant and δ is the Dirac delta function.
In this approach, one usually specifies that the integral W_I of w(t) over an interval I = [a, b] is a real random variable with normal distribution, zero mean, and variance (b−a)σ²; and also that the covariance E(W_I ⋅ W_J) of the integrals W_I, W_J is rσ², where r is the width of the intersection I∩J of the two intervals I, J. This model is called a Gaussian white noise signal (or process).
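The interval-integral property can be checked by simulation. The sketch below approximates the integral W_I over I = [0, 2] by an Euler-type discretization, summing independent N(0, σ²·dt) increments; the interval, step size, σ, and trial count are all arbitrary illustrative choices. The empirical variance of the integral should come out near (b−a)σ²:

```python
import random

rng = random.Random(11)
sigma = 1.5
a, b = 0.0, 2.0           # interval I = [a, b]
dt = 0.01
steps = int((b - a) / dt)
trials = 4000

# Each discretized sample of W_I: a sum of independent N(0, sigma^2 * dt)
# increments, approximating the integral of w(t) over I.
vals = [sum(rng.gauss(0.0, sigma * dt ** 0.5) for _ in range(steps))
        for _ in range(trials)]

mean = sum(vals) / trials
var = sum((v - mean) ** 2 for v in vals) / trials
print(round(var, 1))  # close to (b - a) * sigma^2 = 2 * 2.25 = 4.5
```

This is the same construction that links Gaussian white noise to Brownian motion: the integrals W_I behave like increments of a Wiener process, consistent with the earlier remark that white noise is the generalized derivative of Brownian motion.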
In the mathematical field known as white noise analysis, a Gaussian white noise w is defined as a stochastic tempered distribution, i.e. a random variable with values in the space S′(R) of tempered distributions. Analogous to the case for finite-dimensional random vectors, a probability law on the infinite-dimensional space S′(R) can be defined via its characteristic function: just as a Gaussian random vector X ∼ Nn(μ, Σ) is characterized by

E(e^{i⟨X,k⟩}) = e^{i⟨μ,k⟩ − ½kᵀΣk}   for all k ∈ Rn,

a Gaussian white noise w: Ω → S′(R) is required to satisfy

E(e^{i⟨w,φ⟩}) = e^{−½‖φ‖₂²}   for all φ ∈ S(R),

where ⟨w, φ⟩ denotes the dual pairing of the tempered distribution w(ω) with the test function φ, for each ω ∈ Ω, and ‖φ‖₂² = ∫_R |φ(x)|² dx is the squared L² norm of φ. The existence of such a random element is guaranteed by the Bochner–Minlos theorem.
In statistics and econometrics one often assumes that an observed series of data values is the sum of the values generated by a deterministic linear process, depending on certain independent (explanatory) variables, and a series of random noise values. Then regression analysis is used to infer the parameters of the model process from the observed data, e.g. by ordinary least squares, and to test the null hypothesis that each of the parameters is zero against the alternative hypothesis that it is non-zero. Hypothesis testing typically assumes that the noise values are mutually uncorrelated with zero mean and have the same Gaussian probability distribution; in other words, that the noise is Gaussian white (not just white). If there is non-zero correlation between the noise values underlying different observations, then the estimated model parameters are still unbiased, but estimates of their uncertainties (such as confidence intervals) will be biased (not accurate on average). This is also true if the noise is heteroskedastic, that is, if it has different variances for different data points.
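A minimal ordinary-least-squares sketch with Gaussian white noise errors looks as follows; the true intercept, slope, noise level, and sample size are arbitrary illustrative choices, and the estimates should recover the true parameters closely:

```python
import random

rng = random.Random(2)
n = 10_000
a_true, b_true, sigma = 1.0, 3.0, 0.5  # illustrative true parameters

xs = [rng.uniform(0, 1) for _ in range(n)]
# Observed data: deterministic linear process plus Gaussian white noise.
ys = [a_true + b_true * x + rng.gauss(0.0, sigma) for x in xs]

# Ordinary least squares estimates of slope and intercept.
mx = sum(xs) / n
my = sum(ys) / n
b_hat = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
a_hat = my - b_hat * mx
print(round(a_hat, 1), round(b_hat, 1))  # close to 1.0 and 3.0
```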
Alternatively, in the subset of regression analysis known as time series analysis there are often no explanatory variables other than the past values of the variable being modeled (the dependent variable). In this case the noise process is often modeled as a moving average process, in which the current value of the dependent variable depends on current and past values of a sequential white noise process.
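As a sketch of such a moving average model, the following builds an MA(1) process from a white noise sequence; the coefficient θ = 0.6 is an arbitrary illustrative choice. For an MA(1) process the theoretical lag-1 autocorrelation is θ/(1 + θ²), which the simulation should reproduce:

```python
import random

rng = random.Random(4)
n, theta = 200_000, 0.6
e = [rng.gauss(0.0, 1.0) for _ in range(n + 1)]  # white noise shocks

# MA(1): current value = current shock + theta * previous shock.
x = [e[t] + theta * e[t - 1] for t in range(1, n + 1)]

mean = sum(x) / n
var = sum(v * v for v in x) / n - mean ** 2
lag1 = (sum(x[t] * x[t + 1] for t in range(n - 1)) / (n - 1) - mean ** 2) / var
print(round(lag1, 2))  # theory: theta / (1 + theta^2) = 0.6 / 1.36 ≈ 0.44
```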
These two ideas are crucial in applications such as channel estimation and channel equalization in communications and audio. These concepts are also used in data compression.
In particular, by a suitable linear transformation (a coloring transformation), a white random vector can be used to produce a "non-white" random vector (that is, a list of random variables) whose elements have a prescribed covariance matrix. Conversely, a random vector with known covariance matrix can be transformed into a white random vector by a suitable whitening transformation.
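A standard coloring transformation multiplies a white vector by a Cholesky factor L of the target covariance matrix C, since then cov(Lw) = L·cov(w)·Lᵀ = L·I·Lᵀ = C. The sketch below uses an arbitrary 2×2 target covariance and factors it by hand:

```python
import random

rng = random.Random(9)
n = 100_000

# Target covariance matrix for the "colored" vector (an illustrative choice).
C = [[2.0, 1.0],
     [1.0, 1.0]]
# Its 2x2 Cholesky factor L (so that C = L L^T), computed by hand.
l00 = C[0][0] ** 0.5
l10 = C[1][0] / l00
l11 = (C[1][1] - l10 ** 2) ** 0.5

colored = []
for _ in range(n):
    w0, w1 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)  # white random vector
    colored.append((l00 * w0, l10 * w0 + l11 * w1))    # coloring transformation

# Empirical covariance of the colored vector: should be close to C.
c00 = sum(u * u for u, _ in colored) / n
c01 = sum(u * v for u, v in colored) / n
c11 = sum(v * v for _, v in colored) / n
print(round(c00, 1), round(c01, 1), round(c11, 1))  # close to 2.0 1.0 1.0
```

Applying the inverse factor L⁻¹ to a vector with covariance C undoes this, which is the whitening transformation mentioned above.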
White noise may be generated digitally with a digital signal processor, microprocessor, or microcontroller. Generating white noise typically entails feeding an appropriate stream of random numbers to a digital-to-analog converter. The quality of the white noise will depend on the quality of the algorithm used.[23]
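As a sketch of the digital approach, the following generates a buffer of uniform white noise formatted as 16-bit signed PCM, the kind of stream that could be fed to a digital-to-analog converter; the function name, amplitude scaling, and 44.1 kHz figure are illustrative choices, not a specific device's API:

```python
import random
import struct

def white_noise_pcm(n_samples, amplitude=0.5, seed=42):
    """Return n_samples of uniform white noise as little-endian 16-bit PCM."""
    rng = random.Random(seed)
    frames = bytearray()
    for _ in range(n_samples):
        sample = int(rng.uniform(-1.0, 1.0) * amplitude * 32767)
        frames += struct.pack('<h', sample)  # one signed 16-bit sample
    return bytes(frames)

buf = white_noise_pcm(44_100)  # one second of noise at 44.1 kHz
print(len(buf))                # 88200 bytes (2 bytes per sample)
```

As the article notes, the quality of the result depends on the quality of the underlying random number algorithm; Python's default Mersenne Twister is adequate for audio but not for cryptographic use.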
The term is sometimes used as a colloquialism to describe a backdrop of ambient sound, creating an indistinct or seamless commotion.
The term can also be used metaphorically, as in the novel White Noise (1985) by Don DeLillo, which explores symptoms of modern culture that make it difficult for an individual to actualize their ideas and personality.