Bussgang theorem explained

In mathematics, the Bussgang theorem is a theorem of stochastic analysis. The theorem states that the cross-correlation between a Gaussian signal before and after it has passed through a nonlinear operation is equal to the signal's autocorrelation up to a constant factor. It was first published by Julian J. Bussgang in 1952 while he was at the Massachusetts Institute of Technology.[1]

Statement

Let $\{X(t)\}$ be a zero-mean stationary Gaussian random process and $\{Y(t)\} = g(X(t))$, where $g(\cdot)$ is a nonlinear amplitude distortion.

If $R_X(\tau)$ is the autocorrelation function of $\{X(t)\}$, then the cross-correlation function of $\{X(t)\}$ and $\{Y(t)\}$ is

$$R_{XY}(\tau) = C\, R_X(\tau),$$

where $C$ is a constant that depends only on $g(\cdot)$.

It can be further shown that

$$C = \frac{1}{\sigma^3\sqrt{2\pi}} \int_{-\infty}^{\infty} u\, g(u)\, e^{-\frac{u^2}{2\sigma^2}} \, du,$$

where $\sigma^2 = R_X(0)$ is the variance of the process.
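
As a quick numerical illustration, the following is a minimal sketch in Python with NumPy; the moving-average process, the choice $g(u) = \tanh(u)$, and all parameter values are illustrative assumptions, not part of the theorem. It estimates both sides of $R_{XY}(\tau) = C R_X(\tau)$ from a simulated process, with $C$ computed from the integral above by a simple Riemann sum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative zero-mean stationary Gaussian process: white Gaussian noise
# smoothed by a short moving-average filter (any stationary Gaussian process works).
n = 400_000
w = rng.standard_normal(n)
x = np.convolve(w, np.ones(8) / np.sqrt(8.0), mode="same")  # unit variance
sigma = x.std()

g = np.tanh            # illustrative memoryless nonlinearity g(.)
y = g(x)

def corr(a, b, lag):
    """Sample estimate of E[a(t) b(t + lag)]."""
    return np.mean(a[: n - lag] * b[lag:])

# Bussgang constant C = (1 / (sigma^3 sqrt(2 pi))) int u g(u) exp(-u^2/(2 sigma^2)) du,
# evaluated by a simple Riemann sum.
u = np.linspace(-8 * sigma, 8 * sigma, 20_001)
du = u[1] - u[0]
C = np.sum(u * g(u) * np.exp(-(u**2) / (2 * sigma**2))) * du / (sigma**3 * np.sqrt(2 * np.pi))

for lag in (0, 1, 2, 4, 8):
    print(f"tau={lag}: R_XY={corr(x, y, lag):+.4f}  C*R_X={C * corr(x, x, lag):+.4f}")
```

The two printed columns should agree at every lag up to sampling error, which is the content of the theorem.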

Derivation for One-bit Quantization

It is a property of the two-dimensional normal distribution that the joint density of $y_1$ and $y_2$ depends only on their covariance and is given explicitly by the expression

$$p(y_1, y_2) = \frac{1}{2\pi\sqrt{1-\rho^2}}\, e^{-\frac{y_1^2 + y_2^2 - 2\rho y_1 y_2}{2(1-\rho^2)}},$$

where $y_1$ and $y_2$ are standard Gaussian random variables with correlation $\phi_{y_1 y_2} = \rho$.

Assume that $r_2 = Q(y_2)$; the correlation between $y_1$ and $r_2$ is

$$\phi_{y_1 r_2} = \frac{1}{2\pi\sqrt{1-\rho^2}} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} y_1\, Q(y_2)\, e^{-\frac{y_1^2 + y_2^2 - 2\rho y_1 y_2}{2(1-\rho^2)}} \, dy_1\, dy_2.$$

Since

$$\int_{-\infty}^{\infty} y_1\, e^{-\frac{y_1^2 - 2\rho y_2 y_1}{2(1-\rho^2)}} \, dy_1 = \rho\sqrt{2\pi(1-\rho^2)}\; y_2\, e^{\frac{\rho^2 y_2^2}{2(1-\rho^2)}},$$

the correlation $\phi_{y_1 r_2}$ may be simplified as

$$\phi_{y_1 r_2} = \frac{\rho}{\sqrt{2\pi}} \int_{-\infty}^{\infty} y_2\, Q(y_2)\, e^{-\frac{y_2^2}{2}} \, dy_2.$$

The integral above is seen to depend only on the distortion characteristic $Q(\cdot)$ and is independent of $\rho$.

Remembering that $\rho = \phi_{y_1 y_2}$, we observe that for a given distortion characteristic $Q(\cdot)$ the ratio $\phi_{y_1 r_2}/\phi_{y_1 y_2}$ is the constant

$$K_Q = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} y_2\, Q(y_2)\, e^{-\frac{y_2^2}{2}} \, dy_2.$$

Therefore, the correlation can be rewritten in the form

$$\phi_{y_1 r_2} = K_Q\, \phi_{y_1 y_2}.$$

The above equation is the mathematical expression of Bussgang's theorem.
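
As a sanity check of this relation, the following sketch uses an illustrative hard limiter $Q(y) = \min(\max(y, -1), 1)$ standing in for the distortion characteristic (any memoryless $Q$ would do), computes $K_Q$ by simple quadrature, and compares $\phi_{y_1 r_2}$ from Monte Carlo with $K_Q\,\rho$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative distortion characteristic: a hard limiter (clipper).
Q = lambda y: np.clip(y, -1.0, 1.0)

# K_Q = (1 / sqrt(2 pi)) int y Q(y) exp(-y^2 / 2) dy, by a Riemann sum.
y = np.linspace(-10.0, 10.0, 200_001)
dy = y[1] - y[0]
K_Q = np.sum(y * Q(y) * np.exp(-(y**2) / 2)) * dy / np.sqrt(2 * np.pi)

# Monte Carlo check that phi_{y1 r2} = K_Q * phi_{y1 y2} for several correlations.
n = 1_000_000
for rho in (0.2, 0.5, 0.9):
    y1 = rng.standard_normal(n)
    y2 = rho * y1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # corr(y1, y2) = rho
    print(f"rho={rho}: phi_y1r2={np.mean(y1 * Q(y2)):+.4f}  K_Q*rho={K_Q * rho:+.4f}")
```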

If $Q(x) = \operatorname{sign}(x)$, i.e., one-bit quantization, then

$$K_Q = \frac{2}{\sqrt{2\pi}} \int_0^{\infty} y_2\, e^{-\frac{y_2^2}{2}} \, dy_2 = \sqrt{\frac{2}{\pi}}.$$

[2] [3] [4]
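
The closed form $\sqrt{2/\pi}$ is easy to confirm numerically; the integral $\int_0^\infty y\, e^{-y^2/2}\, dy$ equals 1, so a short quadrature sketch suffices:

```python
import numpy as np

# K_Q for one-bit quantization: (2 / sqrt(2 pi)) * int_0^inf y exp(-y^2 / 2) dy.
y = np.linspace(0.0, 10.0, 100_001)
integral = np.sum(y * np.exp(-(y**2) / 2)) * (y[1] - y[0])    # -> 1
print(2 / np.sqrt(2 * np.pi) * integral, np.sqrt(2 / np.pi))  # both ~ 0.7979
```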

Arcsine law

If the two random variables are both distorted, i.e., $r_1 = Q(y_1)$, $r_2 = Q(y_2)$, the correlation of $r_1$ and $r_2$ is

$$\phi_{r_1 r_2} = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} Q(y_1)\, Q(y_2)\, p(y_1, y_2)\, dy_1\, dy_2.$$

When $Q(x) = \operatorname{sign}(x)$, the expression becomes

$$\phi_{r_1 r_2} = \frac{1}{2\pi\sqrt{1-\rho^2}} \left[\int_{0}^{\infty}\!\int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2 + \int_{-\infty}^{0}\!\int_{-\infty}^{0} e^{-\alpha} \, dy_1 dy_2 - \int_{0}^{\infty}\!\int_{-\infty}^{0} e^{-\alpha} \, dy_1 dy_2 - \int_{-\infty}^{0}\!\int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2 \right],$$

where

$$\alpha = \frac{y_1^2 + y_2^2 - 2\rho y_1 y_2}{2(1-\rho^2)}.$$

Noticing that

$$\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} p(y_1, y_2) \, dy_1 dy_2 = \frac{1}{2\pi\sqrt{1-\rho^2}} \left[\int_{0}^{\infty}\!\int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2 + \int_{-\infty}^{0}\!\int_{-\infty}^{0} e^{-\alpha} \, dy_1 dy_2 + \int_{0}^{\infty}\!\int_{-\infty}^{0} e^{-\alpha} \, dy_1 dy_2 + \int_{-\infty}^{0}\!\int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2 \right] = 1,$$

and, by the symmetry of $\alpha$,

$$\int_{0}^{\infty}\!\int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2 = \int_{-\infty}^{0}\!\int_{-\infty}^{0} e^{-\alpha} \, dy_1 dy_2, \qquad \int_{0}^{\infty}\!\int_{-\infty}^{0} e^{-\alpha} \, dy_1 dy_2 = \int_{-\infty}^{0}\!\int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2,$$

we can simplify the expression of $\phi_{r_1 r_2}$ to

$$\phi_{r_1 r_2} = \frac{4}{2\pi\sqrt{1-\rho^2}} \int_{0}^{\infty}\!\int_{0}^{\infty} e^{-\alpha} \, dy_1 dy_2 - 1.$$

Also, it is convenient to introduce the polar coordinates $y_1 = R\cos\theta$, $y_2 = R\sin\theta$, in which $\alpha = \frac{R^2(1 - \rho\sin 2\theta)}{2(1-\rho^2)}$. It is thus found that

$$\phi_{r_1 r_2} = \frac{4}{2\pi\sqrt{1-\rho^2}} \int_{0}^{\pi/2}\!\int_{0}^{\infty} e^{-\frac{R^2(1-\rho\sin 2\theta)}{2(1-\rho^2)}}\, R \, dR\, d\theta - 1.$$

Integration over $R$ and then over $\theta$ gives

$$\phi_{r_1 r_2} = \frac{2\sqrt{1-\rho^2}}{\pi} \int_{0}^{\pi/2} \frac{d\theta}{1 - \rho\sin 2\theta} - 1 = \frac{2}{\pi} \arctan\!\left(\frac{\tan\theta - \rho}{\sqrt{1-\rho^2}}\right) \Bigg|_{0}^{\pi/2} - 1 = \frac{2}{\pi}\arcsin(\rho).$$
This is called the "arcsine law", which was first found by J. H. Van Vleck in 1943 and republished in 1966. The arcsine law can also be proved in a simpler way by applying Price's theorem.[5]
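
A Monte Carlo check of the arcsine law is a few lines (a minimal sketch; the sample size and correlation values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo check of E[sign(y1) sign(y2)] = (2 / pi) arcsin(rho).
n = 1_000_000
for rho in (-0.7, 0.1, 0.5, 0.95):
    y1 = rng.standard_normal(n)
    y2 = rho * y1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    phi = np.mean(np.sign(y1) * np.sign(y2))
    print(f"rho={rho:+.2f}: MC={phi:+.4f}  (2/pi)arcsin={2 / np.pi * np.arcsin(rho):+.4f}")
```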

The function $f(x) = \frac{2}{\pi}\arcsin x$ can be approximated as $f(x) \approx \frac{2}{\pi} x$ when $x$ is small.
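
This is the leading term of the Taylor expansion $\arcsin x = x + \frac{x^3}{6} + O(x^5)$, so the error is of order $x^3$; a quick numerical comparison (values are illustrative):

```python
import numpy as np

# Compare f(x) = (2/pi) arcsin(x) with its small-x approximation (2/pi) x.
for x in (0.01, 0.05, 0.1, 0.3):
    print(f"x={x}: f(x)={2 / np.pi * np.arcsin(x):.5f}  (2/pi)x={2 / np.pi * x:.5f}")
```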

Price's Theorem

Given two jointly normal random variables $y_1$ and $y_2$ with joint probability function

$$p(y_1, y_2) = \frac{1}{2\pi\sqrt{1-\rho^2}}\, e^{-\frac{y_1^2 + y_2^2 - 2\rho y_1 y_2}{2(1-\rho^2)}},$$

we form the mean

$$I(\rho) = E\left[g(y_1, y_2)\right] = \int_{-\infty}^{+\infty}\!\int_{-\infty}^{+\infty} g(y_1, y_2)\, p(y_1, y_2)\, dy_1\, dy_2$$

of some function $g(y_1, y_2)$ of $(y_1, y_2)$. If $g(y_1, y_2)\, p(y_1, y_2) \to 0$ as $(y_1, y_2) \to \infty$, then

$$\frac{\partial^n I(\rho)}{\partial \rho^n} = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \frac{\partial^{2n} g(y_1, y_2)}{\partial y_1^n\, \partial y_2^n}\, p(y_1, y_2)\, dy_1\, dy_2 = E\left[\frac{\partial^{2n} g(y_1, y_2)}{\partial y_1^n\, \partial y_2^n}\right].$$
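
Price's theorem can be spot-checked numerically for a smooth choice of $g$. The sketch below uses the illustrative choice $g(y_1, y_2) = y_1^3 y_2$, for which $\partial^2 g / \partial y_1 \partial y_2 = 3y_1^2$ and $I(\rho) = 3\rho$, so for $n = 1$ both sides of the identity equal 3:

```python
import numpy as np

rng = np.random.default_rng(3)

# Common random numbers keep the finite-difference estimate of dI/drho stable.
n = 2_000_000
z1 = rng.standard_normal(n)
z2 = rng.standard_normal(n)

g = lambda y1, y2: y1**3 * y2      # illustrative smooth g; E[g] = 3 rho
d2g = lambda y1, y2: 3 * y1**2     # d^2 g / (dy1 dy2); E[d2g] = 3

def I(rho):
    """Monte Carlo estimate of I(rho) = E[g(y1, y2)] at correlation rho."""
    return np.mean(g(z1, rho * z1 + np.sqrt(1 - rho**2) * z2))

rho, h = 0.5, 0.01
lhs = (I(rho + h) - I(rho - h)) / (2 * h)                    # dI/drho (n = 1)
rhs = np.mean(d2g(z1, rho * z1 + np.sqrt(1 - rho**2) * z2))  # E[3 y1^2]
print(lhs, rhs)   # both should be close to 3
```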
Proof. The joint characteristic function of the random variables $y_1$ and $y_2$ is by definition the integral

$$\Phi(\omega_1, \omega_2) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} p(y_1, y_2)\, e^{j(\omega_1 y_1 + \omega_2 y_2)} \, dy_1\, dy_2 = \exp\left\{-\frac{\omega_1^2 + \omega_2^2 + 2\rho\omega_1\omega_2}{2}\right\}.$$

From the two-dimensional inversion formula of the Fourier transform, it follows that

$$p(y_1, y_2) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \Phi(\omega_1, \omega_2)\, e^{-j(\omega_1 y_1 + \omega_2 y_2)} \, d\omega_1\, d\omega_2 = \frac{1}{4\pi^2} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \exp\left\{-\frac{\omega_1^2 + \omega_2^2 + 2\rho\omega_1\omega_2}{2} - j(\omega_1 y_1 + \omega_2 y_2)\right\} \, d\omega_1\, d\omega_2.$$
Therefore, plugging the expression of $p(y_1, y_2)$ into $I(\rho)$ and differentiating with respect to $\rho$, we obtain

$$\begin{aligned}
\frac{\partial^n I(\rho)}{\partial \rho^n} &= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} g(y_1, y_2)\, \frac{\partial^n p(y_1, y_2)}{\partial \rho^n}\, dy_1\, dy_2 \\
&= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} g(y_1, y_2) \left(\frac{1}{4\pi^2} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \frac{\partial^n \Phi(\omega_1, \omega_2)}{\partial \rho^n}\, e^{-j(\omega_1 y_1 + \omega_2 y_2)} \, d\omega_1\, d\omega_2\right) dy_1\, dy_2 \\
&= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} g(y_1, y_2) \left(\frac{(-1)^n}{4\pi^2} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \omega_1^n \omega_2^n\, \Phi(\omega_1, \omega_2)\, e^{-j(\omega_1 y_1 + \omega_2 y_2)} \, d\omega_1\, d\omega_2\right) dy_1\, dy_2 \\
&= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} g(y_1, y_2) \left(\frac{1}{4\pi^2} \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \Phi(\omega_1, \omega_2)\, \frac{\partial^{2n} e^{-j(\omega_1 y_1 + \omega_2 y_2)}}{\partial y_1^n\, \partial y_2^n} \, d\omega_1\, d\omega_2\right) dy_1\, dy_2 \\
&= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} g(y_1, y_2)\, \frac{\partial^{2n} p(y_1, y_2)}{\partial y_1^n\, \partial y_2^n}\, dy_1\, dy_2.
\end{aligned}$$

After repeated integration by parts, and using the condition at $\infty$, we obtain Price's theorem:

$$\begin{aligned}
\frac{\partial^n I(\rho)}{\partial \rho^n} &= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} g(y_1, y_2)\, \frac{\partial^{2n} p(y_1, y_2)}{\partial y_1^n\, \partial y_2^n}\, dy_1\, dy_2 \\
&= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \frac{\partial^2 g(y_1, y_2)}{\partial y_1\, \partial y_2}\, \frac{\partial^{2n-2} p(y_1, y_2)}{\partial y_1^{n-1}\, \partial y_2^{n-1}}\, dy_1\, dy_2 \\
&= \cdots \\
&= \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \frac{\partial^{2n} g(y_1, y_2)}{\partial y_1^n\, \partial y_2^n}\, p(y_1, y_2)\, dy_1\, dy_2.
\end{aligned}$$

Proof of Arcsine law by Price's Theorem

If $g(y_1, y_2) = \operatorname{sign}(y_1)\operatorname{sign}(y_2)$, then

$$\frac{\partial^2 g(y_1, y_2)}{\partial y_1\, \partial y_2} = 4\delta(y_1)\delta(y_2),$$

where $\delta(\cdot)$ is the Dirac delta function.

Substituting into Price's theorem with $n = 1$, we obtain

$$\frac{\partial E\left[\operatorname{sign}(y_1)\operatorname{sign}(y_2)\right]}{\partial \rho} = \frac{\partial I(\rho)}{\partial \rho} = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} 4\delta(y_1)\delta(y_2)\, p(y_1, y_2)\, dy_1\, dy_2 = \frac{2}{\pi\sqrt{1-\rho^2}}.$$
When $\rho = 0$, $y_1$ and $y_2$ are independent, so $I(0) = 0$. Thus

$$E\left[\operatorname{sign}(y_1)\operatorname{sign}(y_2)\right] = I(\rho) = \frac{2}{\pi} \int_{0}^{\rho} \frac{1}{\sqrt{1-\rho^2}} \, d\rho = \frac{2}{\pi}\arcsin(\rho),$$

Application

This theorem implies that a simplified correlator can be designed. Instead of having to multiply two signals, the cross-correlation problem reduces to the gating of one signal with another.
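
For instance, with a unit-variance Gaussian input one can one-bit quantize ("gate") one channel and recover the correlation by dividing out the known factor $K_Q = \sqrt{2/\pi}$. The sketch below is a minimal illustration; the signals and the true correlation value are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two correlated unit-variance Gaussian signals (illustrative).
n = 500_000
rho_true = 0.6
s = rng.standard_normal(n)
x = s
y = rho_true * s + np.sqrt(1 - rho_true**2) * rng.standard_normal(n)

r_full = np.mean(x * y)            # full multiplying correlator

# Simplified correlator: gate x with the sign of y (one-bit quantization),
# then undo the Bussgang factor K_Q = sqrt(2 / pi).
r_gated = np.mean(x * np.sign(y)) / np.sqrt(2 / np.pi)

print(r_full, r_gated)   # both estimate rho_true = 0.6
```

The gated version replaces one of the multipliers with a sign decision, which is the hardware simplification the theorem makes possible.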

Notes and References

  1. J. J. Bussgang, "Cross-correlation function of amplitude-distorted Gaussian signals," Res. Lab. Elec., Mass. Inst. Technol., Cambridge, MA, Tech. Rep. 216, March 1952.
  2. J. H. Van Vleck, "The Spectrum of Clipped Noise," Radio Research Laboratory Report No. 51, Harvard University, 1943.
  3. J. H. Van Vleck and D. Middleton, "The spectrum of clipped noise," Proceedings of the IEEE, vol. 54, no. 1, pp. 2–19, January 1966. doi:10.1109/PROC.1966.4567.
  4. R. Price, "A useful theorem for nonlinear devices having Gaussian inputs," IRE Transactions on Information Theory, vol. 4, no. 2, pp. 69–72, June 1958. doi:10.1109/TIT.1958.1057444.
  5. A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill, 2002, ISBN 0-07-366011-6, p. 396.