Classical capacity explained

In quantum information theory, the classical capacity of a quantum channel is the maximum rate at which classical data can be sent over it error-free in the limit of many uses of the channel. Holevo, Schumacher, and Westmoreland proved the following least upper bound on the classical capacity of any quantum channel $\mathcal{N}$:

$$\chi(\mathcal{N}) = \max_{\rho^{XA}} I(X;B)_{\mathcal{N}(\rho)},$$

where $\rho^{XA}$ is a classical-quantum state of the following form:

$$\rho^{XA} = \sum_x p_X(x)\, |x\rangle\langle x|^X \otimes \rho_x^A,$$

$p_X(x)$ is a probability distribution, and each $\rho_x^A$ is a density operator that can be input to the channel $\mathcal{N}$.
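As a numerical illustration (a sketch added here, not part of the original derivation): for a fixed input ensemble, the quantity being maximized equals the Holevo information $I(X;B) = H\left(\sum_x p_X(x)\rho_x\right) - \sum_x p_X(x) H(\rho_x)$. The ensemble below (equiprobable $|0\rangle$ and $|+\rangle$) is an illustrative choice:

```python
import numpy as np

def von_neumann_entropy(rho):
    """H(rho) = -Tr{rho log2 rho}, computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # convention: 0 log 0 = 0
    return float(-np.sum(evals * np.log2(evals)))

def holevo_information(probs, states):
    """chi = H(sum_x p(x) rho_x) - sum_x p(x) H(rho_x)."""
    avg = sum(p * s for p, s in zip(probs, states))
    return von_neumann_entropy(avg) - sum(
        p * von_neumann_entropy(s) for p, s in zip(probs, states))

# Example ensemble: equiprobable pure states |0> and |+> (non-orthogonal).
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
chi = holevo_information([0.5, 0.5],
                         [np.outer(ket0, ket0), np.outer(ketp, ketp)])
print(round(chi, 4))  # about 0.6009 bits: less than 1 because the states overlap
```

Because the two pure states are non-orthogonal, the achievable rate falls strictly below the one bit per channel use that two orthogonal states would give.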

Achievability using sequential decoding

We briefly review the HSW coding theorem (the statement of the achievability of the Holevo information rate $I(X;B)$ for communicating classical data over a quantum channel). We first review the minimal amount of quantum mechanics needed for the theorem. We then cover quantum typicality, and finally we prove the theorem using a recent sequential decoding technique.

Review of quantum mechanics

In order to prove the HSW coding theorem, we really just need a few basic things from quantum mechanics. First, a quantum state is a unit trace, positive operator known as a density operator. Usually, we denote it by $\rho$, $\sigma$, $\omega$, etc. The simplest model for a quantum channel is known as a classical-quantum channel:

$$x \mapsto \rho_x.$$

The meaning of the above notation is that inputting the classical letter $x$ at the transmitting end leads to a quantum state $\rho_x$ at the receiving end. It is the task of the receiver to perform a measurement to determine the input of the sender. If the states $\rho_x$ are perfectly distinguishable from one another (i.e., if they have orthogonal supports such that $\mathrm{Tr}\{\rho_x \rho_{x'}\} = 0$ for $x \neq x'$), then the channel is a noiseless channel. We are interested in situations for which this is not the case. If the states $\rho_x$ all commute with one another, then this is effectively identical to the situation for a classical channel, so we are also not interested in these situations. So, the situation in which we are interested is that in which the states $\rho_x$ have overlapping support and are non-commutative.

The most general way to describe a quantum measurement is with a positive operator-valued measure (POVM). We usually denote the elements of a POVM as $\{\Lambda_m\}_m$. These operators should satisfy positivity and completeness in order to form a valid POVM:

$$\Lambda_m \geq 0 \quad \forall m,$$
$$\sum_m \Lambda_m = I.$$

The probabilistic interpretation of quantum mechanics states that if someone measures a quantum state $\rho$ using a measurement device corresponding to the POVM $\{\Lambda_m\}$, then the probability $p(m)$ for obtaining outcome $m$ is equal to

$$p(m) = \mathrm{Tr}\{\Lambda_m \rho\},$$

and the post-measurement state is

$$\rho_m' = \frac{1}{p(m)}\, \sqrt{\Lambda_m}\, \rho\, \sqrt{\Lambda_m},$$

if the person measuring obtains outcome $m$. These rules are sufficient for us to consider classical communication schemes over cq channels.
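The two measurement rules above can be sketched numerically; this is an illustrative NumPy snippet (the state and POVM chosen here are assumptions for the example), computing $p(m) = \mathrm{Tr}\{\Lambda_m \rho\}$ and the post-measurement state $\sqrt{\Lambda_m}\,\rho\,\sqrt{\Lambda_m}/p(m)$:

```python
import numpy as np

def psd_sqrt(A):
    """Matrix square root of a positive semi-definite operator."""
    w, v = np.linalg.eigh(A)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def measure(rho, povm):
    """Apply the POVM rules: p(m) = Tr{Lambda_m rho}, and the
    post-measurement state sqrt(Lambda_m) rho sqrt(Lambda_m) / p(m)."""
    results = []
    for lam in povm:
        p = float(np.trace(lam @ rho).real)
        s = psd_sqrt(lam)
        post = s @ rho @ s / p if p > 1e-12 else None
        results.append((p, post))
    return results

# Example: measuring |+><+| with the computational-basis POVM {|0><0|, |1><1|}.
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho_plus = np.outer(ketp, ketp)
povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
assert np.allclose(sum(povm), np.eye(2))   # completeness: sum_m Lambda_m = I
outcomes = measure(rho_plus, povm)
# Each outcome occurs with probability 1/2, since |<0|+>|^2 = |<1|+>|^2 = 1/2,
# and the post-measurement state collapses to |0><0| or |1><1|.
```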

Quantum typicality

The reader can find a good review of this topic in the article about the typical subspace.
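As a concrete illustration of quantum typicality (a sketch under stated assumptions, not from the original text): for a diagonal qubit state $\rho$, the typical projector $\Pi^n_{\rho,\delta}$ keeps the eigenvectors of $\rho^{\otimes n}$ whose eigenvalues $\lambda$ satisfy $|-\tfrac{1}{n}\log_2 \lambda - H(\rho)| \leq \delta$, and $\mathrm{Tr}\{\Pi^n_{\rho,\delta}\,\rho^{\otimes n}\} \to 1$ as $n$ grows. For a diagonal $\rho$ the eigenvalue of a basis string with $k$ occurrences of the smaller eigenvalue can be counted combinatorially:

```python
from math import comb, log2

p = 0.9                                       # rho = diag(p, 1-p), an example state
H = -p * log2(p) - (1 - p) * log2(1 - p)      # von Neumann entropy = binary entropy
delta = 0.1

def typical_weight(n):
    """Tr{Pi^n_{rho,delta} rho^{tensor n}} for diagonal rho: total probability of
    eigenvalue strings whose sample entropy is delta-close to H(rho)."""
    total = 0.0
    for k in range(n + 1):                    # k = number of (1-p)-eigenvalues
        log_lam = (n - k) * log2(p) + k * log2(1 - p)   # log2 of the eigenvalue
        if abs(-log_lam / n - H) <= delta:
            total += comb(n, k) * p ** (n - k) * (1 - p) ** k
    return total

w100, w400 = typical_weight(100), typical_weight(400)
# The typical subspace captures more and more of the weight as n grows.
assert w400 > w100 > 0.5
```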

Gentle operator lemma

The following lemma is important for our proofs. It demonstrates that a measurement that succeeds with high probability on average does not disturb the state too much on average:

Lemma: [Winter] Given an ensemble $\{p_X(x), \rho_x\}$ with expected density operator $\rho \equiv \sum_x p_X(x)\, \rho_x$, suppose that an operator $\Lambda$ such that $I \geq \Lambda \geq 0$ succeeds with high probability on the state $\rho$:

$$\mathrm{Tr}\{\Lambda \rho\} \geq 1 - \epsilon.$$

Then the subnormalized state $\sqrt{\Lambda}\,\rho_x\,\sqrt{\Lambda}$ is close in expected trace distance to the original state $\rho_x$:

$$\mathbb{E}_X\left\{\left\Vert \sqrt{\Lambda}\,\rho_X\,\sqrt{\Lambda} - \rho_X\right\Vert_1\right\} \leq 2\sqrt{\epsilon}.$$

(Note that $\left\Vert A \right\Vert_1$ is the nuclear norm of the operator $A$, so that $\left\Vert A \right\Vert_1 \equiv \mathrm{Tr}\left\{\sqrt{A^\dagger A}\right\}$.)
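The lemma can be checked numerically on a randomly generated ensemble; the construction below (random densities, a projector $\Lambda$ onto the dominant eigenspace of $\rho$, eigenvalue cutoff $0.05$) is an illustrative assumption, and since $\Lambda$ is a projector, $\sqrt{\Lambda} = \Lambda$:

```python
import numpy as np

rng = np.random.default_rng(1)

def trace_norm(A):
    """Nuclear norm ||A||_1 = Tr{sqrt(A^dagger A)} = sum of singular values."""
    return float(np.linalg.norm(A, ord='nuc'))

def random_density(d):
    """Random d x d density operator (unit trace, positive)."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    R = G @ G.conj().T
    return R / np.trace(R).real

d, num = 4, 5
probs = np.full(num, 1.0 / num)
states = [random_density(d) for _ in range(num)]
rho = sum(p * s for p, s in zip(probs, states))

# Lambda: projector onto the eigenvectors of rho carrying most of its weight,
# so that Tr{Lambda rho} >= 1 - eps for a small eps.
w, v = np.linalg.eigh(rho)
P = v[:, w > 0.05] @ v[:, w > 0.05].conj().T
eps = 1 - float(np.trace(P @ rho).real)

# Gentle operator lemma: E_X ||sqrt(Lambda) rho_X sqrt(Lambda) - rho_X||_1 <= 2 sqrt(eps).
lhs = sum(p * trace_norm(P @ s @ P - s) for p, s in zip(probs, states))
assert lhs <= 2 * np.sqrt(eps) + 1e-12
```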

The following inequality is useful for us as well. It holds for any operators $\rho$, $\sigma$, $\Lambda$ such that $0 \leq \rho, \sigma, \Lambda \leq I$:

$$\mathrm{Tr}\{\Lambda \rho\} \leq \mathrm{Tr}\{\Lambda \sigma\} + \left\Vert \rho - \sigma \right\Vert_1.$$

The quantum information-theoretic interpretation of the above inequality is that the probability of obtaining outcome $\Lambda$ from a quantum measurement acting on the state $\rho$ is upper bounded by the probability of obtaining outcome $\Lambda$ on the state $\sigma$ summed with the distinguishability of the two states $\rho$ and $\sigma$.
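A quick numerical sanity check of this inequality on randomly drawn operators (the random construction is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_density(d):
    """Random d x d density operator (unit trace, positive)."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    R = G @ G.conj().T
    return R / np.trace(R).real

# Check Tr{Lambda rho} <= Tr{Lambda sigma} + ||rho - sigma||_1
# for random states rho, sigma and a random effect 0 <= Lambda <= I.
d = 3
rho, sigma = random_density(d), random_density(d)
U = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))[0]
Lam = U @ np.diag(rng.uniform(0, 1, d)) @ U.conj().T   # eigenvalues in [0, 1]

lhs = np.trace(Lam @ rho).real
rhs = np.trace(Lam @ sigma).real + np.linalg.norm(rho - sigma, ord='nuc')
assert lhs <= rhs + 1e-12
```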

Non-commutative union bound

Lemma: [Sen's bound] The following bound holds for a subnormalized state $\sigma$ such that $0 \leq \sigma$ and $\mathrm{Tr}\{\sigma\} \leq 1$, with $\Pi_1$, ..., $\Pi_N$ being projectors:

$$\mathrm{Tr}\{\sigma\} - \mathrm{Tr}\{\Pi_N \cdots \Pi_1\, \sigma\, \Pi_1 \cdots \Pi_N\} \leq 2\sqrt{\sum_{i=1}^N \mathrm{Tr}\{(I - \Pi_i)\,\sigma\}}.$$

We can think of Sen's bound as a "non-commutative union bound" because it is analogous to the following union bound from probability theory:

$$\Pr\left\{\left(A_1 \cap \cdots \cap A_N\right)^c\right\} = \Pr\left\{A_1^c \cup \cdots \cup A_N^c\right\} \leq \sum_{i=1}^N \Pr\left\{A_i^c\right\},$$

where $A_1, \ldots, A_N$ are events. The analogous bound for projector logic would be

$$\mathrm{Tr}\left\{\left(I - \Pi_1 \cdots \Pi_N \cdots \Pi_1\right)\rho\right\} \leq \sum_{i=1}^N \mathrm{Tr}\left\{\left(I - \Pi_i\right)\rho\right\},$$

if we think of $\Pi_1 \cdots \Pi_N$ as a projector onto the intersection of subspaces. However, the above bound only holds if the projectors $\Pi_1$, ..., $\Pi_N$ are commuting (choosing $\Pi_1 = |+\rangle\langle +|$, $\Pi_2 = |0\rangle\langle 0|$, and $\rho = |0\rangle\langle 0|$ gives a counterexample). If the projectors are non-commuting, then Sen's bound is the next best thing and suffices for our purposes here.

HSW theorem with the non-commutative union bound

We now prove the HSW theorem with Sen's non-commutative union bound. We divide up the proof into a few parts: codebook generation, POVM construction, and error analysis.

Codebook Generation. We first describe how Alice and Bob agree on a random choice of code. They have the channel $x \mapsto \rho_x$ and a distribution $p_X(x)$. They choose $M$ classical sequences $x^n$ according to the IID distribution $p_{X^n}(x^n)$. After selecting them, they label them with indices as $\{x^n(m)\}_{m \in [M]}$. This leads to the following quantum codewords:

$$\rho_{x^n(m)} = \rho_{x_1(m)} \otimes \cdots \otimes \rho_{x_n(m)}.$$

The quantum codebook is then $\{\rho_{x^n(m)}\}_m$. The average state of the codebook is then

$$\mathbb{E}_{X^n}\left\{\rho_{X^n}\right\} = \sum_{x^n} p_{X^n}(x^n)\, \rho_{x^n} = \rho^{\otimes n},$$

where $\rho = \sum_x p_X(x)\, \rho_x$.
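The codebook construction can be sketched for a toy cq channel (the channel, distribution, and block length below are illustrative assumptions; real block lengths make these tensor products astronomically large):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cq channel: x=0 -> |0><0|, x=1 -> |+><+| (non-orthogonal outputs),
# with a uniform input distribution p_X.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = {0: np.outer(ket0, ket0), 1: np.outer(ketp, ketp)}

def codeword_state(xn):
    """Tensor-product codeword rho_{x^n} = rho_{x_1} (x) ... (x) rho_{x_n}."""
    out = np.array([[1.0]])
    for x in xn:
        out = np.kron(out, rho[x])
    return out

# Choose M classical sequences of length n IID from p_X and form the codebook.
M, n = 4, 3
codebook = [tuple(rng.choice([0, 1], size=n, p=[0.5, 0.5])) for _ in range(M)]
states = [codeword_state(xn) for xn in codebook]

# Each quantum codeword is a valid density operator on (C^2)^{tensor n}.
for s in states:
    assert s.shape == (2 ** n, 2 ** n)
    assert abs(np.trace(s).real - 1) < 1e-9
```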

POVM Construction. Sen's bound from the above lemma suggests a method for Bob to decode a state that Alice transmits. Bob should first ask "Is the received state in the average typical subspace?" He can do this operationally by performing a typical subspace measurement corresponding to $\{\Pi^n_{\rho,\delta},\, I - \Pi^n_{\rho,\delta}\}$. Next, he asks in sequential order, "Is the received codeword in the $m$th conditionally typical subspace?" This is in some sense equivalent to the question, "Is the received codeword the $m$th transmitted codeword?" He can ask these questions operationally by performing the measurements corresponding to the conditionally typical projectors $\{\Pi^{x^n(m)}_{\rho,\delta},\, I - \Pi^{x^n(m)}_{\rho,\delta}\}$.

Why should this sequential decoding scheme work well? The reason is that the transmitted codeword lies in the typical subspace on average:

$$\mathbb{E}_{X^n}\left\{\mathrm{Tr}\left\{\Pi^n_{\rho,\delta}\, \rho_{X^n}\right\}\right\} = \mathrm{Tr}\left\{\Pi^n_{\rho,\delta}\, \mathbb{E}_{X^n}\left\{\rho_{X^n}\right\}\right\} = \mathrm{Tr}\left\{\Pi^n_{\rho,\delta}\, \rho^{\otimes n}\right\} \geq 1 - \epsilon,$$

where the inequality follows from the unconditional quantum typicality of the average state $\rho^{\otimes n}$. Also, the projectors $\Pi^{x^n(m)}_{\rho,\delta}$ are "good detectors" for the states $\rho_{x^n(m)}$ (on average) because the following condition holds from conditional quantum typicality:

$$\mathbb{E}_{X^n}\left\{\mathrm{Tr}\left\{\Pi^{X^n}_{\rho,\delta}\, \rho_{X^n}\right\}\right\} \geq 1 - \epsilon.$$

Error Analysis. The probability of detecting the $m$th codeword correctly under our sequential decoding scheme is equal to

$$\mathrm{Tr}\left\{\Pi^{x^n(m)}_{\rho,\delta}\, \hat{\Pi}^{x^n(m-1)}_{\rho,\delta} \cdots \hat{\Pi}^{x^n(1)}_{\rho,\delta}\, \Pi^n_{\rho,\delta}\, \rho_{x^n(m)}\, \Pi^n_{\rho,\delta}\, \hat{\Pi}^{x^n(1)}_{\rho,\delta} \cdots \hat{\Pi}^{x^n(m-1)}_{\rho,\delta}\, \Pi^{x^n(m)}_{\rho,\delta}\right\},$$

where we make the abbreviation $\hat{\Pi} \equiv I - \Pi$. (Observe that we project into the average typical subspace just once.) Thus, the probability of an incorrect detection for the $m$th codeword is given by

$$1 - \mathrm{Tr}\left\{\Pi^{x^n(m)}_{\rho,\delta}\, \hat{\Pi}^{x^n(m-1)}_{\rho,\delta} \cdots \hat{\Pi}^{x^n(1)}_{\rho,\delta}\, \Pi^n_{\rho,\delta}\, \rho_{x^n(m)}\, \Pi^n_{\rho,\delta}\, \hat{\Pi}^{x^n(1)}_{\rho,\delta} \cdots \hat{\Pi}^{x^n(m-1)}_{\rho,\delta}\, \Pi^{x^n(m)}_{\rho,\delta}\right\},$$

and the average error probability of this scheme is equal to

$$1 - \frac{1}{M}\sum_m \mathrm{Tr}\left\{\Pi^{x^n(m)}_{\rho,\delta}\, \hat{\Pi}^{x^n(m-1)}_{\rho,\delta} \cdots \hat{\Pi}^{x^n(1)}_{\rho,\delta}\, \Pi^n_{\rho,\delta}\, \rho_{x^n(m)}\, \Pi^n_{\rho,\delta}\, \hat{\Pi}^{x^n(1)}_{\rho,\delta} \cdots \hat{\Pi}^{x^n(m-1)}_{\rho,\delta}\, \Pi^{x^n(m)}_{\rho,\delta}\right\}.$$

Instead of analyzing the average error probability, we analyze the expectation of the average error probability, where the expectation is with respect to the random choice of code:

$$\mathbb{E}_{X^n}\left\{1 - \frac{1}{M}\sum_m \mathrm{Tr}\left\{\Pi^{X^n(m)}_{\rho,\delta}\, \hat{\Pi}^{X^n(m-1)}_{\rho,\delta} \cdots \hat{\Pi}^{X^n(1)}_{\rho,\delta}\, \Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\, \hat{\Pi}^{X^n(1)}_{\rho,\delta} \cdots \hat{\Pi}^{X^n(m-1)}_{\rho,\delta}\, \Pi^{X^n(m)}_{\rho,\delta}\right\}\right\}.$$

Our first step is to apply Sen's bound to the above quantity. But before doing so, we should rewrite the above expression just slightly, by observing that

$$1 = \mathbb{E}_{X^n}\left\{\frac{1}{M}\sum_m \mathrm{Tr}\left\{\rho_{X^n(m)}\right\}\right\}$$
$$= \mathbb{E}_{X^n}\left\{\frac{1}{M}\sum_m \left[\mathrm{Tr}\left\{\Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\right\} + \mathrm{Tr}\left\{\hat{\Pi}^n_{\rho,\delta}\, \rho_{X^n(m)}\right\}\right]\right\}$$
$$= \mathbb{E}_{X^n}\left\{\frac{1}{M}\sum_m \mathrm{Tr}\left\{\Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\right\}\right\} + \frac{1}{M}\sum_m \mathrm{Tr}\left\{\hat{\Pi}^n_{\rho,\delta}\, \mathbb{E}_{X^n}\left\{\rho_{X^n(m)}\right\}\right\}$$
$$= \mathbb{E}_{X^n}\left\{\frac{1}{M}\sum_m \mathrm{Tr}\left\{\Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\right\}\right\} + \mathrm{Tr}\left\{\hat{\Pi}^n_{\rho,\delta}\, \rho^{\otimes n}\right\}$$
$$\leq \mathbb{E}_{X^n}\left\{\frac{1}{M}\sum_m \mathrm{Tr}\left\{\Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\right\}\right\} + \epsilon,$$

where the last inequality follows from unconditional quantum typicality.

Substituting this into the expression for the expected average error probability (and forgetting about the small $\epsilon$ term for now) gives an upper bound of

$$\mathbb{E}_{X^n}\left\{\frac{1}{M}\sum_m \mathrm{Tr}\left\{\Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\right\}\right\}$$
$$- \mathbb{E}_{X^n}\left\{\frac{1}{M}\sum_m \mathrm{Tr}\left\{\Pi^{X^n(m)}_{\rho,\delta}\, \hat{\Pi}^{X^n(m-1)}_{\rho,\delta} \cdots \hat{\Pi}^{X^n(1)}_{\rho,\delta}\, \Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\, \hat{\Pi}^{X^n(1)}_{\rho,\delta} \cdots \hat{\Pi}^{X^n(m-1)}_{\rho,\delta}\, \Pi^{X^n(m)}_{\rho,\delta}\right\}\right\}.$$

We then apply Sen's bound to this expression with $\sigma = \Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}$ and the sequential projectors as $\Pi^{X^n(m)}_{\rho,\delta}$, $\hat{\Pi}^{X^n(m-1)}_{\rho,\delta}$, ..., $\hat{\Pi}^{X^n(1)}_{\rho,\delta}$. This gives the upper bound

$$\mathbb{E}_{X^n}\left\{\frac{1}{M}\sum_m 2\left[\mathrm{Tr}\left\{\left(I - \Pi^{X^n(m)}_{\rho,\delta}\right)\Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\right\} + \sum_{i=1}^{m-1} \mathrm{Tr}\left\{\Pi^{X^n(i)}_{\rho,\delta}\, \Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\right\}\right]^{1/2}\right\}.$$

Due to concavity of the square root, we can bound this expression from above by

$$2\left[\mathbb{E}_{X^n}\left\{\frac{1}{M}\sum_m \left(\mathrm{Tr}\left\{\left(I - \Pi^{X^n(m)}_{\rho,\delta}\right)\Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\right\} + \sum_{i=1}^{m-1} \mathrm{Tr}\left\{\Pi^{X^n(i)}_{\rho,\delta}\, \Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\right\}\right)\right\}\right]^{1/2}$$
$$\leq 2\left[\mathbb{E}_{X^n}\left\{\frac{1}{M}\sum_m \left(\mathrm{Tr}\left\{\left(I - \Pi^{X^n(m)}_{\rho,\delta}\right)\Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\right\} + \sum_{i \neq m} \mathrm{Tr}\left\{\Pi^{X^n(i)}_{\rho,\delta}\, \Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\right\}\right)\right\}\right]^{1/2},$$

where the second bound follows by summing over all of the codewords not equal to the $m$th codeword (this sum can only be larger).

We now focus exclusively on showing that the term inside the square root can be made small. Consider the first term:

$$\mathbb{E}_{X^n}\left\{\frac{1}{M}\sum_m \mathrm{Tr}\left\{\left(I - \Pi^{X^n(m)}_{\rho,\delta}\right)\Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\right\}\right\}$$
$$\leq \mathbb{E}_{X^n}\left\{\frac{1}{M}\sum_m \left[\mathrm{Tr}\left\{\left(I - \Pi^{X^n(m)}_{\rho,\delta}\right)\rho_{X^n(m)}\right\} + \left\Vert \rho_{X^n(m)} - \Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\right\Vert_1\right]\right\}$$
$$\leq \epsilon + 2\sqrt{\epsilon},$$

where the first inequality follows from the operator inequality $\mathrm{Tr}\{\Lambda\rho\} \leq \mathrm{Tr}\{\Lambda\sigma\} + \Vert\rho - \sigma\Vert_1$ given above, and the second inequality follows from the gentle operator lemma and the properties of unconditional and conditional typicality. Consider now the second term and the following chain of inequalities:

$$\sum_{i \neq m} \mathbb{E}_{X^n}\left\{\mathrm{Tr}\left\{\Pi^{X^n(i)}_{\rho,\delta}\, \Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\right\}\right\}$$
$$= \sum_{i \neq m} \mathrm{Tr}\left\{\mathbb{E}_{X^n}\left\{\Pi^{X^n(i)}_{\rho,\delta}\right\}\Pi^n_{\rho,\delta}\, \mathbb{E}_{X^n}\left\{\rho_{X^n(m)}\right\}\Pi^n_{\rho,\delta}\right\}$$
$$= \sum_{i \neq m} \mathrm{Tr}\left\{\mathbb{E}_{X^n}\left\{\Pi^{X^n(i)}_{\rho,\delta}\right\}\Pi^n_{\rho,\delta}\, \rho^{\otimes n}\, \Pi^n_{\rho,\delta}\right\}$$
$$\leq \sum_{i \neq m} 2^{-n\left[H(B) - \delta\right]}\, \mathrm{Tr}\left\{\mathbb{E}_{X^n}\left\{\Pi^{X^n(i)}_{\rho,\delta}\right\}\Pi^n_{\rho,\delta}\right\}.$$

The first equality follows because the codewords $X^n(m)$ and $X^n(i)$ are independent since they are different. The second equality follows because the average state of the codebook is $\rho^{\otimes n}$. The first inequality follows from the equipartition property of the typical subspace, $\Pi^n_{\rho,\delta}\, \rho^{\otimes n}\, \Pi^n_{\rho,\delta} \leq 2^{-n[H(B)-\delta]}\, \Pi^n_{\rho,\delta}$. Continuing, we have

$$\leq \sum_{i \neq m} 2^{-n\left[H(B) - \delta\right]}\, \mathbb{E}_{X^n}\left\{\mathrm{Tr}\left\{\Pi^{X^n(i)}_{\rho,\delta}\right\}\right\}$$
$$\leq \sum_{i \neq m} 2^{-n\left[H(B) - \delta\right]}\, 2^{n\left[H(B|X) + \delta\right]}$$
$$= \sum_{i \neq m} 2^{-n\left[I(X;B) - 2\delta\right]}$$
$$\leq M\, 2^{-n\left[I(X;B) - 2\delta\right]}.$$

The first inequality follows from $\Pi^n_{\rho,\delta} \leq I$ and exchanging the trace with the expectation. The second inequality follows from the bound $\mathrm{Tr}\{\Pi^{X^n}_{\rho,\delta}\} \leq 2^{n[H(B|X)+\delta]}$ on the dimension of the conditionally typical subspace. The next two steps are straightforward.

Putting everything together, we get our final bound on the expectation of the average error probability:

$$1 - \mathbb{E}_{X^n}\left\{\frac{1}{M}\sum_m \mathrm{Tr}\left\{\Pi^{X^n(m)}_{\rho,\delta}\, \hat{\Pi}^{X^n(m-1)}_{\rho,\delta} \cdots \hat{\Pi}^{X^n(1)}_{\rho,\delta}\, \Pi^n_{\rho,\delta}\, \rho_{X^n(m)}\, \Pi^n_{\rho,\delta}\, \hat{\Pi}^{X^n(1)}_{\rho,\delta} \cdots \hat{\Pi}^{X^n(m-1)}_{\rho,\delta}\, \Pi^{X^n(m)}_{\rho,\delta}\right\}\right\}$$
$$\leq \epsilon + 2\left[\left(\epsilon + 2\sqrt{\epsilon}\right) + M\, 2^{-n\left[I(X;B) - 2\delta\right]}\right]^{1/2}.$$

Thus, as long as we choose $M = 2^{n\left[I(X;B) - 3\delta\right]}$, there exists a code with vanishing error probability.
