Quantum statistical mechanics

Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems. In quantum mechanics a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics.

Expectation

From classical probability theory, we know that the expectation of a random variable X is defined in terms of its distribution D_X by

\operatorname{E}(X)=\int_{\mathbb{R}} \lambda \, d\operatorname{D}_X(\lambda)

assuming, of course, that the random variable is integrable or non-negative. Similarly, let A be an observable of a quantum mechanical system, given by a densely defined self-adjoint operator on H. By the spectral theorem, A is uniquely determined by (and uniquely determines) its spectral measure E_A, the projection-valued measure on the Borel subsets of R satisfying

A=\int_{\mathbb{R}} \lambda \, d\operatorname{E}_A(\lambda).

E_A is a Boolean homomorphism from the Borel subsets of R into the lattice Q of self-adjoint projections of H. In analogy with probability theory, given a state S, we introduce the distribution of A under S, which is the probability measure defined on the Borel subsets of R by

\operatorname{D}_A(U)=\operatorname{Tr}(\operatorname{E}_A(U)S).

Similarly, the expected value of A is defined in terms of the probability distribution D_A by

\operatorname{E}(A)=\int_{\mathbb{R}} \lambda \, d\operatorname{D}_A(\lambda).

Note that this expectation is relative to the mixed state S which is used in the definition of D_A.

Remark. For technical reasons, one needs to consider separately the positive and negative parts of A defined by the Borel functional calculus for unbounded operators.

One can easily show:

\operatorname{E}(A)=\operatorname{Tr}(AS)=\operatorname{Tr}(SA).

Note that if S is a pure state corresponding to a unit vector \psi, then:

\operatorname{E}(A)=\langle\psi|A|\psi\rangle.

The trace of an operator A is written as follows:

\operatorname{Tr}(A)=\sum_{m}\langle m|A|m\rangle,

where \{|m\rangle\} is any orthonormal basis of H.
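
As a concrete numerical illustration (a hypothetical sketch using NumPy; the observable and the states are arbitrary choices, not taken from the text), the following checks that Tr(AS) agrees with ⟨ψ|A|ψ⟩ when S is the pure state |ψ⟩⟨ψ|, and evaluates the same trace formula for a mixed state.

import numpy as np

# Hypothetical 2x2 self-adjoint observable and a normalized state vector psi.
A = np.array([[1.0, 2.0],
              [2.0, -1.0]])
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# Density operator of the pure state S = |psi><psi|.
S = np.outer(psi, psi.conj())

print(np.trace(A @ S).real)        # E(A) = Tr(AS)
print(np.vdot(psi, A @ psi).real)  # E(A) = <psi|A|psi>; agrees with the trace formula

# The trace formula applies equally to mixed states, e.g. an equal mixture
# of the two basis states:
S_mixed = np.diag([0.5, 0.5])
print(np.trace(A @ S_mixed).real)  # E(A) = (1 + (-1)) / 2 = 0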

Von Neumann entropy

See main article: Von Neumann entropy.

Of particular significance for describing the randomness of a state is the von Neumann entropy of S, formally defined by

\operatorname{H}(S)=-\operatorname{Tr}(S\log_2 S).

Actually, the operator S \log_2 S is not necessarily trace-class. However, if S is a non-negative self-adjoint operator that is not trace-class, we define \operatorname{Tr}(S) = +\infty. Also note that any density operator S can be diagonalized; that is, it can be represented in some orthonormal basis by a (possibly infinite) matrix of the form

\begin{bmatrix} \lambda_1 & 0 & \cdots & 0 & \cdots \\ 0 & \lambda_2 & \cdots & 0 & \cdots \\ \vdots & \vdots & \ddots & & \\ 0 & 0 & & \lambda_n & \\ \vdots & \vdots & & & \ddots \end{bmatrix}

and we define

\operatorname{H}(S)=-\sum_i \lambda_i \log_2 \lambda_i.

The convention is that

0 \log_2 0 = 0,

since an event with probability zero should not contribute to the entropy. This value is an extended real number (that is, it lies in [0, ∞]) and it is clearly a unitary invariant of S.
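
As a minimal numerical sketch (hypothetical, using NumPy), the following function diagonalizes a density matrix and evaluates -\sum_i \lambda_i \log_2 \lambda_i, dropping zero eigenvalues in line with the convention 0 \log_2 0 = 0.

import numpy as np

def von_neumann_entropy(S):
    # Eigenvalues of the (self-adjoint) density matrix S.
    eigvals = np.linalg.eigvalsh(S)
    # Drop numerically zero eigenvalues, following the convention 0 log2 0 = 0.
    eigvals = eigvals[eigvals > 1e-12]
    return float(-np.sum(eigvals * np.log2(eigvals)))

print(von_neumann_entropy(np.diag([1.0, 0.0])))   # pure state: H = 0
print(von_neumann_entropy(np.diag([0.5, 0.5])))   # maximally mixed state on C^2: H = log2 2 = 1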

Remark. It is indeed possible that H(S) = +∞ for some density operator S. In fact, let T be the diagonal matrix

T=\begin{bmatrix} \frac{1}{2(\log_2 2)^2} & 0 & \cdots & 0 & \cdots \\ 0 & \frac{1}{3(\log_2 3)^2} & \cdots & 0 & \cdots \\ \vdots & \vdots & \ddots & & \\ 0 & 0 & & \frac{1}{n(\log_2 n)^2} & \\ \vdots & \vdots & & & \ddots \end{bmatrix}

T is non-negative and trace-class, since \sum_n \frac{1}{n(\log_2 n)^2} converges, but one can show that T \log_2 T is not trace-class: the absolute values of its eigenvalues behave like \frac{1}{n\log_2 n}, whose sum diverges.

Theorem. Entropy is a unitary invariant.

In analogy with classical entropy (notice the similarity in the definitions), H(S) measures the amount of randomness in the state S. The more dispersed the eigenvalues are, the larger the system entropy. For a system in which the space H is finite-dimensional, entropy is maximized for the states S which in diagonal form have the representation

\begin{bmatrix} \frac{1}{n} & 0 & \cdots & 0 \\ 0 & \frac{1}{n} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{n} \end{bmatrix}

For such an S, H(S) = log2 n. The state S is called the maximally mixed state.
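
This value follows directly from the eigenvalue formula for the entropy, with all n eigenvalues equal to 1/n:

\operatorname{H}(S)=-\sum_{i=1}^{n}\frac{1}{n}\log_2\frac{1}{n}=n\cdot\frac{1}{n}\log_2 n=\log_2 n.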

Recall that a pure state is one of the form

S=|\psi\rangle\langle\psi|,

for ψ a vector of norm 1.

Theorem. H(S) = 0 if and only if S is a pure state.

This is because S is a pure state if and only if its diagonal form has exactly one non-zero entry, which is a 1.

Entropy can be used as a measure of quantum entanglement.

Gibbs canonical ensemble

See main article: canonical ensemble.

Consider an ensemble of systems described by a Hamiltonian H with average energy E. If H has pure-point spectrum and the eigenvalues E_n of H go to +∞ sufficiently fast, e^{-rH} will be a non-negative trace-class operator for every positive r.

The Gibbs canonical ensemble is described by the state

S=\frac{e^{-\beta H}}{\operatorname{Tr}(e^{-\beta H})},

where \beta is such that the ensemble average of the energy satisfies

\operatorname{Tr}(SH)=E

and

\operatorname{Tr}(e^{-\beta H})=\sum_n e^{-\beta E_n}=Z(\beta).

This is called the partition function; it is the quantum mechanical version of the canonical partition function of classical statistical mechanics. The probability that a system chosen at random from the ensemble will be in a state corresponding to energy eigenvalue

E_m

is

\mathcal{P}(E_m)=\frac{e^{-\beta E_m}}{\sum_n e^{-\beta E_n}}.

Under certain conditions, the Gibbs canonical ensemble maximizes the von Neumann entropy of the state subject to the energy conservation requirement.
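
The following sketch (hypothetical; it uses NumPy and SciPy with an arbitrarily chosen three-level Hamiltonian and an assumed value of β) builds the Gibbs state, the partition function Z(β), and the occupation probabilities P(E_m).

import numpy as np
from scipy.linalg import expm

beta = 1.0                               # inverse temperature (assumed value)
energies = np.array([0.0, 1.0, 2.0])     # eigenvalues E_n of a toy Hamiltonian
H = np.diag(energies)

# Gibbs canonical state S = exp(-beta H) / Tr(exp(-beta H)).
boltzmann = expm(-beta * H)
Z = np.trace(boltzmann).real             # partition function Z(beta) = sum_n exp(-beta E_n)
S = boltzmann / Z

E = np.trace(S @ H).real                 # ensemble average energy E = Tr(SH)
probs = np.exp(-beta * energies) / Z     # P(E_m) = exp(-beta E_m) / Z

print(Z, E)
print(probs, probs.sum())                # the probabilities sum to 1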

Grand canonical ensemble

See main article: grand canonical ensemble.

For open systems where the energy and the number of particles may fluctuate, the system is described by the grand canonical ensemble, with density matrix

\rho=\frac{e^{\beta\left(\sum_i \mu_i N_i - H\right)}}{\operatorname{Tr}\left(e^{\beta\left(\sum_i \mu_i N_i - H\right)}\right)},

where N_1, N_2, \ldots are the particle number operators for the different species of particles that are exchanged with the reservoir. Note that this is a density matrix that includes many more states (of varying N) than the canonical ensemble.

The grand partition function is

\mathcal{Z}(\beta,\mu_1,\mu_2,\ldots)=\operatorname{Tr}\left(e^{\beta\left(\sum_i \mu_i N_i - H\right)}\right).
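
As a minimal sketch (hypothetical, using NumPy and SciPy), consider a single fermionic mode of energy eps exchanging particles with a reservoir at chemical potential mu: the Fock space is two-dimensional, N = diag(0, 1) and H = eps N, and the grand canonical density matrix reproduces the Fermi–Dirac occupation 1/(e^{β(eps−μ)} + 1).

import numpy as np
from scipy.linalg import expm

beta, eps, mu = 1.0, 0.5, 0.2   # inverse temperature, mode energy, chemical potential (assumed values)

# Single fermionic mode: number operator and Hamiltonian on the 2-dimensional Fock space.
N = np.diag([0.0, 1.0])
H = eps * N

# Grand canonical density matrix rho = exp(beta (mu N - H)) / Tr(exp(beta (mu N - H))).
weight = expm(beta * (mu * N - H))
grand_Z = np.trace(weight).real          # grand partition function
rho = weight / grand_Z

n_avg = np.trace(rho @ N).real           # average occupation <N> = Tr(rho N)
print(n_avg, 1.0 / (np.exp(beta * (eps - mu)) + 1.0))   # both equal the Fermi-Dirac value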
