Soft configuration model

In applied mathematics, the soft configuration model (SCM) is a random graph model subject to the principle of maximum entropy under constraints on the expectation of the degree sequence of sampled graphs.[1] Whereas the configuration model (CM) uniformly samples random graphs of a specific degree sequence, the SCM only retains the specified degree sequence on average over all network realizations; in this sense the SCM has very relaxed constraints relative to those of the CM ("soft" rather than "sharp" constraints[2]). The SCM for graphs of size $n$ has a nonzero probability of sampling any graph of size $n$, whereas the CM is restricted to only graphs having precisely the prescribed connectivity structure.

Model formulation

The SCM is a statistical ensemble of random graphs $G$ having $n$ vertices ($n = |V(G)|$) labeled $\{v_j\}_{j=1}^n = V(G)$, producing a probability distribution on $\mathcal{G}_n$ (the set of graphs of size $n$). Imposed on the ensemble are $n$ constraints, namely that the ensemble average of the degree $k_j$ of vertex $v_j$ is equal to a designated value $\widehat{k}_j$, for all $v_j \in V(G)$. The model is fully parameterized by its size $n$ and expected degree sequence $\{\widehat{k}_j\}_{j=1}^n$. These constraints are both local (one constraint associated with each vertex) and soft (constraints on the ensemble average of certain observable quantities), and thus yield a canonical ensemble with an extensive number of constraints. The conditions $\langle k_j \rangle = \widehat{k}_j$ are imposed on the ensemble by the method of Lagrange multipliers (see Maximum-entropy random graph model).
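
To make the soft constraints concrete, here is a minimal numerical sketch (an illustration, not taken from the cited references): it maximizes the Gibbs entropy over all graphs on $n = 3$ vertices subject only to the average-degree constraints, using scipy.optimize; the expected degree sequence is a made-up example. Consistent with the lead, every graph in $\mathcal{G}_3$ retains nonzero probability.

    import itertools

    import numpy as np
    from scipy.optimize import minimize

    n = 3
    pairs = list(itertools.combinations(range(n), 2))            # candidate edges
    graphs = list(itertools.product([0, 1], repeat=len(pairs)))  # all of G_n
    # degree k_q(G) of each vertex q in every graph G
    k = np.array([[sum(a for (i, j), a in zip(pairs, g) if q in (i, j))
                   for q in range(n)]
                  for g in graphs])

    k_hat = np.array([1.2, 0.8, 1.0])  # hypothetical expected degree sequence

    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)           # guard against log(0)
        return float(np.sum(p * np.log(p)))  # minimizing -S maximizes S

    constraints = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]  # normalization
    for q in range(n):  # soft constraints <k_q> = k_hat_q
        constraints.append(
            {"type": "eq", "fun": lambda p, q=q: p @ k[:, q] - k_hat[q]})

    p0 = np.full(len(graphs), 1.0 / len(graphs))
    res = minimize(neg_entropy, p0, bounds=[(0.0, 1.0)] * len(graphs),
                   constraints=constraints)
    print(res.x.min())  # positive (up to solver tolerance): no graph is excluded

The resulting maximizer already takes the exponential form derived in the next section.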

Derivation of the probability distribution

The probability $P_{\text{SCM}}(G)$ of the SCM producing a graph $G$ is determined by maximizing the Gibbs entropy $S[G]$ subject to the constraints $\langle k_j \rangle = \widehat{k}_j,\ j = 1, \ldots, n$ and normalization $\sum_{G \in \mathcal{G}_n} P_{\text{SCM}}(G) = 1$. This amounts to optimizing the multi-constraint Lagrange function below:

\begin{align}
& \mathcal{L}\left(\alpha, \{\psi_j\}_{j=1}^n\right) \\[6pt]
={} & -\sum_{G \in \mathcal{G}_n} P_{\text{SCM}}(G) \log P_{\text{SCM}}(G) + \alpha\left(1 - \sum_{G \in \mathcal{G}_n} P_{\text{SCM}}(G)\right) + \sum_{j=1}^n \psi_j \left(\widehat{k}_j - \sum_{G \in \mathcal{G}_n} P_{\text{SCM}}(G) \, k_j(G)\right),
\end{align}

where $\alpha$ and $\{\psi_j\}_{j=1}^n$ are the $n+1$ multipliers to be fixed by the $n+1$ constraints (normalization and the expected degree sequence). Setting to zero the derivative of the above with respect to $P_{\text{SCM}}(G)$ for an arbitrary $G \in \mathcal{G}_n$ yields

\[
0 = \frac{\partial \mathcal{L}\left(\alpha, \{\psi_j\}_{j=1}^n\right)}{\partial P_{\text{SCM}}(G)} = -\log P_{\text{SCM}}(G) - 1 - \alpha - \sum_{j=1}^n \psi_j k_j(G) \ \Rightarrow \ P_{\text{SCM}}(G) = \frac{1}{Z} \exp\left[-\sum_{j=1}^n \psi_j k_j(G)\right],
\]

the constant

\[
Z := e^{\alpha + 1} = \sum_{G \in \mathcal{G}_n} \exp\left[-\sum_{j=1}^n \psi_j k_j(G)\right] = \prod_{1 \le i < j \le n} \left(1 + e^{-(\psi_i + \psi_j)}\right)
\]

being the partition function normalizing the distribution;[3] the above exponential expression applies to all $G \in \mathcal{G}_n$, and thus is the probability distribution.
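
Since $Z$ factorizes over vertex pairs, each edge $(i, j)$ is present independently with probability $p_{ij} = 1/(e^{\psi_i + \psi_j} + 1)$ (compare the expected-degree expression below), so sampling from the SCM is straightforward. A minimal sketch, with hypothetical multiplier values:

    import numpy as np

    rng = np.random.default_rng(0)

    psi = np.array([-0.5, 0.2, 0.0, 0.7])  # hypothetical Lagrange multipliers
    n = len(psi)

    # Each edge (i, j) is present independently with probability
    # 1 / (e^{psi_i + psi_j} + 1), as implied by the product form of Z.
    p = 1.0 / (np.exp(psi[:, None] + psi[None, :]) + 1.0)

    upper = np.triu(rng.random((n, n)) < p, k=1)  # sample the upper triangle
    adj = upper | upper.T                         # symmetrize: simple undirected graph
    print(adj.astype(int))
    print(adj.sum(axis=1))                        # realized (not expected) degrees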

Hence we have an exponential family parameterized by $\{\psi_j\}_{j=1}^n$, which are related to the expected degree sequence $\{\widehat{k}_j\}_{j=1}^n$ by the following equivalent expressions:

\[
\langle k_q \rangle = \sum_{G \in \mathcal{G}_n} k_q(G) \, P_{\text{SCM}}(G) = -\frac{\partial \log Z}{\partial \psi_q} = \sum_{j \ne q} \frac{1}{e^{\psi_q + \psi_j} + 1} = \widehat{k}_q, \quad q = 1, \ldots, n.
\]
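
In practice the multipliers are fixed by solving this system of $n$ equations numerically for a given $\{\widehat{k}_j\}_{j=1}^n$. A minimal sketch, assuming a small hypothetical degree sequence and using standard root finding from scipy:

    import numpy as np
    from scipy.optimize import root

    k_hat = np.array([2.5, 1.5, 1.0, 1.0, 2.0])  # hypothetical expected degrees
    n = len(k_hat)

    def residual(psi):
        # <k_q> = sum_{j != q} 1 / (e^{psi_q + psi_j} + 1), minus the target
        terms = 1.0 / (np.exp(psi[:, None] + psi[None, :]) + 1.0)
        np.fill_diagonal(terms, 0.0)  # exclude j = q (no self-loops)
        return terms.sum(axis=1) - k_hat

    sol = root(residual, x0=np.zeros(n))
    psi = sol.x
    print(residual(psi) + k_hat)  # reproduces k_hat up to solver tolerance

The solved $\psi$ values can then be used in the edge probabilities above to sample graphs whose degrees match the targets on average.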

Notes and References

  1. van der Hoorn, Pim; Lippner, Gabor; Krioukov, Dmitri (2017-10-10). "Sparse Maximum-Entropy Random Graphs with a Given Power-Law Degree Distribution". arXiv:1705.10261.
  2. Garlaschelli, Diego; den Hollander, Frank; Roccaverde, Andrea (January 30, 2018). "Covariance structure behind breaking of ensemble equivalence in random graphs". Retrieved September 14, 2018. Archived February 4, 2023: https://web.archive.org/web/20230204161609/http://eprints.imtlucca.it/4040/1/1711.04273.pdf
  3. Park, Juyong; Newman, M. E. J. (2004-05-25). "The statistical mechanics of networks". arXiv:cond-mat/0405566.