Robustness, the ability to withstand failures and perturbations, is a critical attribute of many complex systems including complex networks.
The study of robustness in complex networks is important for many fields. In ecology, robustness is an important attribute of ecosystems, and can give insight into the reaction to disturbances such as the extinction of species.[1] For biologists, network robustness can aid the study of diseases and mutations, and of how to recover from some mutations.[2] In economics, network robustness principles can aid understanding of the stability and risks of banking systems.[3] In engineering, network robustness can help evaluate the resilience of infrastructure networks such as the Internet or power grids.[4]
See main article: Percolation theory.
The focus of robustness in complex networks is the response of the network to the removal of nodes or links. The mathematical model of such a process can be thought of as an inverse percolation process. Percolation theory models the process of randomly placing pebbles on an n-dimensional lattice with probability p, and predicts the sudden formation of a single large cluster at a critical probability p_c.[5] Near the critical probability, the average cluster size ⟨s⟩ behaves as

\begin{align}
\langle s \rangle \sim \left| p - p_c \right|^{-\gamma_p}
\end{align}

We can see the average cluster size suddenly diverges around the critical probability, indicating the formation of a single large cluster. It is also important to note that the exponent γ_p is universal, i.e. independent of the particular lattice, while p_c is not.
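This divergence can be illustrated numerically. The sketch below (a minimal pure-Python illustration; the function name `percolate` and the lattice sizes are chosen for this example) runs site percolation on a 2-D square lattice with a union–find structure and reports the largest-cluster fraction, which jumps from negligible to system-spanning near the known 2-D site-percolation threshold p_c ≈ 0.593:

```python
import random

def percolate(n, p, seed=0):
    """Site percolation on an n x n square lattice.
    Returns the size of the largest occupied cluster."""
    rng = random.Random(seed)
    occupied = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    parent = list(range(n * n))

    def find(x):
        # Union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    # Join occupied sites to their occupied right/down neighbors
    for i in range(n):
        for j in range(n):
            if not occupied[i][j]:
                continue
            if i + 1 < n and occupied[i + 1][j]:
                union(i * n + j, (i + 1) * n + j)
            if j + 1 < n and occupied[i][j + 1]:
                union(i * n + j, i * n + j + 1)

    sizes = {}
    for i in range(n):
        for j in range(n):
            if occupied[i][j]:
                r = find(i * n + j)
                sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0)

# Largest-cluster fraction jumps sharply near p_c ~ 0.593 (2-D site percolation)
for p in (0.4, 0.55, 0.593, 0.65, 0.8):
    frac = percolate(200, p) / (200 * 200)
    print(f"p = {p:.3f}  largest cluster fraction = {frac:.3f}")
```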
The mathematical derivation for the threshold at which a complex network will lose its giant component is based on the Molloy–Reed criterion.[6]
\begin{align}
\kappa \equiv \frac{\langle k^2 \rangle}{\langle k \rangle} > 2
\end{align}
The Molloy–Reed criterion is derived from the basic principle that in order for a giant component to exist, on average each node in the network must have at least two links. This is analogous to each person holding two others' hands in order to form a chain. Using this criterion and an involved mathematical proof, one can derive a critical threshold for the fraction of nodes needed to be removed for the breakdown of the giant component of a complex network.[7]
\begin{align}
f_c = 1 - \frac{1}{\frac{\langle k^2 \rangle}{\langle k \rangle} - 1} = 1 - \frac{1}{\kappa - 1}
\end{align}
An important property of this finding is that the critical threshold depends only on the first and second moments of the degree distribution, and is valid for an arbitrary degree distribution.
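Because only the two moments matter, the threshold can be computed directly from any degree sequence. A minimal sketch (the helper name `critical_threshold` is invented for illustration):

```python
def critical_threshold(degrees):
    """Molloy-Reed estimate of the random-failure threshold f_c
    from an arbitrary degree sequence."""
    n = len(degrees)
    k1 = sum(degrees) / n                  # first moment <k>
    k2 = sum(d * d for d in degrees) / n   # second moment <k^2>
    kappa = k2 / k1
    return 1 - 1 / (kappa - 1)

# Homogeneous network: every node has degree 4 -> kappa = 4, f_c = 2/3
print(critical_threshold([4] * 1000))

# Adding a few hubs raises <k^2>, and therefore f_c
print(critical_threshold([4] * 990 + [100] * 10))
```

Note how a handful of hubs inflates the second moment and pushes the threshold toward 1, previewing the scale-free result below.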
See main article: Erdős–Rényi model.
Using ⟨k²⟩ = ⟨k⟩(⟨k⟩ + 1), which holds for the Poisson degree distribution of a random network, the critical threshold for an Erdős–Rényi random network is

\begin{align}
f_c^{ER} = 1 - \frac{1}{\langle k \rangle}
\end{align}
As a random network gets denser, the critical threshold increases, meaning a higher fraction of the nodes must be removed to disconnect the giant component.
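A small simulation can illustrate this prediction. The sketch below (pure Python with illustrative sizes; `er_giant_after_removal` is a made-up name) builds a G(n, p) random network with ⟨k⟩ = 4, removes a random fraction f of nodes, and reports the surviving largest-component size as a fraction of n; the predicted threshold is f_c = 1 − 1/4 = 0.75:

```python
import random

def er_giant_after_removal(n, k_avg, f, seed=1):
    """Build an Erdos-Renyi G(n, p) network, remove a random
    fraction f of nodes, and return the size of the largest
    surviving component as a fraction of the original n."""
    rng = random.Random(seed)
    p = k_avg / (n - 1)
    alive = [rng.random() >= f for _ in range(n)]
    parent = list(range(n))

    def find(x):
        # Union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Sample edges only between surviving nodes
    for i in range(n):
        for j in range(i + 1, n):
            if alive[i] and alive[j] and rng.random() < p:
                parent[find(i)] = find(j)

    sizes = {}
    for i in range(n):
        if alive[i]:
            r = find(i)
            sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values(), default=0) / n

k = 4.0
print(1 - 1 / k)  # predicted threshold f_c = 0.75
print(er_giant_after_removal(1000, k, 0.5))  # f < f_c: giant component survives
print(er_giant_after_removal(1000, k, 0.9))  # f > f_c: giant component destroyed
```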
See main article: Scale-free network.
By re-expressing the critical threshold as a function of the degree exponent γ for a scale-free network, we can draw a couple of important conclusions regarding scale-free network robustness.[8]
\begin{align}
f_c &= 1 - \frac{1}{\kappa - 1} \\
\kappa &= \frac{\langle k^2 \rangle}{\langle k \rangle} = \left| \frac{2 - \gamma}{3 - \gamma} \right| A \\
A &= K_{min}, \quad \gamma > 3 \\
A &= K_{max}^{3 - \gamma} K_{min}^{\gamma - 2}, \quad 3 > \gamma > 2 \\
A &= K_{max}, \quad 2 > \gamma > 1 \\
&\text{where } K_{max} = K_{min} N^{\frac{1}{\gamma - 1}}
\end{align}
For γ > 3, the critical threshold depends only on γ and the minimum degree K_min, and the network behaves like a random network, breaking apart when a finite fraction of its nodes is removed. For γ < 3, κ diverges in the limit N → ∞, so the critical threshold approaches 1 for large networks. This means almost all of the nodes must be removed to destroy the giant component: large scale-free networks are very robust with regard to random failures.
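This size dependence is easy to see numerically. The sketch below (the function name and parameter values are illustrative) evaluates κ and f_c directly from the formulas above, using the natural cutoff K_max = K_min N^(1/(γ−1)):

```python
def sf_threshold(gamma, k_min, n):
    """Random-failure threshold f_c for a scale-free network with
    degree exponent gamma (not equal to 3), minimum degree k_min,
    and n nodes, using the natural cutoff K_max = k_min * n**(1/(gamma-1))."""
    k_max = k_min * n ** (1 / (gamma - 1))
    ratio = abs((2 - gamma) / (3 - gamma))
    if gamma > 3:
        a = k_min
    elif gamma > 2:
        a = k_max ** (3 - gamma) * k_min ** (gamma - 2)
    else:
        a = k_max
    kappa = ratio * a
    return 1 - 1 / (kappa - 1)

# For gamma = 2.5 < 3, f_c creeps toward 1 as the network grows
for n in (10**3, 10**6, 10**9):
    print(n, round(sf_threshold(2.5, 2, n), 4))

# For gamma = 3.5 > 3, f_c is independent of network size
print(sf_threshold(3.5, 2, 10**3), sf_threshold(3.5, 2, 10**9))
```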
Although scale-free networks are resilient to random failures, we might imagine them being quite vulnerable to targeted hub removal. In this case we consider the robustness of scale-free networks in response to targeted attacks, performed with thorough prior knowledge of the network topology. By considering the changes induced by the removal of a hub, specifically the change in the maximum degree and the degrees of the connected nodes, we can derive another formula for the critical threshold considering targeted attacks on a scale-free network.[9]
\begin{align}
f_c^{\frac{2 - \gamma}{1 - \gamma}} = 2 + \frac{2 - \gamma}{3 - \gamma} K_{min} \left( f_c^{\frac{3 - \gamma}{1 - \gamma}} - 1 \right)
\end{align}
This equation cannot be solved analytically, but can be solved numerically. To summarize the important points: when γ is large, the network acts as a random network, and attack robustness becomes similar to the random-failure robustness of a random network. However, when γ is smaller, the critical threshold for attacks on scale-free networks becomes relatively small, indicating a weakness to targeted attacks.
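For example, the equation above can be solved by simple bisection on f ∈ (0, 1). A sketch (the helper name `attack_threshold` is invented here; it assumes γ is not 1 or 3, where the coefficients are undefined):

```python
def attack_threshold(gamma, k_min, tol=1e-12):
    """Numerically solve the targeted-attack threshold equation
    f**((2-g)/(1-g)) = 2 + (2-g)/(3-g) * k_min * (f**((3-g)/(1-g)) - 1)
    for f in (0, 1) by bisection. Assumes gamma != 1 and gamma != 3."""
    e1 = (2 - gamma) / (1 - gamma)
    e2 = (3 - gamma) / (1 - gamma)

    def g(f):
        return f ** e1 - 2 - (2 - gamma) / (3 - gamma) * k_min * (f ** e2 - 1)

    lo, hi = 1e-9, 1 - 1e-9
    if g(lo) * g(hi) > 0:
        raise ValueError("no sign change on (0, 1)")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# For gamma = 2.5, K_min = 2 the equation reduces to a quadratic
# in f**(1/3) with exact root f_c = (2 - sqrt(2))**3 ~ 0.201
print(attack_threshold(2.5, 2))
```

Note how small this threshold is compared with the near-1 random-failure threshold of the same network: removing roughly a fifth of the nodes, chosen as the largest hubs, suffices.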
For more detailed information on the attack tolerance of complex networks please see the attack tolerance page.
See main article: Cascading failure.
An important aspect of failures in many networks is that a single failure in one node might induce failures in neighboring nodes. When a small number of failures induces more failures, resulting in a large number of failures relative to the network size, a cascading failure has occurred. There are many models for cascading failures.[10] [11] [12] [13] [14] [15] [16] [17] These models differ in many details, and model different physical propagation phenomena, from power failures to information flow over Twitter, but they share some principles. Each model focuses on some sort of propagation or cascade; there is some threshold determining when a node will fail or activate and contribute towards propagation; and there is some mechanism by which propagation is directed when nodes fail or activate. All of these models predict some critical state, in which the distribution of the size of potential cascades matches a power law whose exponent is uniquely determined by the degree exponent of the underlying network. Because of the differences among the models and the consensus of this result, we are led to believe the underlying phenomenon is universal and model-independent.[8]
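As an illustration of the shared structure of these models, the sketch below implements a Watts-style threshold cascade (one family among the models cited above; the function name and parameter values are chosen for illustration) on a random network: a node activates once at least a fraction φ of its neighbors are active, and the final cascade size depends sharply on φ:

```python
import random

def cascade(n, k_avg, phi, seed=2):
    """Watts-style threshold cascade on an Erdos-Renyi graph:
    a node activates once at least a fraction phi of its neighbors
    are active. Seeds the highest-degree node and returns the
    final active fraction."""
    rng = random.Random(seed)
    p = k_avg / (n - 1)
    neighbors = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                neighbors[i].append(j)
                neighbors[j].append(i)

    active = [False] * n
    hub = max(range(n), key=lambda i: len(neighbors[i]))
    active[hub] = True
    frontier = [hub]
    while frontier:
        nxt = []
        for u in frontier:
            for v in neighbors[u]:
                if active[v]:
                    continue
                share = sum(active[w] for w in neighbors[v]) / len(neighbors[v])
                if share >= phi:
                    active[v] = True
                    nxt.append(v)
        frontier = nxt
    return sum(active) / n

# A low threshold lets a single seed trigger a global cascade;
# a high threshold keeps the failure local.
print(cascade(500, 4.0, 0.15))
print(cascade(500, 4.0, 0.60))
```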
For more detailed information on modeling cascading failures, see the global cascades model page.