Biological network inference is the process of making inferences and predictions about biological networks.[1] By using these networks to analyze patterns in biological systems, such as food webs, we can visualize the nature and strength of interactions between species, DNA, proteins, and more.
The analysis of biological networks with respect to diseases has led to the development of the field of network medicine.[2] Recent examples of the application of network theory in biology include applications to understanding the cell cycle[3] as well as a quantitative framework for developmental processes.[4] Good network inference requires proper planning and execution of an experiment, thereby ensuring quality data acquisition. Optimal experimental design refers, in principle, to the use of statistical and/or mathematical concepts to plan data acquisition. This must be done in such a way that the data's information content is enriched, and a sufficient amount of data is collected, with enough technical and biological replicates where necessary.
The modeling of biological networks follows a general iterative cycle of experimental design, data acquisition, and inference.
A network is a set of nodes and a set of directed or undirected edges between the nodes. Many types of biological networks exist, including transcriptional, signalling and metabolic. Few such networks are known in anything approaching their complete structure, even in the simplest bacteria. Still less is known about the parameters governing the behavior of such networks over time, how networks at different levels in a cell interact, and how to predict the complete state description of a eukaryotic cell or bacterial organism at a given point in the future. Systems biology, in this sense, is still in its infancy.
There is great interest in network medicine for the modelling of biological systems. This article focuses on inference of biological network structure using the growing sets of high-throughput expression data for genes, proteins, and metabolites.[9] Briefly, methods using high-throughput data for inference of regulatory networks rely on searching for patterns of partial correlation or conditional probabilities that indicate causal influence.[10] [11] Such patterns of partial correlations found in the high-throughput data, possibly combined with other supplemental data on the genes or proteins in the proposed networks, or combined with other information on the organism, form the basis upon which such algorithms work. Such algorithms can be of use in inferring the topology of any network where the change in state of one node can affect the state of other nodes.
Genes are the nodes and the edges are directed. A gene serves as the source of a direct regulatory edge to a target gene by producing an RNA or protein molecule that functions as a transcriptional activator or inhibitor of the target gene. If the gene is an activator, then it is the source of a positive regulatory connection; if an inhibitor, then it is the source of a negative regulatory connection. Computational algorithms take as primary input data measurements of mRNA expression levels of the genes under consideration for inclusion in the network, returning an estimate of the network topology. Such algorithms are typically based on linearity, independence or normality assumptions, which must be verified on a case-by-case basis.[12] Clustering or some form of statistical classification is typically employed to perform an initial organization of the high-throughput mRNA expression values derived from microarray experiments, in particular to select sets of genes as candidates for network nodes.[13] The question then arises: how can the clustering or classification results be connected to the underlying biology? Such results can be useful for pattern classification – for example, to classify subtypes of cancer, or to predict differential responses to a drug (pharmacogenomics). But to understand the relationships between the genes, that is, to more precisely define the influence of each gene on the others, the scientist typically attempts to reconstruct the transcriptional regulatory network.
See main article: Gene Co-Expression Network.
A gene co-expression network is an undirected graph, where each node corresponds to a gene, and a pair of nodes is connected with an edge if there is a significant co-expression relationship between them.
See main article: Signal transduction.
Signal transduction networks use proteins for the nodes and directed edges to represent interactions in which the biochemical conformation of the child is modified by the action of the parent (e.g. mediated by phosphorylation, ubiquitylation, methylation, etc.). Primary input into the inference algorithm would be data from a set of experiments measuring protein activation / inactivation (e.g., phosphorylation / dephosphorylation) across a set of proteins. Inference for such signalling networks is complicated by the fact that total concentrations of signalling proteins will fluctuate over time due to transcriptional and translational regulation. Such variation can lead to statistical confounding. Accordingly, more sophisticated statistical techniques must be applied to analyse such datasets,[14] which is particularly important in the biology of cancer.
See main article: Metabolic Network.
Metabolite networks use nodes to represent chemical reactions and directed edges for the metabolic pathways and regulatory interactions that guide these reactions. Primary input into an algorithm would be data from a set of experiments measuring metabolite levels.
See main article: Interactome.
Among the most intensely studied networks in biology, protein–protein interaction networks (PINs) visualize the physical relationships between proteins inside a cell. In a PIN, proteins are the nodes and their interactions are the undirected edges. PINs can be discovered with a variety of methods, including two-hybrid screening, in vitro co-immunoprecipitation,[15] blue native gel electrophoresis,[16] and more.[17]
See main article: Neural Network.
A neuronal network represents neurons as nodes and synapses as edges, which are typically weighted and directed. The weights of edges are usually adjusted by the activation of connected nodes. The network is usually organized into input layers, hidden layers, and output layers.
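A single node of such a network can be sketched as a weighted sum of its inputs passed through an activation function. The inputs, weights, and bias below are arbitrary illustrative values:

```python
import math

def neuron(inputs, weights, bias):
    """One node: weighted sum of inputs passed through a sigmoid activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two inputs feeding one node; output lies in (0, 1).
print(round(neuron([1.0, 0.5], [0.4, -0.2], 0.1), 4))
```

Learning then amounts to adjusting the weights and bias so that the node's output matches training data, typically by gradient descent.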
See main article: Food Web.
A food web is an interconnected directional graph of what eats what in an ecosystem. The members of the ecosystem are the nodes, and if a member eats another member then there is a directed edge between those two nodes.
These networks are defined by a set of pairwise interactions between and within species, which is used to understand the structure and function of larger ecological networks.[18] By using network analysis we can discover and understand how these interactions link together within the system's network. It also allows us to quantify associations between individuals, making it possible to infer details about the network as a whole at the species and/or population level.[19]
See main article: Chromatin.
DNA-DNA chromatin networks are used to clarify the activation or suppression of genes via the relative location of strands of chromatin. These interactions can be understood by analyzing commonalities among different loci (a locus is a fixed position on a chromosome where a particular gene or genetic marker is located). Network analysis can provide vital support in understanding relationships among different areas of the genome.
See main article: Gene regulatory network.
A gene regulatory network[20] is a set of molecular regulators that interact with each other and with other substances in the cell. The regulators can be DNA, RNA, proteins and complexes of these. Gene regulatory networks can be modeled in numerous ways, including coupled ordinary differential equations, Boolean networks, continuous networks, and stochastic gene networks.
The initial data used to make the inference can have a huge impact on the accuracy of the final inference. Network data is inherently noisy and incomplete, sometimes because evidence from multiple sources does not overlap or is contradictory. Data can be sourced in multiple ways, including manual curation of the scientific literature into databases, high-throughput datasets, computational predictions, and text mining of scholarly articles from before the digital era.
A network's diameter is the maximum number of steps separating any two nodes. It can be used to determine how well connected a graph is, and it features in topology analysis and clustering analysis.
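For a small connected, unweighted graph, the diameter can be computed with breadth-first search from every node; the toy adjacency list below is made up for illustration:

```python
from collections import deque

def bfs_distances(adj, start):
    """Unweighted shortest-path lengths from start via breadth-first search."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def diameter(adj):
    """Longest shortest path over all node pairs (assumes a connected graph)."""
    return max(max(bfs_distances(adj, n).values()) for n in adj)

# Toy undirected network: a path A-B-C-D plus a shortcut B-D.
adj = {
    "A": ["B"],
    "B": ["A", "C", "D"],
    "C": ["B", "D"],
    "D": ["C", "B"],
}
print(diameter(adj))  # the farthest pair (A and C, or A and D) is 2 steps apart
```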
The transitivity or clustering coefficient of a network is a measure of the tendency of the nodes to cluster together. High transitivity means that the network contains communities or groups of nodes that are densely connected internally. In biological networks, finding these communities is very important, because they can reflect functional modules and protein complexes.[21] The uncertainty about the connectivity may distort the results and should be taken into account when transitivity and other topological descriptors are computed for inferred networks.[8]
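A minimal sketch of the global clustering coefficient, computed as the fraction of length-2 paths that close into triangles; the toy graph is invented for illustration:

```python
def transitivity(adj):
    """Global clustering coefficient: fraction of length-2 paths closing into triangles."""
    triangles = 0   # closed ordered triplets (each triangle counted six times)
    triples = 0     # ordered paths of length 2, centred on each node
    for v, nbrs in adj.items():
        for a in nbrs:
            for b in nbrs:
                if a == b:
                    continue
                triples += 1
                if b in adj[a]:
                    triangles += 1
    return triangles / triples if triples else 0.0

# Triangle A-B-C with a pendant node D attached to C.
adj = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C"},
}
print(transitivity(adj))  # 3 closed triples out of 5 connected triples -> 0.6
```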
Network confidence is a way to measure how sure one can be that the network represents a real biological interaction. We can do this via contextual biological information, by counting the number of times an interaction is reported in the literature, or by grouping different strategies into a single score. The MIscore method for assessing the reliability of protein–protein interaction data is based on the use of standards.[22] MIscore gives an estimation of confidence by weighting all available evidence for an interacting pair of proteins. The method allows weighting of evidence provided by different sources, provided the data is represented following the standards created by the IMEx consortium. The weights are based on the number of publications, the detection method, and the interaction evidence type.
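The exact MIscore parameters are defined by the method's authors; the sketch below only illustrates the general idea of combining weighted evidence categories into a single confidence value, with invented weights and scores:

```python
# Illustrative category weights (NOT the actual MIscore parameters).
WEIGHTS = {"publications": 0.4, "method": 0.3, "evidence_type": 0.3}

def interaction_confidence(scores):
    """Combine per-category evidence scores (each in [0, 1]) into one confidence value."""
    total = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return round(total, 3)

# An interaction reported often, by a reliable method, with direct evidence.
print(interaction_confidence({"publications": 0.9, "method": 0.8, "evidence_type": 1.0}))
```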
See main article: Closeness.
Closeness, a.k.a. closeness centrality, is a measure of centrality in a network and is calculated as the reciprocal of the sum of the length of the shortest paths between the node and all other nodes in the graph. This measure can be used to make inferences in all graph types and analysis methods.
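Following the definition above (reciprocal of the summed shortest-path lengths), closeness can be sketched for a small connected, unweighted graph; the star network below is illustrative:

```python
from collections import deque

def closeness(adj, node):
    """Closeness centrality: reciprocal of summed shortest-path lengths to all other nodes."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        v = queue.popleft()
        for nbr in adj[v]:
            if nbr not in dist:
                dist[nbr] = dist[v] + 1
                queue.append(nbr)
    return 1 / sum(dist.values())

# A star network: the hub is closest to everything.
adj = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
print(closeness(adj, "hub"))  # 1 / (1 + 1 + 1)
print(closeness(adj, "a"))    # 1 / (1 + 2 + 2)
```

As expected, the hub scores higher than any leaf, reflecting its central position.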
See main article: Betweenness.
Betweenness, a.k.a. betweenness centrality, is a measure of centrality in a graph based on shortest paths. The betweenness of each node is the number of these shortest paths that pass through the node.
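Using this simple definition, betweenness can be computed by brute force on a small connected, unweighted graph: for every other pair of nodes, count the shortest paths that route through the node of interest. The toy graph below is illustrative:

```python
from collections import deque
from itertools import permutations

def bfs(adj, start):
    """Shortest-path lengths and shortest-path counts from start (unweighted BFS)."""
    dist, count = {start: 0}, {start: 1}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for nbr in adj[v]:
            if nbr not in dist:
                dist[nbr] = dist[v] + 1
                count[nbr] = count[v]
                queue.append(nbr)
            elif dist[nbr] == dist[v] + 1:
                count[nbr] += count[v]
    return dist, count

def betweenness(adj, node):
    """Number of shortest paths between other node pairs that pass through `node`."""
    info = {s: bfs(adj, s) for s in adj}
    total = 0
    for s, t in permutations(adj, 2):
        if node in (s, t):
            continue
        dist_s, cnt_s = info[s]
        dist_t, cnt_t = info[t]
        # node lies on a shortest s-t path iff distances through it add up exactly.
        if dist_s[node] + dist_t[node] == dist_s[t]:
            total += cnt_s[node] * cnt_t[node]
    return total // 2  # each undirected pair was counted in both directions

# Path graph A-B-C: every shortest path between A and C runs through B.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(betweenness(adj, "B"))
```

Production code would use Brandes' algorithm instead of this all-pairs enumeration, but the brute-force version matches the definition directly.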
See main article: Network Theory.
For our purposes, network analysis is closely related to graph theory. By measuring the attributes in the previous section we can utilize many different techniques to create accurate inferences based on biological data.
Topology analysis analyzes the topology of a network to identify relevant participants and substructures that may be of biological significance. The term encompasses an entire class of techniques, such as network motif search, centrality analysis, topological clustering, and shortest-path analysis. These are but a few examples; each of these techniques uses the general idea of focusing on the topology of a network to make inferences.
A motif is defined as a frequent and unique sub-graph. By counting all the possible instances, listing all patterns, and testing isomorphisms, we can derive crucial information about a network. Motifs are suggested to be the basic building blocks of complex biological networks. Computational research has focused on improving existing motif-detection tools to assist biological investigations and allow larger networks to be analyzed, and several different algorithms have been developed for this purpose.
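The simplest motif search is counting triangles, i.e. three mutually connected nodes. A brute-force sketch for a small undirected graph (toy data):

```python
from itertools import combinations

def count_triangles(adj):
    """Count the triangle motif: three nodes that are all pairwise connected."""
    triangles = 0
    for a, b, c in combinations(sorted(adj), 3):
        if b in adj[a] and c in adj[a] and c in adj[b]:
            triangles += 1
    return triangles

# Two triangles sharing the edge B-C.
adj = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}
print(count_triangles(adj))  # A-B-C and B-C-D
```

Real motif-detection tools such as FANMOD generalize this to larger sub-graphs and compare the counts against randomized networks to judge significance.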
Centrality gives an estimation of how important a node or edge is for the connectivity or the information flow of the network. It is a useful parameter in signalling networks, and it is often used when trying to find drug targets.[23] It is most commonly used in PINs to determine important proteins and their functions. Centrality can be measured in different ways depending on the graph and the question that needs answering; measures include node degree (the number of edges connected to a node), global centrality measures, and random-walk measures such as that used by the Google PageRank algorithm to assign weight to each webpage.[24] Centrality measures may be affected by errors due to measurement noise and other causes.[25] Therefore, topological descriptors should be defined as random variables with an associated probability distribution encoding the uncertainty in their value.
Topological clustering or topological data analysis (TDA) provides a general framework to analyze high-dimensional, incomplete, and noisy data in a way that reduces dimensionality and gives robustness to noise. The idea is that the shape of data sets contains relevant information. When this information is a homology group, there is a mathematical interpretation that assumes that features persisting for a wide range of parameters are "true" features and features persisting for only a narrow range of parameters are noise, although the theoretical justification for this is unclear.[26] This technique has been used for progression analysis of disease,[27] [28] viral evolution,[29] propagation of contagions on networks,[30] bacteria classification using molecular spectroscopy,[31] and much more, both within and outside biology.
The shortest path problem is a common problem in graph theory that tries to find the path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized. This method can be used to determine the network diameter or redundancy in a network. There are many algorithms for this, including Dijkstra's algorithm, the Bellman–Ford algorithm, and the Floyd–Warshall algorithm, to name a few.
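A compact sketch of Dijkstra's algorithm on a small weighted graph (edge weights invented for illustration):

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest weighted path from source to target; returns (cost, path)."""
    queue = [(0, source, [source])]  # priority queue ordered by accumulated cost
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, weight in graph[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
    return float("inf"), []  # target unreachable

# Weighted toy network: the direct A-D edge is costlier than the detour.
graph = {
    "A": {"B": 1, "D": 10},
    "B": {"A": 1, "C": 2},
    "C": {"B": 2, "D": 3},
    "D": {"A": 10, "C": 3},
}
print(dijkstra(graph, "A", "D"))  # cost 6 via A-B-C-D, beating the direct edge
```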
Cluster analysis groups objects (nodes) such that objects in the same cluster are more similar to each other than to those in other clusters. It can be used for pattern recognition, image analysis, information retrieval, statistical data analysis, and much more, with applications in plant and animal ecology, sequence analysis, antimicrobial activity analysis, and many other fields. Cluster analysis algorithms come in many forms, such as hierarchical clustering, k-means clustering, distribution-based clustering, density-based clustering, and grid-based clustering.
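A minimal sketch of k-means on one-dimensional values (e.g. expression levels), with made-up data and starting centres:

```python
from statistics import mean

def kmeans_1d(values, centers, rounds=10):
    """Plain k-means on scalars: assign each value to its nearest centre, then recompute."""
    clusters = [[] for _ in centers]
    for _ in range(rounds):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # An empty cluster keeps its old centre.
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

# Two obvious groups of values, with deliberately poor starting centres.
values = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
centers, clusters = kmeans_1d(values, centers=[0.0, 10.0])
print(centers)  # the centres converge onto the two groups
```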
Gene annotation databases are commonly used to evaluate the functional properties of experimentally derived gene sets. Annotation Enrichment Analysis (AEA) is used to overcome biases introduced by the overlap-based statistical methods used to assess these associations.[32] It does this by using gene/protein annotations to infer which annotations are over-represented in a list of genes/proteins taken from a network.
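A simplified illustration of over-representation: compare an annotation's frequency in the gene list with its background frequency (fold enrichment). Real enrichment analyses add significance testing and annotation-aware corrections; the numbers below are hypothetical:

```python
def fold_enrichment(annotated_in_list, list_size, annotated_in_background, background_size):
    """Ratio of an annotation's frequency in the gene list to its background frequency."""
    list_freq = annotated_in_list / list_size
    background_freq = annotated_in_background / background_size
    return list_freq / background_freq

# Hypothetical counts: 10 of 50 listed genes carry the term, vs 100 of 20,000 genes overall.
print(fold_enrichment(10, 50, 100, 20000))  # the term is 40x over-represented
```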
Network type | Software and methods
Transcriptional regulatory networks | FANMOD,[33] ChIP-on-chip,[34] position–weight matrices, AlignACE, MDScan, MEME, REDUCE
Gene co-expression networks | FANMOD, paired design,[35] WGCNA[36]
Signal transduction | FANMOD, PathLinker[37]
Metabolic networks | FANMOD, Pathway Tools, Ergo, KEGGtranslator, ModelSEED
Protein–protein interaction networks | FANMOD, NETBOX,[38] text mining,[39] STRING
Neuronal networks | FANMOD, Neural Designer, Neuroph, Darknet
Food webs | FANMOD, RCN, R
Within-species and between-species interaction networks | FANMOD, NETBOX
DNA–DNA chromatin networks | FANMOD
Gene regulatory networks | FANMOD