Statistical potential

In protein structure prediction, statistical potentials or knowledge-based potentials are scoring functions derived from an analysis of known protein structures in the Protein Data Bank (PDB).

The original method to obtain such potentials is the quasi-chemical approximation, due to Miyazawa and Jernigan.[1] It was later followed by the potential of mean force (statistical PMF), developed by Sippl.[2] Although the obtained scores are often considered as approximations of the free energy—thus referred to as pseudo-energies—this physical interpretation is incorrect.[3] [4] Nonetheless, they are applied with success in many cases, because they frequently correlate with actual Gibbs free energy differences.[5]

Overview

Possible features to which a pseudo-energy can be assigned include interatomic distances, torsion angles, solvent exposure, and hydrogen bond geometry.

The classic application is, however, based on pairwise amino acid contacts or distances, thus producing statistical interatomic potentials. For pairwise amino acid contacts, a statistical potential is formulated as an interaction matrix that assigns a weight or energy value to each possible pair of standard amino acids. The energy of a particular structural model is then the combined energy of all pairwise contacts (defined as two amino acids within a certain distance of each other) in the structure. The energies are determined using statistics on amino acid contacts in a database of known protein structures (obtained from the PDB).
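As a rough illustration of this classic formulation, the following sketch scores a structural model by summing interaction-matrix entries over all residue pairs in contact. It is a minimal sketch only: the hypothetical 20x20 energy matrix, the use of one representative coordinate per residue, and the 8 Å contact cutoff are illustrative assumptions, not part of any particular published potential.

import numpy as np

# Minimal sketch of a pairwise contact potential (illustrative only).
# energy_matrix: hypothetical 20x20 array of pseudo-energies indexed by residue type.
# coords: one representative coordinate per residue (e.g. the C-beta atom), shape (N, 3).
# seq: integer index of each residue's amino acid type into energy_matrix.
def contact_energy(coords, seq, energy_matrix, cutoff=8.0):
    """Sum pseudo-energies over all residue pairs closer than `cutoff` (Angstrom)."""
    total = 0.0
    n = len(seq)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(coords[i] - coords[j]) < cutoff:
                total += energy_matrix[seq[i], seq[j]]
    return total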

History

Initial development

Many textbooks present the statistical PMFs as proposed by Sippl as a simple consequence of the Boltzmann distribution, as applied to pairwise distances between amino acids. This is incorrect, but it is a useful starting point for introducing the construction of the potential in practice. The Boltzmann distribution applied to a specific pair of amino acids is given by:

P(r) = \frac{1}{Z} e^{-F(r)/kT}

where r is the distance, k is the Boltzmann constant, T is the temperature and Z is the partition function, with

Z = \int e^{-F(r)/kT} \, dr

The quantity F(r) is the free energy assigned to the pairwise system. Simple rearrangement results in the inverse Boltzmann formula, which expresses the free energy F(r) as a function of P(r):

F(r) = -kT \ln P(r) - kT \ln Z

To construct a PMF, one then introduces a so-called reference state with a corresponding distribution Q_R and partition function Z_R, and calculates the following free energy difference:

\Delta F(r) = -kT \ln \frac{P(r)}{Q_R(r)} - kT \ln \frac{Z}{Z_R}
The reference state typically results from a hypothetical system in which the specific interactions between the amino acids are absent. The second term involving Z and Z_R can be ignored, as it is a constant.

In practice, P(r) is estimated from the database of known protein structures, while Q_R(r) typically results from calculations or simulations. For example, P(r) could be the conditional probability of finding the C\beta atoms of a valine and a serine at a given distance r from each other, giving rise to the free energy difference \Delta F. The total free energy difference of a protein, \Delta F_{\textrm{T}}, is then claimed to be the sum of all the pairwise free energies:

\Delta F_{\textrm{T}} = \sum_{i<j} \Delta F(r_{ij} \mid a_i, a_j) = -kT \sum_{i<j} \ln \frac{P(r_{ij} \mid a_i, a_j)}{Q_R(r_{ij} \mid a_i, a_j)}

where the sum runs over all amino acid pairs a_i, a_j (with i < j) and r_{ij} is their corresponding distance. In many studies Q_R does not depend on the amino acid sequence.[6]
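In practice the construction above amounts to binning observed distances and applying the inverse Boltzmann formula to two histograms. The following sketch illustrates only that step; the value of kT, the pseudocount used to avoid taking the logarithm of zero, and the histogram inputs are assumptions made for illustration.

import numpy as np

# Minimal sketch of the inverse Boltzmann construction (illustrative only).
# pair_counts[b]: number of observed distances in bin b for one amino acid pair,
#                 harvested from a database of known structures.
# ref_counts[b]:  corresponding counts for the chosen reference state.
def inverse_boltzmann(pair_counts, ref_counts, kT=0.6, pseudo=1e-6):
    """Return the binned pseudo-energy Delta F(r) = -kT * ln(P(r) / Q_R(r))."""
    p = np.asarray(pair_counts, dtype=float)
    q = np.asarray(ref_counts, dtype=float)
    p = p / p.sum()   # observed distance distribution P(r)
    q = q / q.sum()   # reference distribution Q_R(r)
    return -kT * np.log((p + pseudo) / (q + pseudo))  # pseudocount guards against log(0)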

Conceptual issues

Intuitively, it is clear that a low value for \Delta F_{\textrm{T}} indicates that the set of distances in a structure is more likely in proteins than in the reference state. However, the physical meaning of these statistical PMFs has been widely disputed since their introduction.[7] [8] The main issues are:
  1. The wrong interpretation of this "potential" as a true, physically valid potential of mean force;
  2. The nature of the so-called reference state and its optimal formulation;
  3. The validity of generalizations beyond pairwise distances.

Controversial analogy

For liquids, the potential of mean force is related to the radial distribution function g(r), which is given by:[10]

g(r) = \frac{P(r)}{Q_R(r)}

where P(r) and Q_R(r) are the respective probabilities of finding two particles at a distance r from each other in the liquid and in the reference state. For liquids, the reference state is clearly defined; it corresponds to the ideal gas, consisting of non-interacting particles. The two-particle potential of mean force W(r) is related to g(r) by:

W(r) = -kT \log g(r) = -kT \log \frac{P(r)}{Q_R(r)}

According to the reversible work theorem, the two-particle potential of mean force W(r) is the reversible work required to bring two particles in the liquid from infinite separation to a distance r from each other.

Sippl justified the use of statistical PMFs—a few years after he introduced them for use in protein structure prediction—by appealing to the analogy with the reversible work theorem for liquids. For liquids, g(r) can be experimentally measured using small angle X-ray scattering; for proteins, P(r) is obtained from the set of known protein structures, as explained in the previous section. However, as Ben-Naim wrote in a publication on the subject:
[...] the quantities, referred to as "statistical potentials," "structure based potentials," or "pair potentials of mean force", as derived from the protein data bank (PDB), are neither "potentials" nor "potentials of mean force," in the ordinary sense as used in the literature on liquids and solutions.
Moreover, this analogy does not solve the issue of how to specify a suitable reference state for proteins.

Machine learning

In the mid-2000s, authors started to combine multiple statistical potentials, derived from different structural features, into composite scores.[11] For that purpose, they used machine learning techniques, such as support vector machines (SVMs). Probabilistic neural networks (PNNs) have also been applied for the training of a position-specific distance-dependent statistical potential.[12] In 2016, the DeepMind artificial intelligence research laboratory started to apply deep learning techniques to the development of a torsion- and distance-dependent statistical potential.[13] The resulting method, named AlphaFold, won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) by correctly predicting the most accurate structure for 25 out of 43 free modelling domains.

Explanation

Bayesian probability

Baker and co-workers [14] justified statistical PMFs from a Bayesian point of view and used these insights in the construction of the coarse grained ROSETTA energy function. According to Bayesian probability calculus, the conditional probability P(X \mid A) of a structure X, given the amino acid sequence A, can be written as:

P(X \mid A) = \frac{P(A \mid X) P(X)}{P(A)} \propto P(A \mid X) P(X)

P(X \mid A) is proportional to the product of the likelihood P(A \mid X) times the prior P(X). By assuming that the likelihood can be approximated as a product of pairwise probabilities, and applying Bayes' theorem, the likelihood can be written as:

P(A \mid X) \approx \prod_{i<j} P(a_i, a_j \mid r_{ij}) = \prod_{i<j} \frac{P(r_{ij} \mid a_i, a_j)\, P(a_i, a_j)}{P(r_{ij})}

where the product runs over all amino acid pairs a_i, a_j (with i < j), and r_{ij} is the distance between amino acids i and j. Obviously, the negative of the logarithm of the expression has the same functional form as the classic pairwise distance statistical PMFs, with the denominator playing the role of the reference state. This explanation has two shortcomings: it relies on the unfounded assumption that the likelihood can be expressed as a product of pairwise probabilities, and it is purely qualitative.
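To make the correspondence explicit, taking the negative logarithm of the pairwise factorization above and dropping terms that do not depend on the structure gives:

-\ln P(A \mid X) \approx -\sum_{i<j} \ln \frac{P(r_{ij} \mid a_i, a_j)}{P(r_{ij})} + \mathrm{const}

which, up to the factor kT, is a sum of pairwise pseudo-energies with P(r_{ij}) playing the role of the reference distribution.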

Probability kinematics

Hamelryck and co-workers later gave a quantitative explanation for the statistical potentials, according to which they approximate a form of probabilistic reasoning due to Richard Jeffrey and named probability kinematics. This variant of Bayesian thinking (sometimes called "Jeffrey conditioning") allows updating a prior distribution based on new information on the probabilities of the elements of a partition on the support of the prior. From this point of view, (i) it is not necessary to assume that the database of protein structures—used to build the potentials—follows a Boltzmann distribution, (ii) statistical potentials generalize readily beyond pairwise distances, and (iii) the reference ratio is determined by the prior distribution.
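As a rough illustration of Jeffrey conditioning, the following sketch updates a discrete prior so that each cell of a partition receives a newly prescribed probability while the conditional distribution within each cell is preserved. This is a generic toy example, not the construction used by Hamelryck and co-workers; the discrete state space and the dictionary of new cell probabilities are assumptions made for illustration.

import numpy as np

# Minimal sketch of Jeffrey conditioning (probability kinematics), illustrative only.
# prior:  array of prior probabilities over a discrete state space.
# labels: partition cell label of each state.
# new_marginals: dict mapping each cell label to its newly prescribed probability.
def jeffrey_update(prior, labels, new_marginals):
    """Rescale the prior so each partition cell carries its new probability mass."""
    prior = np.asarray(prior, dtype=float)
    labels = np.asarray(labels)
    updated = np.zeros_like(prior)
    for cell, target in new_marginals.items():
        mask = labels == cell
        cell_mass = prior[mask].sum()
        updated[mask] = target * prior[mask] / cell_mass  # target * Q(state | cell)
    return updated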

Reference ratio

Expressions that resemble statistical PMFs naturally result from the application of probability theory to solve a fundamental problem that arises in protein structure prediction: how to improve an imperfect probability distribution Q(X) over a first variable X using a probability distribution P(Y) over a second variable Y, with Y = f(X). Typically, X and Y are fine and coarse grained variables, respectively. For example, Q(X) could concern the local structure of the protein, while P(Y) could concern the pairwise distances between the amino acids. In that case, X could for example be a vector of dihedral angles that specifies all atom positions (assuming ideal bond lengths and angles). In order to combine the two distributions, such that the local structure will be distributed according to Q(X), while the pairwise distances will be distributed according to P(Y), the following expression is needed:

P(X, Y) = \frac{P(Y)}{Q(Y)} Q(X)

where Q(Y) is the distribution over Y implied by Q(X). The ratio in the expression corresponds to the PMF. Typically, Q(X) is brought in by sampling (typically from a fragment library), and not explicitly evaluated; the ratio, which in contrast is explicitly evaluated, corresponds to Sippl's PMF. This explanation is quantitative, and allows the generalization of statistical PMFs from pairwise distances to arbitrary coarse grained variables. It also provides a rigorous definition of the reference state, which is implied by Q(X). Conventional applications of pairwise distance statistical PMFs usually lack two necessary features to make them fully rigorous: the use of a proper probability distribution over pairwise distances in proteins, and the recognition that the reference state is rigorously defined by Q(X).
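The following sketch illustrates one way the reference ratio could be applied in practice, assuming samples of X are drawn from Q(X) (for example from a fragment library) and that the distributions over the coarse-grained variable Y are available as callables; all function names here are hypothetical.

import numpy as np

# Minimal sketch of the reference ratio idea (illustrative only).
# samples: draws of the fine-grained variable X from Q(X).
# f:       map from X to the coarse-grained variable Y = f(X).
# p_y, q_y: callables returning the probability of a value of Y under P and Q.
def reference_ratio_weights(samples, f, p_y, q_y):
    """Return normalized weights proportional to P(f(x)) / Q(f(x)) for each sample."""
    weights = np.array([p_y(f(x)) / q_y(f(x)) for x in samples])
    return weights / weights.sum()  # reweighted samples follow P over Y, keeping Q(X | Y)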

Applications

Statistical potentials are used as energy functions in the assessment of an ensemble of structural models produced by homology modeling or protein threading. Many differently parameterized statistical potentials have been shown to successfully identify the native state structure from an ensemble of decoy or non-native structures.[15] Statistical potentials are not only used for protein structure prediction, but also for modelling the protein folding pathway.[16] [17]
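As a schematic illustration of decoy discrimination, the following sketch ranks candidate models with an arbitrary statistical potential and keeps the structure with the lowest pseudo-energy; the score callable is a placeholder for whichever potential is being assessed.

# Minimal sketch of decoy discrimination (illustrative only).
# decoys: iterable of candidate structural models.
# score:  any statistical potential mapping a model to a pseudo-energy (lower is better).
def select_best_decoy(decoys, score):
    """Return the candidate model with the lowest pseudo-energy."""
    return min(decoys, key=score)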


Notes and References

  1. Miyazawa S, Jernigan R . 1985 . Estimation of effective interresidue contact energies from protein crystal structures: quasi-chemical approximation . Macromolecules . 18 . 3. 534–552 . 10.1021/ma00145a039. 1985MaMol..18..534M . 10.1.1.206.715 .
  2. Sippl MJ . 1990 . Calculation of conformational ensembles from potentials of mean force. An approach to the knowledge-based prediction of local structures in globular proteins . 10.1016/s0022-2836(05)80269-4 . J Mol Biol . 213 . 4. 859–883 . 2359125 .
  3. Thomas PD, Dill KA . 1996 . Statistical potentials extracted from protein structures: how accurate are they? . J Mol Biol . 257 . 2. 457–469 . 10.1006/jmbi.1996.0175. 8609636 .
  4. Ben-Naim A . 1997 . Statistical potentials extracted from protein structures: Are these meaningful potentials? . J Chem Phys . 107 . 9. 3698–3706 . 10.1063/1.474725. 1997JChPh.107.3698B .
  5. Hamelryck T, Borg M, Paluszewski M, et al . Potentials of mean force for protein structure prediction vindicated, formalized and generalized . PLOS ONE . 5 . 11 . e13714 . 2010 . 21103041 . 2978081 . 10.1371/journal.pone.0013714 . 1008.4006 . 2010PLoSO...513714H .
  6. Rooman M, Wodak S . 1995 . Are database-derived potentials valid for scoring both forward and inverted protein folding? . Protein Eng . 8 . 9. 849–858 . 10.1093/protein/8.9.849. 8746722 .
  7. Koppensteiner WA, Sippl MJ . 1998 . Knowledge-based potentials–back to the roots . Biochemistry Mosc. . 63 . 3. 247–252 . 9526121 .
  8. Shortle D . 2003 . Propensities, probabilities, and the Boltzmann hypothesis . Protein Sci . 12 . 6. 1298–1302 . 10.1110/ps.0306903. 12761401 . 2323900.
  9. Sippl MJ, Ortner M, Jaritz M, Lackner P, Flockner H . 1996 . Helmholtz free energies of atom pair interactions in proteins . Fold Des . 1 . 4. 289–98 . 10.1016/s1359-0278(96)00042-9. 9079391 .
  10. Chandler D (1987) Introduction to Modern Statistical Mechanics. New York: Oxford University Press, USA.
  11. Eramian. David. Shen. Min‐yi. Devos. Damien. Melo. Francisco. Sali. Andrej. Marti-Renom. Marc. 2006. A composite score for predicting errors in protein structure models. Protein Science. 15. 7. 1653–1666. 10.1110/ps.062095806. 2242555. 16751606.
  12. Zhao. Feng. Xu. Jinbo. 2012. A Position-Specific Distance-Dependent Statistical Potential for Protein Structure and Functional Study. Structure. 20. 6. 1118–1126. 10.1016/j.str.2012.04.003. 3372698. 22608968.
  13. Senior AW, Evans R, Jumper J, et al . Improved protein structure prediction using potentials from deep learning . Nature . 577 . 7792 . 706–710 . 2020 . 31942072 . 10.1038/s41586-019-1923-7 . 2020Natur.577..706S . 210221987 .
  14. Simons KT, Kooperberg C, Huang E, Baker D . 1997 . Assembly of protein tertiary structures from fragments with similar local sequences using simulated annealing and Bayesian scoring functions . J Mol Biol . 268 . 1. 209–225 . 10.1006/jmbi.1997.0959. 9149153 . 10.1.1.579.5647 .
  15. Lam SD, Das S, Sillitoe I, Orengo C . An overview of comparative modelling and resources dedicated to large-scale modelling of genome sequences . Acta Crystallogr D . 73 . 8 . 628–640 . 2017 . 28777078 . 10.1107/S2059798317008920 . 5571743.
  16. Kmiecik S and Kolinski A . Characterization of protein-folding pathways by reduced-space modeling . Proc. Natl. Acad. Sci. U.S.A. . 104 . 30 . 12330–12335 . 2007 . 17636132 . 10.1073/pnas.0702265104 . 1941469. 2007PNAS..10412330K . free .
  17. Adhikari AN, Freed KF, Sosnick TR . De novo prediction of protein folding pathways and structure using the principle of sequential stabilization . Proc. Natl. Acad. Sci. U.S.A. . 109 . 43 . 17442–17447 . 2012 . 10.1073/pnas.1209000109 . 23045636 . 3491489. 2012PNAS..10917442A . free .