Statistical potential
In protein structure prediction, statistical potentials or knowledge-based potentials are scoring functions derived from an analysis of known protein structures in the Protein Data Bank.
The original method to obtain such potentials is the quasi-chemical approximation, due to Miyazawa and Jernigan. It was later followed by the potential of mean force, developed by Sippl. Although the obtained scores are often considered approximations of the free energy—and are therefore referred to as pseudo-energies—this physical interpretation is incorrect. Nonetheless, they are applied with success in many cases, because they frequently correlate with actual Gibbs free energy differences.
Overview
Possible features to which a pseudo-energy can be assigned include:
- interatomic distances,
- torsion angles,
- solvent exposure,
- or hydrogen bond geometry.
History
Initial development
Many textbooks present the statistical PMFs as proposed by Sippl as a simple consequence of the Boltzmann distribution, as applied to pairwise distances between amino acids. This is incorrect, but a useful start to introduce the construction of the potential in practice.

The Boltzmann distribution applied to a specific pair of amino acids is given by:

$$P(r) = \frac{1}{Z} e^{-F(r)/kT}$$

where $r$ is the distance, $k$ is the Boltzmann constant, $T$ is the temperature and $Z$ is the partition function, with

$$Z = \int e^{-F(r)/kT}\, dr$$

The quantity $F(r)$ is the free energy assigned to the pairwise system. Simple rearrangement results in the inverse Boltzmann formula, which expresses the free energy $F(r)$ as a function of $P(r)$:

$$F(r) = -kT \ln P(r) - kT \ln Z$$
To construct a PMF, one then introduces a so-called reference state with a corresponding distribution $Q_R$ and partition function $Z_R$, and calculates the following free energy difference:

$$\Delta F(r) = -kT \ln \frac{P(r)}{Q_R(r)} - kT \ln \frac{Z}{Z_R}$$

The reference state typically results from a hypothetical system in which the specific interactions between the amino acids are absent. The second term involving $Z$ and $Z_R$ can be ignored, as it is a constant.

In practice, $P(r)$ is estimated from the database of known protein structures, while $Q_R(r)$ typically results from calculations or simulations. For example, $P(r)$ could be the conditional probability of finding the atoms of a valine and a serine at a given distance $r$ from each other, giving rise to the free energy difference $\Delta F$. The total free energy difference of a protein, $\Delta F_{\textrm{T}}$, is then claimed to be the sum of all the pairwise free energies:

$$\Delta F_{\textrm{T}} = \sum_{i<j} \Delta F(r_{ij} \mid a_i, a_j) = -kT \sum_{i<j} \ln \frac{P(r_{ij} \mid a_i, a_j)}{Q_R(r_{ij} \mid a_i, a_j)}$$

where the sum runs over all amino acid pairs $a_i, a_j$ (with $i < j$) and $r_{ij}$ is their corresponding distance. In many studies $Q_R$ does not depend on the amino acid sequence.
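The construction described above (estimate P(r) from a structure database, estimate Q_R(r) from a reference model, take the negative logarithm of their ratio, and sum over pairs) can be sketched as follows. All counts, distance bins, and the kT value are illustrative assumptions, not real database statistics:

```python
import math
from collections import Counter

# Hypothetical binned distance counts (1 A bins) for one amino-acid pair type:
# "observed" from a structure database, "reference" from a model without
# specific interactions. The numbers are made up for this sketch.
observed = Counter({4: 10, 5: 40, 6: 80, 7: 60, 8: 30, 9: 20})
reference = Counter({4: 30, 5: 40, 6: 50, 7: 50, 8: 40, 9: 30})

KT = 0.593  # kcal/mol at roughly 298 K

def pmf(observed, reference, kT=KT):
    """Inverse-Boltzmann pseudo-energy per distance bin:
    dF(r) = -kT * ln(P(r) / Q_R(r)), with the constant term dropped."""
    n_obs = sum(observed.values())
    n_ref = sum(reference.values())
    return {
        r: -kT * math.log((observed[r] / n_obs) / (reference[r] / n_ref))
        for r in observed
    }

def score(distances, energies):
    """Total pseudo-energy of a structure: sum of its pairwise terms."""
    return sum(energies[r] for r in distances)

energies = pmf(observed, reference)
# Bins enriched in real structures relative to the reference get negative
# (favourable) pseudo-energies; depleted bins get positive ones.
print(energies[6] < 0 < energies[4])
```

Real implementations differ mainly in how the reference state is defined and in the atom types and bin widths used; the inverse-Boltzmann step itself is the same.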
Conceptual issues
Intuitively, it is clear that a low value for $\Delta F_{\textrm{T}}$ indicates that the set of distances in a structure is more likely in proteins than in the reference state. However, the physical meaning of these statistical PMFs has been widely disputed since their introduction. The main issues are:
- The wrong interpretation of this "potential" as a true, physically valid potential of mean force;
- The nature of the so-called reference state and its optimal formulation;
- The validity of generalizations beyond pairwise distances.
Controversial analogy
In response to the issue regarding the physical validity, the first justification of statistical PMFs was attempted by Sippl. It was based on an analogy with the statistical physics of liquids. For liquids, the potential of mean force is related to the radial distribution function $g(r)$, which is given by:

$$g(r) = \frac{P(r)}{P_R(r)}$$

where $P(r)$ and $P_R(r)$ are the respective probabilities of finding two particles at a distance $r$ from each other in the liquid and in the reference state. For liquids, the reference state is clearly defined; it corresponds to the ideal gas, consisting of non-interacting particles. The two-particle potential of mean force $w(r)$ is related to $g(r)$ by:

$$w(r) = -kT \ln g(r)$$

According to the reversible work theorem, the two-particle potential of mean force $w(r)$ is the reversible work required to bring two particles in the liquid from infinite separation to a distance $r$ from each other.
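The relation $w(r) = -kT \ln g(r)$ can be evaluated directly from tabulated $g(r)$ values. A minimal sketch in reduced units, with illustrative (made-up) numbers for a simple liquid:

```python
import math

KT = 1.0  # reduced units

def pmf_from_g(g_of_r, kT=KT):
    """Two-particle potential of mean force w(r) = -kT * ln g(r),
    assuming g(r) > 0 for every tabulated distance."""
    return {r: -kT * math.log(g) for r, g in g_of_r.items()}

# Illustrative g(r): a peak at the first coordination shell, a dip after it,
# and convergence to 1 at large separation.
g = {1.0: 2.5, 1.5: 0.8, 2.0: 1.1, 5.0: 1.0}
w = pmf_from_g(g)
# g > 1 (more likely than in the ideal gas) gives w < 0; g = 1 gives w = 0,
# consistent with vanishing interaction at large distances.
```

The well-defined ideal-gas reference is exactly what the protein case lacks, which is the point of the controversy discussed here.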
Sippl justified the use of statistical PMFs—a few years after he introduced them for use in protein structure prediction—by appealing to the analogy with the reversible work theorem for liquids. For liquids, $g(r)$ can be experimentally measured using small-angle X-ray scattering; for proteins, $P(r)$ is obtained from the set of known protein structures, as explained in the previous section. However, as Ben-Naim wrote in a publication on the subject:
the quantities, referred to as "statistical potentials," "structure
based potentials," or "pair potentials of mean force", as derived from
the protein data bank, are neither "potentials" nor "potentials of
mean force," in the ordinary sense as used in the literature on
liquids and solutions.
Moreover, this analogy does not solve the issue of how to specify a suitable reference state for proteins.
Machine learning
In the mid-2000s, authors started to combine multiple statistical potentials, derived from different structural features, into composite scores. For that purpose, they used machine learning techniques, such as support vector machines. Probabilistic neural networks have also been applied for the training of a position-specific distance-dependent statistical potential. In 2016, the DeepMind artificial intelligence research laboratory started to apply deep learning techniques to the development of a torsion- and distance-dependent statistical potential. The resulting method, named AlphaFold, won the 13th Critical Assessment of Techniques for Protein Structure Prediction by correctly predicting the most accurate structure for 25 out of 43 free modelling domains.

Explanation
Bayesian probability
Baker and co-workers justified statistical PMFs from a Bayesian point of view and used these insights in the construction of the coarse grained ROSETTA energy function. According to Bayesian probability calculus, the conditional probability $P(X \mid A)$ of a structure $X$, given the amino acid sequence $A$, can be written as:

$$P(X \mid A) = \frac{P(A \mid X) P(X)}{P(A)} \propto P(A \mid X) P(X)$$

$P(X \mid A)$ is proportional to the product of the likelihood $P(A \mid X)$ times the prior $P(X)$. By assuming that the likelihood can be approximated as a product of pairwise probabilities, and applying Bayes' theorem, the likelihood can be written as:

$$P(A \mid X) \approx \prod_{i<j} P(a_i, a_j \mid r_{ij}) \propto \prod_{i<j} \frac{P(r_{ij} \mid a_i, a_j)}{P(r_{ij})}$$

where the product runs over all amino acid pairs $a_i, a_j$ (with $i < j$), and $r_{ij}$ is the distance between amino acids $i$ and $j$.
Obviously, the negative of the logarithm of this expression has the same functional form as the classic pairwise distance statistical PMFs, with the denominator playing the role of the reference state. This explanation has two shortcomings: it relies on the unfounded assumption that the likelihood can be expressed as a product of pairwise probabilities, and it is purely qualitative.
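Under the pairwise approximation, taking the negative logarithm of the likelihood turns the product of probability ratios into a sum of terms with the PMF form, which a quick numeric check confirms. The probabilities below are made up purely for illustration:

```python
import math

# Hypothetical (conditional, reference) probability pairs for three residue
# pairs in some structure: P(r_ij | a_i, a_j) and the denominator P(r_ij).
pairs = [(0.12, 0.05), (0.30, 0.25), (0.08, 0.10)]

# Likelihood approximated as a product of pairwise ratios:
likelihood = math.prod(p / q for p, q in pairs)

# The negative logarithm of the likelihood ...
neg_log = -math.log(likelihood)

# ... equals the sum of pairwise pseudo-energy terms -ln(p/q) (in units of kT):
pmf_sum = sum(-math.log(p / q) for p, q in pairs)

print(abs(neg_log - pmf_sum) < 1e-12)
```

This identity is why the Bayesian reading reproduces the functional form of the pairwise statistical PMFs, even though it does not justify the pairwise factorisation itself.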