In computational chemistry, a constraint algorithm is a method for satisfying the Newtonian motion of a rigid body which consists of mass points. By contrast, a restraint algorithm maintains the distances between mass points only approximately, by means of added penalty forces. The general approaches to imposing constraints are: (i) choosing new unconstrained coordinates (internal coordinates); (ii) introducing explicit constraint forces; or (iii) minimizing constraint forces implicitly by the technique of Lagrange multipliers or projection methods.
Constraint algorithms are often applied to molecular dynamics simulations. Although such simulations are sometimes performed using internal coordinates that automatically satisfy the bond-length, bond-angle and torsion-angle constraints, simulations may also be performed using explicit or implicit constraint forces for these three constraints. However, explicit constraint forces give rise to inefficiency; more computational power is required to get a trajectory of a given length. Therefore, internal coordinates and implicit-force constraint solvers are generally preferred.
Constraint algorithms achieve computational efficiency by neglecting motion along some degrees of freedom. For instance, in atomistic molecular dynamics, typically the lengths of covalent bonds to hydrogen are constrained; however, constraint algorithms should not be used if vibrations along these degrees of freedom are important for the phenomenon being studied.
The motion of a set of N particles can be described by a set of second-order ordinary differential equations, Newton's second law, which can be written in matrix form
$$\mathbf{M} \cdot \frac{d^2\mathbf{q}}{dt^2} = \mathbf{f} = -\frac{\partial V}{\partial \mathbf{q}}$$
where $\mathbf{M}$ is a mass matrix and $\mathbf{q}$ is the vector of generalized coordinates that describe the particles' positions. For example, the vector $\mathbf{q}$ may be the $3N$ Cartesian coordinates of the particle positions $\mathbf{r}_k$, where $k$ runs from 1 to $N$; in the absence of constraints, $\mathbf{M}$ would be the $3N \times 3N$ diagonal matrix of the particle masses. The vector $\mathbf{f}$ represents the generalized forces and the scalar $V(\mathbf{q})$ represents the potential energy, both of which are functions of the generalized coordinates $\mathbf{q}$.
If $M$ constraints are present, the coordinates must also satisfy $M$ time-independent algebraic equations

$$g_j(\mathbf{q}) = 0$$

where the index $j$ runs from 1 to $M$. For brevity, these functions $g_j$ are grouped into an $M$-dimensional vector $\mathbf{g}$ below. The task is to solve the combined set of differential-algebraic equations (DAEs), instead of just the ordinary differential equations (ODEs) of Newton's second law.
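For instance (a minimal worked example in this notation, not from the source), a diatomic molecule whose bond length is fixed at $d$ contributes the single constraint

$$g_1(\mathbf{q}) = \left\|\mathbf{r}_1 - \mathbf{r}_2\right\|^2 - d^2 = 0,$$

so the six Cartesian equations of motion must be solved together with this one algebraic equation.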
This problem was studied in detail by Joseph Louis Lagrange, who laid out most of the methods for solving it.[1] The simplest approach is to define new generalized coordinates that are unconstrained; this approach eliminates the algebraic equations and reduces the problem once again to solving an ordinary differential equation. Such an approach is used, for example, in describing the motion of a rigid body; the position and orientation of a rigid body can be described by six independent, unconstrained coordinates, rather than describing the positions of the particles that make it up and the constraints among them that maintain their relative distances. The drawback of this approach is that the equations may become unwieldy and complex; for example, the mass matrix M may become non-diagonal and depend on the generalized coordinates.
A second approach is to introduce explicit forces that work to maintain the constraint; for example, one could introduce strong spring forces that enforce the distances among mass points within a "rigid" body. The two difficulties of this approach are that the constraints are not satisfied exactly, and the strong forces may require very short time-steps, making simulations inefficient computationally (illustrated in the sketch below).
A third approach is to use a method such as Lagrange multipliers or projection to the constraint manifold to determine the coordinate adjustments necessary to satisfy the constraints.
Finally, there are various hybrid approaches in which different sets of constraints are satisfied by different methods, e.g., internal coordinates, explicit forces and implicit-force solutions.
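To illustrate why the explicit-force (second) approach becomes inefficient, the following Python sketch (the particle setup, stiffness values, and function name are assumptions made for illustration, not taken from the source) integrates two unit-mass particles joined by a stiff penalty spring; the stable time step shrinks as the spring stiffness grows:

    import numpy as np

    # Minimal sketch (assumed setup): two unit-mass particles joined by a
    # stiff harmonic "restraint" spring of stiffness k and rest length d0.
    # The radial vibration has angular frequency ~ sqrt(2*k), so an explicit
    # integrator is only stable when dt is well below 1/sqrt(k); tightening
    # the constraint (larger k) therefore forces shorter time steps.

    def residual_after(k, dt, n_steps=100, d0=1.0):
        x = np.array([[0.0, 0.0, 0.0], [1.05, 0.0, 0.0]])  # slightly stretched bond
        v = np.zeros_like(x)
        for _ in range(n_steps):
            r = x[0] - x[1]
            dist = np.linalg.norm(r)
            f = -k * (dist - d0) * (r / dist)  # spring force on particle 0
            v[0] += f * dt                     # symplectic Euler, unit masses
            v[1] -= f * dt
            x += v * dt
        return abs(np.linalg.norm(x[0] - x[1]) - d0)

    print(residual_after(k=1.0e6, dt=1.0e-2))  # unstable: residual blows up
    print(residual_after(k=1.0e6, dt=1.0e-4))  # stable: residual stays bounded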
The simplest approach to satisfying constraints in energy minimization and molecular dynamics is to represent the mechanical system in so-called internal coordinates corresponding to unconstrained independent degrees of freedom of the system. For example, the dihedral angles of a protein are an independent set of coordinates that specify the positions of all the atoms without requiring any constraints. The difficulty of such internal-coordinate approaches is twofold: the Newtonian equations of motion become much more complex and the internal coordinates may be difficult to define for cyclic systems of constraints, e.g., in ring puckering or when a protein has a disulfide bond.
The original methods for efficient recursive energy minimization in internal coordinates were developed by Gō and coworkers.[2] [3]
Efficient recursive, internal-coordinate constraint solvers were extended to molecular dynamics.[4] [5] Analogous methods were applied later to other systems.[6] [7] [8]
In most molecular dynamics simulations that use constraint algorithms, constraints are enforced using the method of Lagrange multipliers. Given a set of $n$ holonomic distance constraints at time $t$,
$$\sigma_k(t) := \left\|\mathbf{x}_{k\alpha}(t) - \mathbf{x}_{k\beta}(t)\right\|^2 - d_k^2 = 0, \qquad k = 1 \ldots n$$
where $\mathbf{x}_{k\alpha}(t)$ and $\mathbf{x}_{k\beta}(t)$ are the positions of the two particles involved in the $k$th constraint at time $t$, and $d_k$ is the prescribed inter-particle distance.
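Because each $\sigma_k$ is quadratic in the positions, its gradients, which appear throughout the update formulas below, take the simple closed form

$$\frac{\partial\sigma_k}{\partial\mathbf{x}_{k\alpha}} = 2\left(\mathbf{x}_{k\alpha} - \mathbf{x}_{k\beta}\right), \qquad \frac{\partial\sigma_k}{\partial\mathbf{x}_{k\beta}} = -2\left(\mathbf{x}_{k\alpha} - \mathbf{x}_{k\beta}\right).$$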
The forces due to these constraints are added to the equations of motion, resulting in, for each of the $N$ particles in the system,
$$m_i\,\frac{\partial^2\mathbf{x}_i(t)}{\partial t^2} = -\frac{\partial}{\partial\mathbf{x}_i}\left[V(\mathbf{x}_i(t)) - \sum_{k=1}^{n}\lambda_k\,\sigma_k(t)\right], \qquad i = 1 \ldots N.$$
Adding the constraint forces does not change the total energy, as the net work done by the constraint forces (taken over the set of particles that the constraints act on) is zero. Note that the sign convention for $\lambda_k$ is arbitrary; some references use the opposite sign.
Integrating both sides of the equations of motion with respect to time yields the constrained coordinates of the particles at time $t + \Delta t$,

$$\mathbf{x}_i(t+\Delta t) = \hat{\mathbf{x}}_i(t+\Delta t) + \sum_{k=1}^{n}\lambda_k\,\frac{\partial\sigma_k(t)}{\partial\mathbf{x}_i}\left(\Delta t\right)^2 m_i^{-1}, \qquad i = 1 \ldots N,$$

where $\hat{\mathbf{x}}_i(t+\Delta t)$ is the unconstrained (uncorrected) position of the $i$th particle obtained by integrating the equations of motion without the constraint forces.
To satisfy the constraints at the next time step, the Lagrange multipliers must be chosen such that

$$\sigma_k(t+\Delta t) := \left\|\mathbf{x}_{k\alpha}(t+\Delta t) - \mathbf{x}_{k\beta}(t+\Delta t)\right\|^2 - d_k^2 = 0.$$
This implies solving the system of $n$ non-linear equations

$$\sigma_j(t+\Delta t) := \left\|\hat{\mathbf{x}}_{j\alpha}(t+\Delta t) - \hat{\mathbf{x}}_{j\beta}(t+\Delta t) + \sum_{k=1}^{n}\lambda_k\left(\Delta t\right)^2\left[\frac{\partial\sigma_k(t)}{\partial\mathbf{x}_{j\alpha}}\,m_{j\alpha}^{-1} - \frac{\partial\sigma_k(t)}{\partial\mathbf{x}_{j\beta}}\,m_{j\beta}^{-1}\right]\right\|^2 - d_j^2 = 0, \qquad j = 1 \ldots n$$
simultaneously for the $n$ unknown Lagrange multipliers $\lambda_k$.
This system of $n$ non-linear equations in $n$ unknowns is commonly solved using the Newton–Raphson method, in which the solution vector $\underline{\lambda}$ is updated iteratively:

$$\underline{\lambda}^{(l+1)} \leftarrow \underline{\lambda}^{(l)} - \mathbf{J}_\sigma^{-1}\,\underline{\sigma}(t+\Delta t)$$
where $\mathbf{J}_\sigma$ is the Jacobian of the constraint equations $\sigma_k$ with respect to the Lagrange multipliers $\lambda_j$:

$$\mathbf{J} = \begin{pmatrix}
\dfrac{\partial\sigma_1}{\partial\lambda_1} & \dfrac{\partial\sigma_1}{\partial\lambda_2} & \cdots & \dfrac{\partial\sigma_1}{\partial\lambda_n} \\[5pt]
\dfrac{\partial\sigma_2}{\partial\lambda_1} & \dfrac{\partial\sigma_2}{\partial\lambda_2} & \cdots & \dfrac{\partial\sigma_2}{\partial\lambda_n} \\[5pt]
\vdots & \vdots & \ddots & \vdots \\[5pt]
\dfrac{\partial\sigma_n}{\partial\lambda_1} & \dfrac{\partial\sigma_n}{\partial\lambda_2} & \cdots & \dfrac{\partial\sigma_n}{\partial\lambda_n}
\end{pmatrix}.$$
Since not all particles contribute to all of the constraints, $\mathbf{J}_\sigma$ is sparse; for systems of disconnected molecules, $\mathbf{J}_\sigma$ is block-diagonal and can be solved block by block, i.e., molecule by molecule. Instead of constantly updating the vector $\underline{\lambda}$, the iteration can be started with $\underline{\lambda}^{(0)} = 0$, in which case the constraint residuals $\sigma_k(t)$ and the Jacobian elements $\partial\sigma_k(t)/\partial\lambda_j$ take simple closed forms. Evaluated at $\lambda = 0$, the Jacobian is
$$J_{ij} = \left.\frac{\partial\sigma_j}{\partial\lambda_i}\right|_{\lambda=0} = 2\left[\hat{\mathbf{x}}_{j\alpha} - \hat{\mathbf{x}}_{j\beta}\right]\left[\frac{\partial\sigma_i}{\partial\mathbf{x}_{j\alpha}}\,m_{j\alpha}^{-1} - \frac{\partial\sigma_i}{\partial\mathbf{x}_{j\beta}}\,m_{j\beta}^{-1}\right],$$
and the multipliers $\lambda$ are then obtained as

$$\lambda_j = -J^{-1}\left[\left\|\hat{\mathbf{x}}_{j\alpha}(t+\Delta t) - \hat{\mathbf{x}}_{j\beta}(t+\Delta t)\right\|^2 - d_j^2\right].$$
After each iteration, the unconstrained particle positions are updated using

$$\hat{\mathbf{x}}_i(t+\Delta t) \leftarrow \hat{\mathbf{x}}_i(t+\Delta t) + \sum_{k=1}^{n}\lambda_k\,\frac{\partial\sigma_k(t)}{\partial\mathbf{x}_i}\left(\Delta t\right)^2 m_i^{-1}.$$
The vector is then reset to $\underline{\lambda} = 0$, and the above procedure is repeated until the constraint equations $\sigma_k(t+\Delta t) = 0$ are satisfied to within a prescribed numerical tolerance.
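As a concrete illustration of this procedure, the following Python sketch (the function name, the pairwise-distance setup, and all variable names are assumptions for illustration, not code from any cited package) applies the iteration above to a set of distance constraints: it builds the Jacobian at $\lambda = 0$, solves for the multipliers, corrects the positions, and repeats until the residuals fall below a tolerance.

    import numpy as np

    def constrain_positions(x_old, x_hat, bonds, masses, dt, tol=1e-10, max_iter=50):
        """Correct unconstrained positions x_hat so that each (a, b, d) in
        `bonds` satisfies |x[a] - x[b]| = d, following the Lagrange-multiplier
        iteration described above.
        x_old: positions at time t, shape (N, 3)
        x_hat: unconstrained positions at t + dt, shape (N, 3)
        masses: length-N array of particle masses."""
        x = x_hat.copy()
        n = len(bonds)
        for _ in range(max_iter):
            # Residuals sigma_j = |x_a - x_b|^2 - d^2 at the current iterate
            sigma = np.array([(x[a] - x[b]) @ (x[a] - x[b]) - d * d
                              for a, b, d in bonds])
            if np.max(np.abs(sigma)) < tol:
                break
            # Jacobian J[i, j] = d(sigma_i)/d(lambda_j) at lambda = 0; the
            # constraint gradients are taken at the old positions x(t), as in
            # the update formula above (grad of sigma_k wrt x_a is 2*(x_a - x_b)).
            J = np.zeros((n, n))
            for i, (ai, bi, _) in enumerate(bonds):
                r_i = x[ai] - x[bi]                  # current bond vector i
                for j, (aj, bj, _) in enumerate(bonds):
                    r_j = x_old[aj] - x_old[bj]      # old bond vector j
                    coeff = 0.0                      # mass-weighted coupling of
                    if aj == ai: coeff += 1.0 / masses[aj]   # shared particles
                    if aj == bi: coeff -= 1.0 / masses[aj]
                    if bj == ai: coeff -= 1.0 / masses[bj]
                    if bj == bi: coeff += 1.0 / masses[bj]
                    J[i, j] = 4.0 * dt * dt * coeff * (r_i @ r_j)
            lam = np.linalg.solve(J, -sigma)         # Newton step for multipliers
            # Apply corrections lambda_k * grad(sigma_k) * dt^2 / m, reset
            # lambda to zero, and repeat
            for k, (a, b, _) in enumerate(bonds):
                g = 2.0 * (x_old[a] - x_old[b])
                x[a] += lam[k] * g * dt * dt / masses[a]
                x[b] -= lam[k] * g * dt * dt / masses[b]
        return x

For a rigid three-site water model, for example, this would be called with three constraints per molecule (the two O–H bonds and the H–H distance).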
Although a number of algorithms exist for computing the Lagrange multipliers, they differ only in the method used to solve this system of equations; for this purpose, quasi-Newton methods are commonly used.
The SETTLE algorithm[10] solves the system of non-linear equations analytically for $n = 3$ constraints in constant time. Although it does not scale to larger numbers of constraints, it is very often used to constrain rigid water molecules, which are present in almost all biological simulations and are usually modelled using three constraints (e.g. the SPC/E and TIP3P water models).
The SHAKE algorithm was first developed for satisfying a bond geometry constraint during molecular dynamics simulations.[11] The method was then generalised to handle any holonomic constraint, such as those required to maintain constant bond angles, or molecular rigidity.
In the SHAKE algorithm, the system of non-linear constraint equations is solved using the Gauss–Seidel method, which approximates the solution of the linear system of equations in the Newton–Raphson step,

$$\underline{\lambda} = -\mathbf{J}_\sigma^{-1}\,\underline{\sigma}.$$
This amounts to assuming that $\mathbf{J}_\sigma$ is diagonally dominant and solving the $k$th equation only for the $k$th unknown. In practice, one iteratively computes
$$\begin{align}
\lambda_k &\leftarrow \frac{\sigma_k(t)}{\partial\sigma_k(t)/\partial\lambda_k},\\
\mathbf{x}_{k\alpha} &\leftarrow \mathbf{x}_{k\alpha} + \lambda_k\,\frac{\partial\sigma_k(t)}{\partial\mathbf{x}_{k\alpha}},\\
\mathbf{x}_{k\beta} &\leftarrow \mathbf{x}_{k\beta} + \lambda_k\,\frac{\partial\sigma_k(t)}{\partial\mathbf{x}_{k\beta}},
\end{align}$$
for all $k = 1 \ldots$ $n$ in turn, cycling through the constraints until the constraint equations $\sigma_k(t+\Delta t) = 0$ are satisfied to within a prescribed tolerance.
The calculation cost of each iteration is $\mathcal{O}(n)$, and the iterations themselves converge only linearly.
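A minimal sketch of this Gauss–Seidel sweep in Python (the variable names and distance-constraint setup are illustrative assumptions, not code from the SHAKE papers):

    import numpy as np

    def shake(x_old, x_hat, bonds, masses, dt, tol=1e-10, max_sweeps=500):
        """Correct each constraint in turn using only the diagonal Jacobian
        element, sweeping repeatedly until all residuals are below tol."""
        x = x_hat.copy()
        for _ in range(max_sweeps):
            converged = True
            for a, b, d in bonds:
                r_new = x[a] - x[b]
                sigma = r_new @ r_new - d * d
                if abs(sigma) > tol:
                    converged = False
                    r_old = x_old[a] - x_old[b]   # constraint gradient is 2*r_old
                    # Diagonal element d(sigma)/d(lambda) for this constraint:
                    dsdl = (4.0 * dt * dt * (1.0 / masses[a] + 1.0 / masses[b])
                            * (r_new @ r_old))
                    lam = -sigma / dsdl           # sign conventions vary by reference
                    x[a] += 2.0 * lam * r_old * dt * dt / masses[a]
                    x[b] -= 2.0 * lam * r_old * dt * dt / masses[b]
            if converged:
                break
        return x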
A noniterative form of SHAKE was developed later on.[12]
Several variants of the SHAKE algorithm exist. Although they differ in how they compute or apply the constraints themselves, the constraints are still modelled using Lagrange multipliers which are computed using the Gauss–Seidel method.
The original SHAKE algorithm is capable of constraining both rigid and flexible molecules (e.g. water, benzene and biphenyl) and introduces negligible error or energy drift into a molecular dynamics simulation.[13] One issue with SHAKE is that the number of iterations required to reach a given level of convergence rises as the molecular geometry becomes more complex; reaching full 64-bit machine accuracy (a relative tolerance of ≈ 10⁻¹⁶) can therefore require a large number of iterations.
A later extension of the method, QSHAKE (Quaternion SHAKE), was developed as a faster alternative for molecules composed of rigid units, but it is not as general-purpose.[15] It works satisfactorily for rigid loops such as aromatic ring systems, but fails for flexible loops, such as when a protein has a disulfide bond.[16]
Further extensions include RATTLE,[17] WIGGLE,[18] and MSHAKE.[19]
RATTLE works the same way as SHAKE[20] but uses the velocity Verlet time-integration scheme; WIGGLE extends SHAKE and RATTLE by using an initial estimate for the Lagrange multipliers $\lambda_k$ based on the particle velocities.
A final modification to the SHAKE algorithm is the P-SHAKE algorithm,[21] which is applied to very rigid or semi-rigid molecules. P-SHAKE computes and updates a pre-conditioner which is applied to the constraint gradients before the SHAKE iteration, causing the Jacobian $\mathbf{J}_\sigma$ to become diagonal or strongly diagonally dominant. The thus de-coupled constraints converge much faster (quadratically as opposed to linearly), at a cost of $\mathcal{O}(n^2)$ per iteration.
The M-SHAKE algorithm[22] solves the non-linear system of equations using Newton's method directly. In each iteration, the linear system of equations

$$\underline{\lambda} = -\mathbf{J}_\sigma^{-1}\,\underline{\sigma}$$

is solved exactly using an LU decomposition. Each iteration costs $\mathcal{O}(n^3)$ operations, but the solution converges quadratically, requiring fewer iterations than SHAKE.
This solution was first proposed in 1986 by Ciccotti and Ryckaert[23] under the title "the matrix method", but differed in the solution of the linear system of equations: Ciccotti and Ryckaert suggested inverting the matrix $\mathbf{J}_\sigma$ directly, once, in the first iteration only. The first iteration then costs $\mathcal{O}(n^3)$ operations, whereas each following iteration costs only $\mathcal{O}(n^2)$ operations for the matrix–vector multiplication. Because the Jacobian is never updated, however, convergence is only linear, albeit at a much faster rate than for the SHAKE algorithm.
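The distinction can be made concrete with a small sketch (the toy Jacobian and residual below are made-up stand-ins for $\mathbf{J}_\sigma$ and $\underline{\sigma}$, purely for illustration):

    import numpy as np

    # Toy stand-ins for the constraint Jacobian and residual vector
    J = np.array([[4.0, 1.0, 0.0],
                  [1.0, 4.0, 1.0],
                  [0.0, 1.0, 4.0]])      # diagonally dominant, like J_sigma
    sigma = np.array([0.02, -0.01, 0.005])

    # M-SHAKE: solve the linear system exactly in every Newton iteration
    # (an LU-based solve, O(n^3) per iteration, quadratic convergence)
    lam_mshake = np.linalg.solve(J, -sigma)

    # Matrix method (Ciccotti & Ryckaert): invert the first-iteration
    # Jacobian once, then reuse it; O(n^3) once, O(n^2) matrix-vector
    # products afterwards, at the price of only linear convergence
    J_inv = np.linalg.inv(J)             # computed in the first iteration only
    lam_matrix = -J_inv @ sigma          # reused in all later iterations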
Several variants of this approach based on sparse matrix techniques were studied by Barth et al.[24]
The SHAPE algorithm[25] is a multicenter analog of SHAKE for constraining rigid bodies of three or more centers. Like SHAKE, an unconstrained step is taken and then corrected by directly calculating and applying the rigid body rotation matrix that satisfies:
$$\mathbf{L}_{\mathrm{rigid}}\left(t + \frac{\Delta t}{2}\right) = \mathbf{L}_{\mathrm{nonrigid}}\left(t + \frac{\Delta t}{2}\right)$$
This approach involves a single 3×3 matrix diagonalization followed by three or four rapid Newton iterations to determine the rotation matrix. SHAPE yields trajectories identical to those of fully converged iterative SHAKE, yet it is found to be more efficient and more accurate than SHAKE when applied to systems involving three or more centers. It extends the ability of SHAKE-like constraints to linear systems with three or more atoms, planar systems with four or more atoms, and significantly larger rigid structures where SHAKE is intractable. It also allows rigid bodies to be linked with one or two common centers (e.g. peptide planes) by solving rigid-body constraints iteratively, in the same basic manner that SHAKE is used for atoms involved in more than one SHAKE constraint.
An alternative constraint method, LINCS (Linear Constraint Solver) was developed in 1997 by Hess, Bekker, Berendsen and Fraaije,[26] and was based on the 1986 method of Edberg, Evans and Morriss (EEM),[27] and a modification thereof by Baranyai and Evans (BE).[28]
LINCS applies Lagrange multipliers to the constraint forces and solves for the multipliers by using a series expansion to approximate the inverse of the Jacobian $\mathbf{J}_\sigma$,

$$\left(\mathbf{I} - \mathbf{J}_\sigma\right)^{-1} = \mathbf{I} + \mathbf{J}_\sigma + \mathbf{J}_\sigma^2 + \mathbf{J}_\sigma^3 + \cdots$$
in each step of the Newton iteration. This approximation only works for matrices with eigenvalues smaller than 1, making the LINCS algorithm suitable only for molecules with low connectivity.
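The behaviour of this truncated expansion can be seen in a short sketch (the coupling matrix below is a made-up stand-in for $\mathbf{J}_\sigma$, and the truncation orders shown are only illustrative):

    import numpy as np

    def truncated_inverse(Js, order):
        """Approximate (I - Js)^(-1) by the geometric series I + Js + Js^2 + ...,
        truncated after `order` powers; valid only if every eigenvalue of Js
        has magnitude below 1."""
        n = Js.shape[0]
        approx = np.eye(n)
        term = np.eye(n)
        for _ in range(order):
            term = term @ Js          # next power of the coupling matrix
            approx += term
        return approx

    # Weakly coupled constraints (low connectivity): eigenvalues are +/- 0.25
    Js = np.array([[0.0, 0.25],
                   [0.25, 0.0]])
    exact = np.linalg.inv(np.eye(2) - Js)
    for order in (1, 2, 4, 8):
        err = np.max(np.abs(truncated_inverse(Js, order) - exact))
        print(order, err)  # the error shrinks geometrically with the order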
LINCS has been reported to be 3–4 times faster than SHAKE.[26]
Hybrid methods have also been introduced in which the constraints are divided into two groups; the constraints of the first group are solved using internal coordinates, whereas those of the second group are solved using constraint forces, e.g., by a Lagrange multiplier or projection method.[29] [30] [31] This approach was pioneered by Lagrange[1] and results in Lagrange equations of the mixed type.[32]
[1] Lagrange, J. L. (1788). Mécanique analytique.
[32] Sommerfeld, Arnold (1952). Mechanics: Lectures on Theoretical Physics, Vol. 1. New York: Academic Press. ISBN 978-0-12-654670-5.