Constraint (computational chemistry)

In computational chemistry, a constraint algorithm is a method for satisfying the Newtonian motion of a rigid body which consists of mass points. A constraint algorithm ensures that prescribed distances between mass points are maintained exactly (in contrast to a restraint, which only penalizes deviations). The general approaches are: (i) choose new unconstrained coordinates (internal coordinates), (ii) introduce explicit constraint forces, or (iii) determine the constraint forces implicitly by the technique of Lagrange multipliers or projection methods.

Constraint algorithms are often applied to molecular dynamics simulations. Although such simulations are sometimes performed using internal coordinates that automatically satisfy the bond-length, bond-angle and torsion-angle constraints, simulations may also be performed using explicit or implicit constraint forces for these three constraints. However, explicit constraint forces give rise to inefficiency; more computational power is required to get a trajectory of a given length. Therefore, internal coordinates and implicit-force constraint solvers are generally preferred.

Constraint algorithms achieve computational efficiency by neglecting motion along some degrees of freedom. For instance, in atomistic molecular dynamics the lengths of covalent bonds to hydrogen are typically constrained; however, constraint algorithms should not be used if vibrations along these degrees of freedom are important for the phenomenon being studied.

Mathematical background

The motion of a set of N particles can be described by a set of second-order ordinary differential equations, Newton's second law, which can be written in matrix form

M \frac{d^2 q}{dt^2} = f = -\frac{\partial V}{\partial q}

where M is a mass matrix and q is the vector of generalized coordinates that describe the particles' positions. For example, the vector q may be the 3N Cartesian coordinates of the particle positions r_k, where k runs from 1 to N; in the absence of constraints, M would be the 3N×3N diagonal matrix of the particle masses. The vector f represents the generalized forces and the scalar V(q) represents the potential energy, both of which are functions of the generalized coordinates q.
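
For a concrete picture of the unconstrained case, the sketch below integrates these equations for point masses in Cartesian coordinates with the velocity Verlet scheme; the function name and array layout are illustrative assumptions, not part of any particular simulation package.

```python
import numpy as np

def velocity_verlet_step(x, v, m, forces, dt):
    """One unconstrained velocity Verlet step.

    x : (N, 3) positions, v : (N, 3) velocities, m : (N,) masses,
    forces : callable returning the (N, 3) force array f = -dV/dx.
    """
    f = forces(x)
    v_half = v + 0.5 * dt * f / m[:, None]          # first half-kick
    x_new = x + dt * v_half                         # drift
    f_new = forces(x_new)
    v_new = v_half + 0.5 * dt * f_new / m[:, None]  # second half-kick
    return x_new, v_new
```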

If M constraints are present, the coordinates must also satisfy M time-independent algebraic equations

g_j(q) = 0

where the index j runs from 1 to M. For brevity, these functions g_j are grouped into an M-dimensional vector g below. The task is to solve the combined set of differential-algebraic (DAE) equations, instead of just the ordinary differential equations (ODE) of Newton's second law.

This problem was studied in detail by Joseph Louis Lagrange, who laid out most of the methods for solving it.[1] The simplest approach is to define new generalized coordinates that are unconstrained; this approach eliminates the algebraic equations and reduces the problem once again to solving an ordinary differential equation. Such an approach is used, for example, in describing the motion of a rigid body; the position and orientation of a rigid body can be described by six independent, unconstrained coordinates, rather than describing the positions of the particles that make it up and the constraints among them that maintain their relative distances. The drawback of this approach is that the equations may become unwieldy and complex; for example, the mass matrix M may become non-diagonal and depend on the generalized coordinates.

A second approach is to introduce explicit forces that work to maintain the constraint; for example, one could introduce strong spring forces that enforce the distances among mass points within a "rigid" body. The two difficulties of this approach are that the constraints are not satisfied exactly, and the strong forces may require very short time-steps, making simulations inefficient computationally.
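
A minimal sketch of such a restraining-force approach, assuming a stiff harmonic spring between each constrained pair of particles, is given below; the function name, pair list and spring constant are illustrative assumptions.

```python
import numpy as np

def restraint_forces(x, pairs, d0, k_spring=1.0e5):
    """Stiff-spring forces that approximately maintain pair distances.

    pairs : list of (a, b) particle indices, d0 : target distances.
    The constraint is only satisfied approximately, and the large
    stiffness forces a very short integration time step.
    """
    f = np.zeros_like(x)
    for (a, b), d in zip(pairs, d0):
        r = x[a] - x[b]
        dist = np.linalg.norm(r)
        # Harmonic restoring force directed along the bond vector.
        fab = -k_spring * (dist - d) * r / dist
        f[a] += fab
        f[b] -= fab
    return f
```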

A third approach is to use a method such as Lagrange multipliers or projection to the constraint manifold to determine the coordinate adjustments necessary to satisfy the constraints.

Finally, there are various hybrid approaches in which different sets of constraints are satisfied by different methods, e.g., internal coordinates, explicit forces and implicit-force solutions.

Internal coordinate methods

The simplest approach to satisfying constraints in energy minimization and molecular dynamics is to represent the mechanical system in so-called internal coordinates corresponding to unconstrained independent degrees of freedom of the system. For example, the dihedral angles of a protein are an independent set of coordinates that specify the positions of all the atoms without requiring any constraints. The difficulty of such internal-coordinate approaches is twofold: the Newtonian equations of motion become much more complex and the internal coordinates may be difficult to define for cyclic systems of constraints, e.g., in ring puckering or when a protein has a disulfide bond.
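
As a small illustration of an internal coordinate, the sketch below computes a dihedral (torsion) angle from four Cartesian positions; the function name and the atan2-based formulation are assumptions chosen for numerical robustness, not taken from the references above.

```python
import numpy as np

def dihedral_angle(p0, p1, p2, p3):
    """Torsion angle (radians) defined by the four points p0-p1-p2-p3."""
    b1, b2, b3 = p1 - p0, p2 - p1, p3 - p2
    n1 = np.cross(b1, b2)                 # normal of the first plane
    n2 = np.cross(b2, b3)                 # normal of the second plane
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    # The atan2 form is stable over the full (-pi, pi] range.
    return np.arctan2(np.dot(m1, n2), np.dot(n1, n2))
```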

The original methods for efficient recursive energy minimization in internal coordinates were developed by Gō and coworkers.[2] [3]

Efficient recursive, internal-coordinate constraint solvers were extended to molecular dynamics.[4] [5] Analogous methods were applied later to other systems.[6] [7] [8]

Lagrange multiplier-based methods

In most molecular dynamics simulations that use constraint algorithms, the constraints are enforced using the method of Lagrange multipliers. Given a set of n holonomic constraints at time t,

\sigma_k(t) := \left\| x_{k\alpha}(t) - x_{k\beta}(t) \right\|^2 - d_k^2 = 0, \qquad k = 1, \ldots, n

where x_{kα}(t) and x_{kβ}(t) are the positions of the two particles involved in the kth constraint at time t, and d_k is the prescribed inter-particle distance.
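
Written out in code, one such distance constraint and its gradients with respect to the two particle positions follow directly from this definition (a sketch; the variable names are assumptions):

```python
import numpy as np

def sigma(x_a, x_b, d):
    """Holonomic distance constraint sigma = |x_a - x_b|^2 - d^2."""
    r = x_a - x_b
    return np.dot(r, r) - d * d

def sigma_gradients(x_a, x_b):
    """Gradients of sigma with respect to x_a and x_b."""
    r = x_a - x_b
    return 2.0 * r, -2.0 * r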

The forces due to these constraints are added to the equations of motion, resulting in, for each of the N particles in the system,

m_i \frac{\partial^2 x_i(t)}{\partial t^2} = -\frac{\partial}{\partial x_i} \left[ V(x_i(t)) - \sum_{k=1}^{n} \lambda_k \sigma_k(t) \right], \qquad i = 1, \ldots, N.

Adding the constraint forces does not change the total energy, as the net work done by the constraint forces (taken over the set of particles that the constraints act on) is zero. Note that the sign on λ_k is arbitrary and some references[9] have an opposite sign.

Integrating both sides of the equation with respect to time gives the constrained coordinates of the particles at time t + Δt,

x_i(t + \Delta t) = \hat{x}_i(t + \Delta t) + \sum_{k=1}^{n} \lambda_k \, \frac{\partial \sigma_k(t)}{\partial x_i} \left(\Delta t\right)^2 m_i^{-1}, \qquad i = 1, \ldots, N

where \hat{x}_i(t + Δt) is the unconstrained (or uncorrected) position of the ith particle after integrating the unconstrained equations of motion.

To satisfy the constraints σ_k(t + Δt) in the next timestep, the Lagrange multipliers must be chosen such that

\sigma_k(t + \Delta t) := \left\| x_{k\alpha}(t + \Delta t) - x_{k\beta}(t + \Delta t) \right\|^2 - d_k^2 = 0.

This implies solving the system of n non-linear equations

\sigma_j(t + \Delta t) := \left\| \hat{x}_{j\alpha}(t + \Delta t) - \hat{x}_{j\beta}(t + \Delta t) + \sum_{k=1}^{n} \lambda_k \left(\Delta t\right)^2 \left[ \frac{\partial \sigma_k(t)}{\partial x_{j\alpha}} m_{j\alpha}^{-1} - \frac{\partial \sigma_k(t)}{\partial x_{j\beta}} m_{j\beta}^{-1} \right] \right\|^2 - d_j^2 = 0, \qquad j = 1, \ldots, n

simultaneously for the n unknown Lagrange multipliers λ_k.

This system of n non-linear equations in n unknowns is commonly solved using the Newton–Raphson method, in which the solution vector \underline{λ} is updated using

\underline{\lambda}^{(l+1)} \leftarrow \underline{\lambda}^{(l)} - J_\sigma^{-1} \, \underline{\sigma}(t + \Delta t)

where J_σ is the Jacobian of the equations σ_k:

J_\sigma = \begin{pmatrix}
\dfrac{\partial \sigma_1}{\partial \lambda_1} & \dfrac{\partial \sigma_1}{\partial \lambda_2} & \cdots & \dfrac{\partial \sigma_1}{\partial \lambda_n} \\
\dfrac{\partial \sigma_2}{\partial \lambda_1} & \dfrac{\partial \sigma_2}{\partial \lambda_2} & \cdots & \dfrac{\partial \sigma_2}{\partial \lambda_n} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial \sigma_n}{\partial \lambda_1} & \dfrac{\partial \sigma_n}{\partial \lambda_2} & \cdots & \dfrac{\partial \sigma_n}{\partial \lambda_n}
\end{pmatrix}.

Since not all particles contribute to all constraints, J_σ is a block matrix and the system can be solved block by block; in other words, J_σ can be solved individually for each molecule.

Instead of constantly updating the vector \underline{λ}, the iteration can be started with \underline{λ}^{(0)} = 0, resulting in simpler expressions for σ_k(t) and ∂σ_k(t)/∂λ_j. In this case

J_{ij} = \left. \frac{\partial \sigma_j}{\partial \lambda_i} \right|_{\lambda = 0} = 2 \left[ \hat{x}_{j\alpha} - \hat{x}_{j\beta} \right] \left[ \frac{\partial \sigma_i}{\partial x_{j\alpha}} m_{j\alpha}^{-1} - \frac{\partial \sigma_i}{\partial x_{j\beta}} m_{j\beta}^{-1} \right].

λ is then updated to

\lambda_j = -J^{-1} \left[ \left\| \hat{x}_{j\alpha}(t + \Delta t) - \hat{x}_{j\beta}(t + \Delta t) \right\|^2 - d_j^2 \right].

After each iteration, the unconstrained particle positions are updated using

\hat{x}_i(t + \Delta t) \leftarrow \hat{x}_i(t + \Delta t) + \sum_{k=1}^{n} \lambda_k \, \frac{\partial \sigma_k}{\partial x_i} \left(\Delta t\right)^2 m_i^{-1}.

The vector is then reset to \underline{λ} = 0.

The above procedure is repeated until the constraint equations σ_k(t + Δt) are satisfied to within a prescribed numerical tolerance.

A number of algorithms exist for computing the Lagrange multipliers; they differ only in how they solve this system of equations, commonly using quasi-Newton methods.
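
The sketch below illustrates this Lagrange-multiplier correction for a set of pairwise distance constraints: it builds the Jacobian from the equations above and applies full Newton updates to the multipliers until every σ_j vanishes to a set tolerance. The function name, array layout and convergence settings are illustrative assumptions rather than any particular package's implementation.

```python
import numpy as np

def solve_constraints(x_hat, x_old, masses, pairs, d, dt,
                      tol=1e-10, max_iter=50):
    """Correct unconstrained positions x_hat so that all distance
    constraints |x_a - x_b| = d hold, by Newton iteration on the
    Lagrange multipliers (illustrative sketch, not an optimized solver).

    x_hat : (N, 3) unconstrained positions at t + dt
    x_old : (N, 3) positions at time t (constraint gradients use these)
    pairs : (n, 2) integer array of constrained particle indices
    d     : (n,)  prescribed distances
    """
    n = len(pairs)
    lam = np.zeros(n)
    # Constraint gradients at time t: d(sigma_k)/dx_a = 2 (x_a - x_b).
    g = 2.0 * (x_old[pairs[:, 0]] - x_old[pairs[:, 1]])   # (n, 3)

    def positions(lam):
        x = x_hat.copy()
        for k, (a, b) in enumerate(pairs):
            x[a] += lam[k] * dt**2 / masses[a] * g[k]
            x[b] -= lam[k] * dt**2 / masses[b] * g[k]
        return x

    for _ in range(max_iter):
        x = positions(lam)
        r = x[pairs[:, 0]] - x[pairs[:, 1]]                # (n, 3)
        sigma = np.einsum('ij,ij->i', r, r) - d**2
        if np.max(np.abs(sigma)) < tol:
            break
        # Jacobian J[j, k] = d sigma_j / d lambda_k.
        J = np.zeros((n, n))
        for j, (aj, bj) in enumerate(pairs):
            for k in range(n):
                ak, bk = pairs[k]
                drj = np.zeros(3)   # d(r_j)/d(lambda_k)
                if ak == aj: drj += dt**2 / masses[aj] * g[k]
                if bk == aj: drj -= dt**2 / masses[aj] * g[k]
                if ak == bj: drj -= dt**2 / masses[bj] * g[k]
                if bk == bj: drj += dt**2 / masses[bj] * g[k]
                J[j, k] = 2.0 * np.dot(r[j], drj)
        lam -= np.linalg.solve(J, sigma)   # Newton update of the multipliers
    return positions(lam)
```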

The SETTLE algorithm

The SETTLE algorithm[10] solves the system of non-linear equations analytically for n = 3 constraints in constant time. Although it does not scale to larger numbers of constraints, it is very often used to constrain rigid water molecules, which are present in almost all biological simulations and are usually modelled using three constraints (e.g. SPC/E and TIP3P water models).

The SHAKE algorithm

The SHAKE algorithm was first developed for satisfying a bond geometry constraint during molecular dynamics simulations.[11] The method was then generalised to handle any holonomic constraint, such as those required to maintain constant bond angles, or molecular rigidity.

In the SHAKE algorithm, the system of non-linear constraint equations is solved using the Gauss–Seidel method, which approximates the solution of the linear system of equations arising in the Newton–Raphson step,

\underline{\lambda} = -J_\sigma^{-1} \, \underline{\sigma}.

This amounts to assuming that J_σ is diagonally dominant and solving the kth equation only for the kth unknown. In practice, we compute

\begin{align}
\lambda_k &\leftarrow \frac{\sigma_k(t)}{\partial \sigma_k(t)/\partial \lambda_k}, \\
x_{k\alpha} &\leftarrow x_{k\alpha} + \lambda_k \frac{\partial \sigma_k(t)}{\partial x_{k\alpha}}, \\
x_{k\beta} &\leftarrow x_{k\beta} + \lambda_k \frac{\partial \sigma_k(t)}{\partial x_{k\beta}},
\end{align}

for all k = 1, …, n iteratively until the constraint equations σ_k(t + Δt) are solved to a given tolerance.

The calculation cost of each iteration is O(n), and the iterations themselves converge linearly.
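
A minimal sketch of this iteration for bond-length constraints is shown below, using the classic form in which each correction is applied along the bond direction from the previous time step and weighted by the inverse masses; the function name and loop settings are assumptions.

```python
import numpy as np

def shake_step(x_hat, x_old, inv_m, pairs, d, tol=1e-8, max_iter=500):
    """Enforce |x_a - x_b| = d one constraint at a time (Gauss-Seidel style).

    x_hat : (N, 3) unconstrained positions at t + dt
    x_old : (N, 3) positions at time t
    inv_m : (N,) inverse particle masses
    Returns the corrected positions.
    """
    x = x_hat.copy()
    for _ in range(max_iter):
        converged = True
        for (a, b), dk in zip(pairs, d):
            r = x[a] - x[b]                   # current separation
            diff = np.dot(r, r) - dk * dk     # constraint violation
            if abs(diff) > tol:
                converged = False
                r_old = x_old[a] - x_old[b]   # correction direction
                g = diff / (2.0 * (inv_m[a] + inv_m[b]) * np.dot(r, r_old))
                x[a] -= g * inv_m[a] * r_old
                x[b] += g * inv_m[b] * r_old
        if converged:
            break
    return x
```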

A noniterative form of SHAKE was developed later on.[12]

Several variants of the SHAKE algorithm exist. Although they differ in how they compute or apply the constraints themselves, the constraints are still modelled using Lagrange multipliers which are computed using the Gauss–Seidel method.

The original SHAKE algorithm is capable of constraining both rigid and flexible molecules (e.g. water, benzene and biphenyl) and introduces negligible error or energy drift into a molecular dynamics simulation.[13] One issue with SHAKE is that the number of iterations required to reach a given level of convergence rises as the molecular geometry becomes more complex. To reach 64-bit machine accuracy (a relative tolerance of 10^{-16}) in a typical molecular dynamics simulation at a temperature of 310 K, a 3-site water model with 3 constraints to maintain molecular geometry requires an average of 9 iterations (which is 3 per site per time-step). A 4-site butane model with 5 constraints needs 17 iterations (22 per site), a 6-site benzene model with 12 constraints needs 36 iterations (72 per site), while a 12-site biphenyl model with 29 constraints requires 92 iterations (229 per site per time-step).[14] Hence the CPU requirements of the SHAKE algorithm can become significant, particularly if a molecular model has a high degree of rigidity.

A later extension of the method, QSHAKE (Quaternion SHAKE), was developed as a faster alternative for molecules composed of rigid units, but it is not as general-purpose.[15] It works satisfactorily for rigid loops such as aromatic ring systems, but QSHAKE fails for flexible loops, such as when a protein has a disulfide bond.[16]

Further extensions include RATTLE,[17] WIGGLE,[18] and MSHAKE.[19]

RATTLE works the same way as SHAKE,[20] but uses the velocity Verlet time integration scheme; WIGGLE extends SHAKE and RATTLE by using an initial estimate for the Lagrange multipliers λ_k based on the particle velocities; and MSHAKE computes corrections to the constraint forces, achieving better convergence.

A final modification to the SHAKE algorithm is the P-SHAKE algorithm,[21] which is applied to very rigid or semi-rigid molecules. P-SHAKE computes and updates a pre-conditioner which is applied to the constraint gradients before the SHAKE iteration, causing the Jacobian J_σ to become diagonal or strongly diagonally dominant. The thus de-coupled constraints converge much faster (quadratically as opposed to linearly) at a cost of O(n²).

The M-SHAKE algorithm

The M-SHAKE algorithm[22] solves the non-linear system of equations using Newton's method directly. In each iteration, the linear system of equations

\underline{\lambda} = -J_\sigma^{-1} \, \underline{\sigma}

is solved exactly using an LU decomposition. Each iteration costs O(n³) operations, yet the solution converges quadratically, requiring fewer iterations than SHAKE.
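
The heart of each M-SHAKE iteration is that exact linear solve; below is a short sketch of just this step, using an explicit LU factorization (here via SciPy, purely as an illustration of the O(n³) cost per iteration; the function name is an assumption).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def newton_update(J_sigma, sigma, lam):
    """One M-SHAKE-style Newton step: solve J_sigma * delta = -sigma
    exactly with an LU decomposition and update the multipliers."""
    lu, piv = lu_factor(J_sigma)         # O(n^3) factorization
    delta = lu_solve((lu, piv), -sigma)  # O(n^2) back-substitution
    return lam + delta
```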

This solution was first proposed in 1986 by Ciccotti and Ryckaert[23] under the name "the matrix method", but it differed in the solution of the linear system of equations. Ciccotti and Ryckaert suggest inverting the matrix J_σ directly, yet doing so only once, in the first iteration. The first iteration then costs O(n³) operations, whereas the following iterations cost only O(n²) operations (for the matrix-vector multiplication). This improvement comes at a cost, though: since the Jacobian is no longer updated, convergence is only linear, albeit at a much faster rate than for the SHAKE algorithm.

Several variants of this approach based on sparse matrix techniques were studied by Barth et al.[24]

SHAPE algorithm

The SHAPE algorithm[25] is a multicenter analog of SHAKE for constraining rigid bodies of three or more centers. As in SHAKE, an unconstrained step is taken and then corrected by directly calculating and applying the rigid-body rotation matrix that satisfies

L_{\text{rigid}}\left(t + \frac{\Delta t}{2}\right) = L_{\text{nonrigid}}\left(t + \frac{\Delta t}{2}\right)

This approach involves a single 3×3 matrix diagonalization followed by three or four rapid Newton iterations to determine the rotation matrix. SHAPE provides a trajectory identical to that of fully converged iterative SHAKE, yet it is found to be more efficient and more accurate than SHAKE when applied to systems involving three or more centers. It extends the ability of SHAKE-like constraints to linear systems with three or more atoms, planar systems with four or more atoms, and to significantly larger rigid structures where SHAKE is intractable. It also allows rigid bodies to be linked with one or two common centers (e.g. peptide planes) by solving rigid-body constraints iteratively in the same basic manner that SHAKE is used for atoms involving more than one SHAKE constraint.

LINCS algorithm

An alternative constraint method, LINCS (Linear Constraint Solver) was developed in 1997 by Hess, Bekker, Berendsen and Fraaije,[26] and was based on the 1986 method of Edberg, Evans and Morriss (EEM),[27] and a modification thereof by Baranyai and Evans (BE).[28]

LINCS applies Lagrange multipliers to the constraint forces and solves for the multipliers by using a series expansion to approximate the inverse of the Jacobian J_σ:

(I - J_\sigma)^{-1} = I + J_\sigma + J_\sigma^2 + J_\sigma^3 + \cdots

in each step of the Newton iteration. This approximation only works for matrices with eigenvalues smaller than 1, making the LINCS algorithm suitable only for molecules with low connectivity.

LINCS has been reported to be 3–4 times faster than SHAKE.[26]
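
The series expansion itself is easy to illustrate: the sketch below applies the truncated series to a constraint-residual vector instead of forming an explicit inverse. The truncation order is an arbitrary assumption here, and a production LINCS implementation uses a specific sparse constraint-coupling matrix that is not reproduced in this sketch.

```python
import numpy as np

def series_solve(J_sigma, sigma, order=4):
    """Approximate (I - J_sigma)^{-1} @ sigma by the truncated series
    sigma + J sigma + J^2 sigma + ... + J^order sigma.
    This converges only if the eigenvalues of J_sigma are smaller than 1
    in magnitude, which is why LINCS suits low-connectivity molecules."""
    result = sigma.copy()
    term = sigma.copy()
    for _ in range(order):
        term = J_sigma @ term   # next power of J_sigma applied to sigma
        result += term
    return result
```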

Hybrid methods

Hybrid methods have also been introduced in which the constraints are divided into two groups; the constraints of the first group are solved using internal coordinates whereas those of the second group are solved using constraint forces, e.g., by a Lagrange multiplier or projection method.[29] [30] [31] This approach was pioneered by Lagrange,[1] and results in Lagrange equations of the mixed type.[32]

Notes and References

  1. Book: Lagrange, JL . 1788 . Mécanique analytique.

  2. Noguti T . Gō N . 1983 . A Method of Rapid Calculation of a 2nd Derivative Matrix of Conformational Energy for Large Molecules . Journal of the Physical Society of Japan . 52 . 10 . 3685–3690 . 10.1143/JPSJ.52.3685 . Toshiyuki. 1983JPSJ...52.3685N .
  3. Abe . H . Braun W. Noguti T. Gō N . 1984 . Rapid Calculation of 1st and 2nd Derivatives of Conformational Energy with respect to Dihedral Angles for Proteins: General Recurrent Equations . Computers and Chemistry . 8 . 4 . 239–247 . 10.1016/0097-8485(84)85015-9.
  4. Bae . D-S . Haug EJ . 1988 . A Recursive Formulation for Constrained Mechanical System Dynamics: Part I. Open Loop Systems . Mechanics of Structures and Machines . 15 . 3 . 359–382. 10.1080/08905458708905124 .
  5. Jain . A . Vaidehi N. Rodriguez G . 1993 . A Fast Recursive Algorithm for Molecular Dynamics Simulation . Journal of Computational Physics . 106 . 2 . 258–268 . 10.1006/jcph.1993.1106 . 1993JCoPh.106..258J.
  6. Rice . LM . Brünger AT . 1994 . Torsion Angle Dynamics: Reduced Variable Conformational Sampling Enhances Crystallographic Structure Refinement . Proteins: Structure, Function, and Genetics . 19 . 277–290 . 10.1002/prot.340190403 . 7984624 . 4. 25080482 .
  7. Mathiowetz . AM . Jain A . Karasawa N . Goddard III, WA . 1994 . Protein Simulations Using Techniques Suitable for Very Large Systems: The Cell Multipole Method for Nonbond Interactions and the Newton-Euler Inverse Mass Operator Method for Internal Coordinate Dynamics . Proteins: Structure, Function, and Genetics . 20 . 227–247 . 10.1002/prot.340200304 . 7892172 . 3. 25753031 .
  8. Mazur . AK . 1997 . Quasi-Hamiltonian Equations of Motion for Internal Coordinate Molecular Dynamics of Polymers . Journal of Computational Chemistry . 18 . 11 . 1354–1364 . 10.1002/(SICI)1096-987X(199708)18:11<1354::AID-JCC3>3.0.CO;2-K. physics/9703019 .
  9. Miyamoto . S . Kollman PA . 1992 . SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithm for Rigid Water Models . Journal of Computational Chemistry . 13 . 8 . 952–962 . 10.1002/jcc.540130805. 122506495 .
  10. Miyamoto . S . Kollman PA . 1992 . SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithm for Rigid Water Models . Journal of Computational Chemistry . 13 . 8 . 952–962 . 10.1002/jcc.540130805. 122506495 .
  11. Ryckaert . J-P . Ciccotti G. Berendsen HJC . 1977 . Numerical Integration of the Cartesian Equations of Motion of a System with Constraints: Molecular Dynamics of n-Alkanes . Journal of Computational Physics . 23 . 3 . 327–341 . 10.1016/0021-9991(77)90098-5 . 1977JCoPh..23..327R. 10.1.1.399.6868 .
  12. Yoneya . M . Berendsen HJC. Hirasawa K . A Noniterative Matrix Method for Constraint Molecular-Dynamics Simulations . Molecular Simulations . 13 . 6 . 395–405 . 10.1080/08927029408022001 . 1994 .
  13. Hammonds . KD . Heyes DM . 2020 . Shadow Hamiltonian in classical NVE molecular dynamics simulations: A path to long time stability . Journal of Chemical Physics . 152 . 2 . 024114_1–024114_15 . 10.1063/1.5139708 . 31941339 . 210333551 .
  14. Hammonds . KD . Heyes DM . 2020 . Shadow Hamiltonian in classical NVE molecular dynamics simulations: A path to long time stability . Journal of Chemical Physics . 152 . 2 . 024114_1–024114_15 . 10.1063/1.5139708 . 31941339 . 210333551 .
  15. Forester . TR . Smith W . 1998 . SHAKE, Rattle, and Roll: Efficient Constraint Algorithms for Linked Rigid Bodies . Journal of Computational Chemistry . 19 . 102–111 . 10.1002/(SICI)1096-987X(19980115)19:1<102::AID-JCC9>3.0.CO;2-T .
  16. McBride . C . Wilson MR. Howard JAK . 1998 . Molecular dynamics simulations of liquid crystal phases using atomistic potentials . Molecular Physics . 93 . 6 . 955–964 . 10.1080/002689798168655. 1998MolPh..93..955C .
  17. Hans C.. Andersen . RATTLE: A "Velocity" Version of the SHAKE Algorithm for Molecular Dynamics Calculations . Journal of Computational Physics . 1983 . 52 . 1 . 24–34 . 10.1016/0021-9991(83)90014-1 . 1983JCoPh..52...24A . 10.1.1.459.5668 .
  18. Sang-Ho . Lee . Kim Palmo . Samuel Krimm . WIGGLE: A new constrained molecular dynamics algorithm in Cartesian coordinates . Journal of Computational Physics . 210 . 1 . 2005 . 171–182 . 10.1016/j.jcp.2005.04.006 . 2005JCoPh.210..171L .
  19. S. G. . Lambrakos . J. P. Boris . E. S. Oran . I. Chandrasekhar . M. Nagumo . A Modified SHAKE algorithm for Maintaining Rigid Bonds in Molecular Dynamics Simulations of Large Molecules . Journal of Computational Physics . 85 . 1989. 2 . 473–486 . 10.1016/0021-9991(89)90160-5 . 1989JCoPh..85..473L .
  20. Benedict . Leimkuhler . Robert Skeel . Symplectic numerical integrators in constrained Hamiltonian systems . Journal of Computational Physics . 112 . 1 . 1994 . 117–125 . 10.1006/jcph.1994.1085 . 1994JCoPh.112..117L .
  21. Pedro . Gonnet . P-SHAKE: A quadratically convergent SHAKE in O(n²) . Journal of Computational Physics . 220 . 2007 . 2 . 740–750 . 10.1016/j.jcp.2006.05.032 . 2007JCoPh.220..740G .
  22. Kräutler. Vincent. W. F. van Gunsteren . P. H. Hünenberger . A Fast SHAKE Algorithm to Solve Distance Constraint Equations for Small Molecules in Molecular Dynamics Simulations. Journal of Computational Chemistry. 22. 5. 501–508. 2001. 10.1002/1096-987X(20010415)22:5<501::AID-JCC1021>3.0.CO;2-V. 6187100 .
  23. Ciccotti. G.. J. P. Ryckaert. Molecular Dynamics Simulation of Rigid Molecules. Computer Physics Reports. 4. 1986. 6. 345–392. 10.1016/0167-7977(86)90022-5. 1986CoPhR...4..346C .
  24. Barth. Eric. K. Kuczera . B. Leimkuhler . R. Skeel . Algorithms for constrained molecular dynamics . Journal of Computational Chemistry. 16. 10. 1192–1209. 1995. 10.1002/jcc.540161003. 38109923 .
  25. Tao. Peng. Xiongwu Wu . Bernard R. Brooks . Maintain rigid structures in Verlet based Cartesian molecular dynamics simulations. The Journal of Chemical Physics. 137. 13. 134110. 2012. 10.1063/1.4756796. 23039588. 2012JChPh.137m4110T . 3477181.
  26. Hess . B . Bekker H . Berendsen HJC . Fraaije JGEM . 1997 . LINCS: A Linear Constraint Solver for Molecular Simulations . Journal of Computational Chemistry . 18 . 12 . 1463–1472 . 10.1002/(SICI)1096-987X(199709)18:12<1463::AID-JCC4>3.0.CO;2-H. 10.1.1.48.2727 .
  27. Edberg . R . Evans DJ. Morriss GP . 1986 . Constrained Molecular-Dynamics Simulations of Liquid Alkanes with a New Algorithm . Journal of Chemical Physics . 84 . 12 . 6933–6939 . 10.1063/1.450613. 1986JChPh..84.6933E .
  28. Baranyai . A . Evans DJ . 1990 . New Algorithm for Constrained Molecular-Dynamics Simulation of Liquid Benzene and Naphthalene . Molecular Physics . 70 . 1 . 53–63 . 10.1080/00268979000100841. 1990MolPh..70...53B .
  29. Mazur . AK . 1999 . Symplectic integration of closed chain rigid body dynamics with internal coordinate equations of motion . Journal of Chemical Physics . 111 . 4 . 1407–1414 . 10.1063/1.479399. 1999JChPh.111.1407M .
  30. Bae . D-S . Haug EJ . 1988 . A Recursive Formulation for Constrained Mechanical System Dynamics: Part II. Closed Loop Systems . Mechanics of Structures and Machines . 15 . 4 . 481–506. 10.1080/08905458708905130 .
  31. Rodriguez . G . Jain A. Kreutz-Delgado K . 1991 . A Spatial Operator Algebra for Manipulator Modeling and Control . The International Journal of Robotics Research . 10 . 4 . 371–381 . 10.1177/027836499101000406. 2060/19900020578 . 12166182 . free .
  32. Book: Sommerfeld, Arnold . 1952 . Mechanics: Lectures on Theoretical Physics, Vol. 1 . Academic Press . New York . 978-0-12-654670-5.