## Michael P. Allen and Dominic J. Tildesley

Print publication date: 2017

Print ISBN-13: 9780198803195

Published to Oxford Scholarship Online: November 2017

DOI: 10.1093/oso/9780198803195.001.0001


# Quantum simulations

Chapter:
13 Quantum simulations
Source:
Computer Simulation of Liquids
Publisher:
Oxford University Press
DOI:10.1093/oso/9780198803195.003.0013

# Abstract and Keywords

This chapter covers the introduction of quantum mechanics into computer simulation methods. The chapter begins by explaining how electronic degrees of freedom may be handled in an ab initio fashion and how the resulting forces are included in the classical dynamics of the nuclei. The technique for combining the ab initio molecular dynamics of a small region with classical dynamics or molecular mechanics applied to the surrounding environment is explained. There is a section on handling quantum degrees of freedom, such as low-mass nuclei, by discretized path-integral methods, complete with practical code examples. The problem of calculating quantum time correlation functions is addressed. Ground-state quantum Monte Carlo methods are explained, and the chapter concludes with a forward look to the future development of such techniques, particularly to systems that include excited electronic states.

# 13.1 Introduction

The dynamics of a quantum system is described by the time-dependent Schrödinger equation

(13.1)
$$\mathrm{i}\hbar\,\frac{\partial}{\partial t}\,\Phi\big(\mathbf{r}^{(n)},\mathbf{R}^{(N)};t\big)=\mathcal{H}\,\Phi\big(\mathbf{r}^{(n)},\mathbf{R}^{(N)};t\big)$$

where Φ‎ is the complete wave function for the nuclei and the electrons. In this chapter, we use the notation $r(n)$ for the complete set of positions of all of the n electrons, and $R(N)$ for the positions of the N nuclei, to avoid any confusion with the position $r$ in space. The Hamiltonian for the system is

(13.2)
$$\mathcal{H}=-\sum_{I=1}^{N}\frac{\hbar^{2}}{2M_{I}}\nabla_{I}^{2}-\sum_{i=1}^{n}\frac{\hbar^{2}}{2m_{\mathrm{e}}}\nabla_{i}^{2}+\mathcal{V}_{n\text{-}e}\big(\mathbf{r}^{(n)},\mathbf{R}^{(N)}\big)=-\sum_{I=1}^{N}\frac{\hbar^{2}}{2M_{I}}\nabla_{I}^{2}+\mathcal{H}_{\mathrm{e}}$$

where $Vn-e$ is the sum of all the Coulombic interactions (nucleus–nucleus, electron–electron, and electron–nucleus), $me$ is the mass of the electron, $MI$ is the mass of nucleus I, and $He$ is the Hamiltonian for the electronic sub-system.

In Section 13.2, we consider approximations to the solution of eqn (13.1) using the ab initio molecular dynamics method. In this case the total wave function, Φ‎, is factored into a part depending on the electronic degrees of freedom, Ψ‎, and a part corresponding to the nuclei, χ‎. Ψ‎ is calculated for the clamped positions of the nuclei using a static electronic structure calculation. The corresponding forces on the nuclei from the electrons can be calculated and the equations of motion of the nuclei are then solved classically. In this case it is the electrons associated with the classical nuclei that are being treated quantum mechanically.

In Section 13.3, we introduce the QM/MM approach. In these methods, a small part of the system must be studied using a quantum mechanical technique (e.g. ab initio molecular dynamics) while the remainder of the system can be adequately simulated using a classical approach (e.g. classical molecular dynamics). The interesting problem is how to couple these two regions consistently.

An alternative approach to quantum mechanical systems is through the non-normalized quantum-mechanical density operator

(13.3)
$$\varrho=\exp(-\beta\mathcal{H})$$

which satisfies the Bloch equation

(13.4)
$$\frac{\partial\varrho}{\partial\beta}=-\mathcal{H}\,\varrho$$

In the coordinate representation of quantum mechanics, we may define the density matrix as

(13.5)
$$\varrho\big(\mathbf{R}^{(N)},\mathbf{R}'^{(N)};\beta\big)=\sum_{s}\chi_{s}\big(\mathbf{R}^{(N)}\big)\,\chi_{s}^{*}\big(\mathbf{R}'^{(N)}\big)\,\exp(-\beta E_{s})$$

where s is a quantum state of the system with a nuclear wave function, $χs$, and we assume the electrons associated with each nucleus remain in their ground state.

A formal solution of the Schrödinger equation, eqn (13.1), suggests that, for time-independent Hamiltonians, the quantum-mechanical propagator from time 0 to time t is

(13.6)
$$U(t)=\exp(-\mathrm{i}\mathcal{H}t/\hbar)$$

This propagator converts $Φ(0)$ to $Φ(t)$. Thus we see an analogy between the propagator of eqn (13.6) and the definition of $ϱ$, eqn (13.3), or similarly between eqn (13.1) and the Bloch equation (13.4). This isomorphism is achieved with the transformation $β→it/ℏ$. Some of the techniques discussed in this chapter use high-temperature or short-time approximations to the quantum-mechanical density matrix or propagator. Nonetheless, these techniques are often useful in low-temperature simulations where the $ℏ^2$-expansions might fail.
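For a small model Hamiltonian this temperature–time correspondence is easy to check numerically. The sketch below is our own illustration (not from the book), taking $ℏ=1$ and a random $4\times4$ symmetric matrix standing in for $\mathcal{H}$; it builds both $ϱ$ and the propagator from the same eigendecomposition and verifies the substitution $β→it/ℏ$:

```python
import numpy as np

# Toy 4x4 Hermitian "Hamiltonian" (hbar = 1 throughout)
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2
E, V = np.linalg.eigh(H)                 # H = V diag(E) V^T

def rho(beta):
    """Density operator exp(-beta*H), eqn (13.3)."""
    return (V * np.exp(-beta * E)) @ V.conj().T

def U(t):
    """Propagator exp(-i*H*t), eqn (13.6)."""
    return (V * np.exp(-1j * E * t)) @ V.conj().T

t = 0.7
assert np.allclose(U(t), rho(1j * t))    # the beta -> it/hbar isomorphism
beta = 2.0
Q = np.trace(rho(beta)).real             # partition function, eqn (13.7)
```

Because both operators are diagonal in the same eigenbasis, the analytic continuation of $β$ to $\mathrm{i}t/ℏ$ turns one into the other exactly.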

One successful approach to treating the nuclei quantum mechanically has been to use the path-integral formulation of quantum mechanics (Feynman and Hibbs, 1965). By taking the trace of $ϱ(R(N),R′(N);β)$, that is, by setting $R(N)=R′(N)$ and then integrating over $R(N)$, we obtain the quantum partition function

(13.7)
$$Q=\operatorname{Tr}\varrho=\int\mathrm{d}\mathbf{R}^{(N)}\,\varrho\big(\mathbf{R}^{(N)},\mathbf{R}^{(N)};\beta\big)$$

Using this relationship, thermodynamic properties, static structural properties, and dynamic properties may, in some circumstances, be estimated by the temperature–time analogy already discussed. In Section 13.4 we describe a simulation algorithm which has arisen directly out of this formalism. This approach is often used as a semiclassical finite-temperature technique, which will provide a measure of improvement over quantum-corrected classical results (e.g. for liquid neon). In this section, we concentrate on the application of path-integral methods to the nuclei.

Path-integral techniques can also be applied in the strongly quantum-mechanical low-temperature limit (e.g. for liquid helium, Morales et al., 2014), but special techniques have also been developed for these problems. As an example we consider a random-walk estimation of the electronic ground state in Section 13.5.

# 13.2 Ab-initio molecular dynamics

## 13.2.1 Approximate quantum dynamics

We consider the time evolution of a quantum mechanical system of nuclei and electrons. At a particular time t, the eigenfunctions of the electrons can be obtained from the solution of the time-independent electronic Schrödinger equation

(13.8)
$$\mathcal{H}_{\mathrm{e}}\,\Psi_{k}\big(\mathbf{r}^{(n)};\mathbf{R}^{(N)}\big)=E_{k}\big(\mathbf{R}^{(N)}\big)\,\Psi_{k}\big(\mathbf{r}^{(n)};\mathbf{R}^{(N)}\big)$$

for a fixed configuration $R(N)$ of the nuclei. $He$ is defined in eqn (13.2) and the eigenfunctions $Ψk$ are orthonormal. The eigenvalues Ek are the energies of the electronic sub-system in the fixed field of the nuclei (the so-called adiabatic energies).

Using the Born–Huang ansatz (Kutzelnigg, 1997), we can expand the total wave function

(13.9)
$$\Phi\big(\mathbf{r}^{(n)},\mathbf{R}^{(N)};t\big)=\sum_{\ell}\Psi_{\ell}\big(\mathbf{r}^{(n)};\mathbf{R}^{(N)}\big)\,\chi_{\ell}\big(\mathbf{R}^{(N)};t\big)$$

where the functions $χℓ$ are time-dependent coefficients in the expansion. Substitution of eqn (13.9) into eqn (13.1), followed by an integration over $r(n)$ and a neglect of the non-adiabatic coupling operators leads to the Born–Oppenheimer (BO) approximation

(13.10)
$$\mathrm{i}\hbar\,\frac{\partial\chi_{k}}{\partial t}=\Big[-\sum_{I=1}^{N}\frac{\hbar^{2}}{2M_{I}}\nabla_{I}^{2}+E_{k}\big(\mathbf{R}^{(N)}\big)\Big]\chi_{k}$$

where we can now identify χ‎ as the set of nuclear wave functions for a selected electronic state k. Eqn (13.10) is the quantum-mechanical equation of motion of the nuclei in the BO approximation. The corresponding equation for the electronic degrees of freedom is

(13.11)
$$\mathrm{i}\hbar\,\frac{\partial\Psi}{\partial t}=\mathcal{H}_{\mathrm{e}}\,\Psi$$

where Ψ‎, the electronic wave function, can be expressed in the basis of the electronic states

(13.12)
$$\Psi\big(\mathbf{r}^{(n)};\mathbf{R}^{(N)},t\big)=\sum_{\ell}c_{\ell}(t)\,\Psi_{\ell}\big(\mathbf{r}^{(n)};\mathbf{R}^{(N)}\big)$$

and the squared magnitudes of the coefficients, $|cℓ(t)|^2$, are the occupation numbers of the states.

Equations (13.10) and (13.11) can be solved, in principle, to give the quantum dynamics of the nuclei and electrons each moving in a time-dependent effective potential defined by the other part of the system. However, even for modest-sized condensed phase problems, this is not a pragmatic approach, and there are three different ways in which the problem can be simplified to make it more tractable. All of these can be termed ab initio molecular dynamics (AIMD). The first of these, Ehrenfest dynamics (Marx and Hutter, 2012), solves the quantum mechanical motion of the electrons at every step using eqn (13.11) and then uses the wave function to compute the force on each nucleus I, propagating the nuclei forward in time using classical mechanics:

(13.13)
$$M_{I}\ddot{\mathbf{R}}_{I}=-\nabla_{I}\,\langle\Psi|\mathcal{H}_{\mathrm{e}}|\Psi\rangle$$

Ehrenfest dynamics is a mean-field approach that includes non-adiabatic transitions between $Ψk$ and $Ψℓ$. One particular advantage of the Ehrenfest approach is that during the motion the wave functions, $Ψk$, remain normalized and orthogonal to one another and this orthonormality does not need to be imposed in the dynamics using constraints.

In the second approach, known as Born–Oppenheimer (BO) dynamics, the electronic wave function Ψ is restricted to the ground-state adiabatic wave function, Ψ0, at all times, corresponding to a single term in eqn (13.12). The equations of motion are

(13.14)
$$M_{I}\ddot{\mathbf{R}}_{I}=-\nabla_{I}E_{0}\big(\mathbf{R}^{(N)}\big),\qquad \mathcal{H}_{\mathrm{e}}\,\Psi_{0}=E_{0}\,\Psi_{0}$$

where the nuclei move classically on the BO potential-energy surface and E0 is obtained from the solution of the time-independent Schrödinger equation for the ground state. In contrast to Ehrenfest dynamics, the ground state has to be established at each step in the dynamics by diagonalizing the Hamiltonian, $He$.

In the third approach, developed by Car and Parrinello (1985), $⟨Ψ0|He|Ψ0⟩$ is considered to be a functional of the set of orbitals, ${ψi}$, that form the basis of the wave function. It is now possible to construct a Lagrangian in which the functional derivative with respect to these orbitals leads to the force that will drive the orbitals forward in time. This Lagrangian can be represented as

(13.15)
$$\mathcal{L}=\sum_{I}\tfrac{1}{2}M_{I}\dot{\mathbf{R}}_{I}^{2}+\mu\sum_{i}\langle\dot{\psi}_{i}|\dot{\psi}_{i}\rangle-\langle\Psi_{0}|\mathcal{H}_{\mathrm{e}}|\Psi_{0}\rangle+\sum_{ij}\Lambda_{ij}\big(\langle\psi_{i}|\psi_{j}\rangle-\delta_{ij}\big)$$

where μ‎ is a fictitious mass controlling the dynamics of the orbitals and we have included a set of constraints in the Lagrangian that will keep the underlying orbitals orthogonal and normalized during the dynamics. The equations of motion resulting from $L$ are

(13.16)
$$\mu\,\ddot{\psi}_{i}=-\frac{\delta\langle\Psi_{0}|\mathcal{H}_{\mathrm{e}}|\Psi_{0}\rangle}{\delta\psi_{i}^{*}}+\sum_{j}\Lambda_{ij}\,\psi_{j},\qquad M_{I}\ddot{\mathbf{R}}_{I}=-\nabla_{I}\,\langle\Psi_{0}|\mathcal{H}_{\mathrm{e}}|\Psi_{0}\rangle$$

where $δ/δψi∗$ indicates a functional derivative. (Note that in taking the functional derivative of the orbital kinetic energy, $ψ˙i∗$ and $ψ˙i$ are treated as independent functions.) For both BO and CP dynamics the force on the nuclei can, in many cases, be evaluated efficiently using the Hellmann–Feynman theorem

(13.17)
$$\mathbf{F}_{I}=-\nabla_{I}\,\langle\Psi_{0}|\mathcal{H}_{\mathrm{e}}|\Psi_{0}\rangle=-\langle\Psi_{0}|\nabla_{I}\mathcal{H}_{\mathrm{e}}|\Psi_{0}\rangle$$

In these equations of motion there is a real temperature associated with the kinetic energy of the nuclei and a fictitious temperature associated with the electronic degrees of freedom arising from the kinetic energy, $∑iμ⟨ψ˙i|ψ˙i⟩$. If this electronic temperature is kept low, with a suitable choice of μ, then the electronic degrees of freedom will evolve on the BO surface and we should recover the Born–Oppenheimer dynamics of eqn (13.14). We will return to the expressions for the constraint terms in eqn (13.16) when we have considered the details of the electronic wave function.
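The logic of the constrained fictitious dynamics can be illustrated with a deliberately minimal toy, our own sketch rather than anything from the book: a single normalized "orbital" c whose energy functional is the Rayleigh quotient of a fixed symmetric matrix standing in for the electronic Hamiltonian. Second-order fictitious dynamics with a constraint force and a little velocity damping anneals the orbital onto the ground state:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2                        # toy stand-in for the electronic Hamiltonian
E, V = np.linalg.eigh(H)

mu, dt, damp = 1.0, 0.02, 0.995          # fictitious mass, timestep, velocity damping
c = rng.normal(size=6)
c /= np.linalg.norm(c)
cdot = np.zeros(6)

for step in range(20000):
    lam = c @ H @ c                      # Lagrange multiplier keeping |c| = 1
    force = -H @ c + lam * c             # -dE/dc plus the constraint force
    cdot = damp * (cdot + dt * force / mu)
    c += dt * cdot
    c /= np.linalg.norm(c)               # SHAKE-like re-imposition of the constraint

overlap = abs(c @ V[:, 0])               # approaches 1: the ground eigenvector
```

With no damping the orbital would oscillate around the minimum rather than settle onto it; in genuine CP dynamics it is the separation of fictitious and ionic frequencies, not friction, that keeps the electrons near the BO surface.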

## 13.2.2 Density functional theory and the Kohn–Sham method

The use of the Born–Oppenheimer or Car–Parrinello dynamics requires a knowledge of the electronic wave function for the ground state of the system. Since the electrons are fermions, Ψ‎0 must be antisymmetric to the exchange of two electrons. This symmetry can be included by writing the full wave function as a Slater determinant of the individual wave functions of each of the n electrons

(13.18)
$$\Psi_{0}=\frac{1}{\sqrt{n!}}\begin{vmatrix}\psi_{1}(\mathbf{r}_{1})&\psi_{2}(\mathbf{r}_{1})&\cdots&\psi_{n}(\mathbf{r}_{1})\\\psi_{1}(\mathbf{r}_{2})&\psi_{2}(\mathbf{r}_{2})&\cdots&\psi_{n}(\mathbf{r}_{2})\\\vdots&\vdots&\ddots&\vdots\\\psi_{1}(\mathbf{r}_{n})&\psi_{2}(\mathbf{r}_{n})&\cdots&\psi_{n}(\mathbf{r}_{n})\end{vmatrix}$$

Traditionally, Ψ0 is then obtained using the Hartree–Fock (HF) approximation (Schaefer, 1972). This approach scales as $O(n^4)$, and more sophisticated versions of the theory, such as the configuration-interaction methods, are significantly more expensive, with the MP4 method scaling as $O(n^7)$.

The breakthrough in electronic structure calculations for condensed phase systems has been the development of density functional theory (DFT) (Hohenberg and Kohn, 1964). In DFT, the expectation value

(13.19)
$$\langle\Psi_{0}|\mathcal{K}+\mathcal{V}_{ee}|\Psi_{0}\rangle=F_{\mathrm{HK}}[\rho_{0}(\mathbf{r})]$$

defines a functional, $FHK$, of the ground-state electron density, $ρ0(r)$, that is completely independent of the environment of the electrons and only depends on the spatial coordinate, $r$. (Note that the electron density, ρ‎, should not be confused with the density operator, $ϱ$, used in other sections of this chapter.)

In eqn (13.19)

(13.20)
$Display mathematics$

where $K$ is the quantum mechanical kinetic-energy operator of the electrons and $Vee$ is the potential energy between the electrons. The corresponding energy functional,

(13.21)
$$E[\rho]=F_{\mathrm{HK}}[\rho]+\int\mathrm{d}\mathbf{r}\,V_{\mathrm{ext}}(\mathbf{r})\,\rho(\mathbf{r})$$

depends additionally on the external field due to the positively charged nuclei acting on the electrons. The first Hohenberg–Kohn theorem states that the full many-particle ground state is a unique functional of $ρ(r)$. The second H–K theorem states that the energy functional, $E[ρ]$, is a minimum when $ρ=ρ0$, the true ground-state density.

The next step is to find the universal functional $FHK$. The approach of Kohn and Sham (1965) is to replace the system of n interacting electrons by an auxiliary system of n independent electrons. We denote this non-interacting system by the subscript s, with the total energy $Es=Ks+∫drVsρs$. There will be a particular, local potential, $Vs(r)$, such that the exact ground-state density of the interacting system, ρ0, is equal to $ρs$. If the ground state of the non-interacting system is singly degenerate then

(13.22)
$$\rho_{0}(\mathbf{r})=\rho_{s}(\mathbf{r})=\sum_{i}f_{i}\,|\psi_{i}(\mathbf{r})|^{2}$$

where the sum is over the occupied orbitals, fi is the occupancy number of the orbital (0, 1, or 2), and the single-particle orbitals are obtained from the Schrödinger equation

(13.23)
$$\Big[-\frac{\hbar^{2}}{2m_{\mathrm{e}}}\nabla^{2}+V_{s}(\mathbf{r})\Big]\psi_{i}(\mathbf{r})=\varepsilon_{i}\,\psi_{i}(\mathbf{r})$$

Thus the total energy of the interacting system is

(13.24)
$$E_{\mathrm{KS}}[\rho]=K_{s}[\rho]+\int\mathrm{d}\mathbf{r}\,V_{\mathrm{ext}}(\mathbf{r})\,\rho(\mathbf{r})+\frac{1}{2}\iint\mathrm{d}\mathbf{r}\,\mathrm{d}\mathbf{r}'\,\frac{\rho(\mathbf{r})\,\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}+E_{\mathrm{xc}}[\rho]$$

The energy contains four terms: the first is the kinetic energy for the non-interacting system,

(13.25)
$$K_{s}=-\frac{\hbar^{2}}{2m_{\mathrm{e}}}\sum_{i}f_{i}\,\langle\psi_{i}|\nabla^{2}|\psi_{i}\rangle$$

the second is the energy of the electrons in the static field of the nuclei ($Vext$ also includes the interaction between the nuclei themselves, $∑I>JqIqJ/|RI−RJ|$); the third is the classical Coulomb energy between the electrons; and the fourth is the exchange–correlation energy. Note that for simplicity, we continue to set $4πϵ0=1$ in the Coulomb terms. $Exc$ is simply the sum of the error made in using a non-interacting kinetic energy and the error made in treating the electron–electron interaction classically. From eqn (13.24), the single-particle potential is given by

(13.26)
$$V_{s}(\mathbf{r})=V_{\mathrm{ext}}(\mathbf{r})+\int\mathrm{d}\mathbf{r}'\,\frac{\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}+\frac{\delta E_{\mathrm{xc}}[\rho]}{\delta\rho(\mathbf{r})}$$

This approach would be exact if we knew $Exc$ exactly. However, this is not the case, and to proceed we will need a reasonable approximation for the exchange–correlation energy. In the first instance, it is often represented using the local density approximation (LDA),

(13.27)
$$E_{\mathrm{xc}}^{\mathrm{LDA}}[\rho_{0}]=\int\mathrm{d}\mathbf{r}\,\rho_{0}(\mathbf{r})\,\varepsilon_{\mathrm{xc}}[\rho_{0}(\mathbf{r})]$$

where $εxc[ρ0(r)]$ is the exchange–correlation functional for a uniform electron gas, per electron, at a particular density, $ρ0(r)$. $εxc[ρ0(r)]$ is the sum of an exchange term, which is known exactly for the homogeneous electron gas, and a correlation term that can be accurately calculated by quantum Monte Carlo (Ceperley and Alder, 1980), and readily parameterized (Perdew and Zunger, 1981). The LDA approach is surprisingly accurate and this is probably due to a cancellation of errors in the exchange and correlation terms (Harrison, 2003).

A significant improvement in $Exc$ can be obtained by using the generalized gradient approximations (GGA). These approximations are still local but take into account the gradient of the density at the same coordinate (Harrison, 2003). The typical form for a GGA functional is:

(13.28)
$$E_{\mathrm{xc}}^{\mathrm{GGA}}[\rho_{0}]=\int\mathrm{d}\mathbf{r}\,f_{\mathrm{xc}}\big(\rho_{0}(\mathbf{r}),\nabla\rho_{0}(\mathbf{r})\big)$$

Considerable ingenuity has produced accurate, locally based functionals, $fxc$, such as the PW91 functional of Perdew et al. (1992) which contains an accurate description of the Fermi holes resulting from the exclusion principle. There are also a number of important semi-empirical approaches; the most widely used of these, Becke–Lee–Yang–Parr (BLYP), uses the exchange functional of Becke (1988) combined with the correlation functional of Lee et al. (1988). There is also an important class of hybrid functionals, such as Becke, 3-parameter, Lee–Yang–Parr (B3LYP), in which the Becke exchange functional is mixed with the energy from Hartree–Fock theory. These hybrid functionals have been implemented in MD calculations (Gaiduk et al., 2014), but are expensive to employ. The development of the simple GGA-based functionals has allowed the application of DFT to real problems in chemical bonding and they are now widely used in ab initio MD methods.

Now that all the elements of eqn (13.26) are in place, a simple approach to calculating the ground-state energy for a given nuclear configuration would be as follows:

(a) starting from a guess for $ρ0(r)$, calculate the single-particle potential, $Vs(r)$, using eqn (13.26);

(b) solve the single-orbital Schrödinger equations, eqn (13.23), by diagonalizing the KS Hamiltonian;

(c) calculate a new estimate of the ground-state density from eqn (13.22);

(d) iterate until convergence and calculate the ground-state energy from eqn (13.24).

The time for this calculation is determined by the diagonalization step, which scales as $O(n^3)$.
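The cycle (a)–(d) can be mimicked with a deliberately crude one-dimensional model, our own sketch rather than anything from the book: one doubly occupied orbital on a grid, a harmonic external potential, and the Hartree plus exchange–correlation terms replaced by a simple local repulsion $gρ(x)$, purely to give the loop something to converge:

```python
import numpy as np

M, L, g = 200, 10.0, 1.0
x = np.linspace(0.0, L, M)
h = x[1] - x[0]
Vext = 0.5 * (x - L / 2) ** 2                  # external potential of the "nuclei"
# finite-difference kinetic energy, -1/2 d^2/dx^2 (hartree atomic units)
T = (np.diag(np.full(M, 1.0 / h**2))
     - np.diag(np.full(M - 1, 0.5 / h**2), 1)
     - np.diag(np.full(M - 1, 0.5 / h**2), -1))

rho = np.full(M, 2.0 / L)                      # (a) initial guess for the density
for it in range(500):
    Vs = Vext + g * rho                        # (a) effective single-particle potential
    eps, psi = np.linalg.eigh(T + np.diag(Vs)) # (b) diagonalize the KS-like Hamiltonian
    phi = psi[:, 0] / np.sqrt(h)               # grid-normalized lowest orbital
    rho_new = 2.0 * phi**2                     # (c) new density, occupancy f = 2
    if np.max(np.abs(rho_new - rho)) < 1e-9:   # (d) self-consistency reached?
        break
    rho = 0.5 * rho + 0.5 * rho_new            # simple linear density mixing
```

Real DFT codes replace the naive linear mixing with more robust schemes (e.g. Pulay/DIIS) and, of course, a genuine exchange–correlation functional; the structure of the loop is the same.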

Before leaving this section, we must address two further, important, considerations: representation of the single-particle orbitals, $ψi$, in the KS approach, and the use of pseudopotentials to represent the tightly bound electrons. The single-electron orbitals are often expanded in Gaussians or plane waves that fit neatly into the periodic boundary conditions of the simulation. For plane waves

(13.29)
$Display mathematics$

where $k=2πn/L$ is a reciprocal lattice vector of the MD cell, the wavevector $g$ is in the first Brillouin zone of the reciprocal lattice of the MD cell, and the coefficient $cig$ is a complex number. For ordered systems, we need to consider wave functions corresponding to many different points in the Brillouin zone, but for disordered or liquid-like systems, in a large MD cell, it is possible to work at the zone centre, $g=0$. In this case, the orbitals are simply

(13.30)
$$\psi_{i}(\mathbf{r})=\sum_{\mathbf{k}}c_{i}(\mathbf{k})\,\exp(\mathrm{i}\mathbf{k}\cdot\mathbf{r})$$

and $ci(−k)=ci∗(k)$. The KS potential converges rapidly with increasing $k$, and in practical calculations it is necessary to choose a cutoff energy, $Ecut$, such that

(13.31)
$$\frac{\hbar^{2}|\mathbf{k}|^{2}}{2m_{\mathrm{e}}}\leq E_{\mathrm{cut}}$$

The precision of the density functional calculation is determined by the choice of $Ecut$.

There are many other orthonormal, localized functions that can be used to expand $ψi$; these include Gaussians and wavelets. At present there is considerable interest in the use of Wannier functions, the Fourier transforms of the Bloch eigenstates, as a basis set (Martin, 2008, Chapter 21). In principle, these alternative basis sets might be more efficient at describing highly inhomogeneous charge distributions. However, the plane-wave basis is versatile and accurate and is still widely used. For example, for plane waves, the kinetic-energy term in $EKS$ can be written simply in terms of the coefficients, $ci(k)$,

(13.32)
$$K=\sum_{i}f_{i}\sum_{\mathbf{k}}\frac{\hbar^{2}|\mathbf{k}|^{2}}{2m_{\mathrm{e}}}\,|c_{i}(\mathbf{k})|^{2}$$

and the Pulay force (the correction to the Hellmann–Feynman approximation) vanishes exactly. A Gaussian basis also provides an analytical expression for $K$ and the Coulomb potential for an isolated system.

The electrons around the nucleus can be divided into two types: core electrons that are strongly bound to a particular nucleus (e.g. the 1s electrons in Cl); and the valence electrons that are polarizable and take part in the formation of new bonds (e.g. the 3pz electron in Cl). In AIMD, it is efficient to concentrate on the evolution of the valence electrons and to combine the core electrons with the nucleus by defining a pseudopotential. This potential, $vPP(r)$, is the potential acting on the valence electrons from the nucleus and the core electrons. In our calculations, it will replace the potential of the bare nuclear charge when constructing the $Vext$ as used in eqns (13.24) and (13.26). In the rest of this section, when we discuss the dynamics of the ions, we mean the dynamics of the positively charged nucleus and the core electrons.

A useful pseudopotential for a particular atom is calculated by comparing the solutions of the Schrödinger equation for the all-electron system and for the corresponding system with a trial pseudopotential. An accurate and transferable $vPP(r)$ is designed so that, at large distance from the nucleus (outside a core region, $r>Rc$), the pseudo-wavefunction matches the all-electron wave function and that, in the core region, the pseudo-wavefunction produces the same charge as the all-electron wave function (i.e. it is norm-conserving). In addition, the pseudopotential is chosen to be as smooth as possible to minimize the number of plane waves required to describe it accurately.

The overall pseudopotential around a nucleus is often constructed separately for each angular momentum component, $ℓ,m$ of the wave function

(13.33)
$$v^{\mathrm{PP}}(r)=\sum_{L}v_{L}(r)\,\mathsf{P}_{L}$$

where L is a combined index ${ℓ,m}$, $vL(r)$ is the pseudopotential for a particular angular momentum channel and $PL=|L⟩⟨L|$ is the projection operator for the angular momentum which picks out a particular state, L. Note that $vL(r)$ is equal to the L-independent, all-electron potential outside the core and $vL(r)→−qion/r$ as $r→∞$, where $qion$ is the net charge on the ion. The ionic pseudopotential can be split into a local L-independent part, which contains all the effects of the long-range Coulomb potential, and a non-local part, $ΔvL(r)$

(13.34)
$$v^{\mathrm{PP}}(r)=v_{\mathrm{local}}(r)+\sum_{L}\Delta v_{L}(r)\,\mathsf{P}_{L}$$

where $ΔvL(r)=0$ for $r>Rc$ and $L≥Lmax$. Normally $Lmax$ is set to 1 for first-row atoms and 2 or 3 for heavier atoms. $vlocal(r)$ can be readily included in the KS energy

(13.35)
$Display mathematics$

The non-local contribution to the pseudopotential can be calculated by projecting the operator onto a local basis set, using the Kleinman–Bylander projection (Kleinman and Bylander, 1982). The energies and forces associated with the non-local part of the pseudopotential are readily evaluated in k-space. Using these approaches it is straightforward to generate pseudopotentials fitted to simple analytical forms such as a combination of polynomials and Gaussians, and examples of such calculations are available for many different atom types (Gonze et al., 1991; Goedecker et al., 1996).

This short introduction to pseudopotentials has allowed us to establish the vocabulary and principles associated with their use in ab initio MD. Fuller accounts of their properties can be found in Marx and Hutter (2012, Chapter 4) and Martin (2008, Chapter 11).

## 13.2.3 Car–Parrinello dynamics revisited

As we have seen in the last section, the electronic structure problem can be efficiently solved for a given position of the ions using DFT. The development of the Car–Parrinello (CP) method, in 1985, was a breakthrough that avoided the explicit diagonalization of the KS Hamiltonian. There are a number of excellent reviews (Remler and Madden, 1990; Galli and Parrinello, 1991; Galli and Pasquarello, 1993) and a book (Marx and Hutter, 2012) covering the CP method and we provide a short reprise here.

For a plane-wave basis set, the Lagrangian, eqn (13.15) can be written as

(13.36)
$Display mathematics$

where $Λij$ is the Lagrange multiplier that keeps the orbitals, $ψi$ and $ψj$, orthonormal. The resulting equations of motion are

(13.37a)
$$\mu\,\ddot{c}_{i}(\mathbf{k})=-\frac{\partial E_{\mathrm{KS}}}{\partial c_{i}^{*}(\mathbf{k})}+\sum_{j}\Lambda_{ij}\,c_{j}(\mathbf{k})$$

(13.37b)
$$M_{I}\ddot{\mathbf{R}}_{I}=-\nabla_{I}E_{\mathrm{KS}}$$

Equation (13.37a) describes the motion of the Fourier components of the plane-wave expansion in time and eqn (13.37b) the motion of the ions. These coupled equations can be solved using the Verlet or velocity Verlet algorithm and the constraints can be applied using a standard technique such as SHAKE or RATTLE (Tuckerman and Parrinello, 1994). These algorithms are discussed in Chapter 3.

A number of parameters need to be fixed at the beginning of a Car–Parrinello MD simulation, and we use a number of studies of liquid water to illustrate some of these choices. The first consideration would be the basis set used to describe the electronic structure. For example, Laasonen et al. (1993) and Zhang et al. (2011a) both used a plane-wave basis with the simplicity and advantages already discussed. In contrast, Lee and Tuckerman (2006) have employed a discrete variable representation (DVR) of the basis. This approach combines some of the localization of the density that would be typical of a Gaussian basis set with the orthogonality offered by the plane-wave functions. This choice improves energy conservation for a particular $Ecut$ and grid size, but avoids problems with the Pulay forces and the basis set superposition errors associated with an atom-centred basis set. Second, the exchange–correlation functional for the calculations needs to be chosen. The LDA is not appropriate for liquid water, but Laasonen et al. (1993) used the GGA method of Becke, while Lee and Tuckerman (2006) used BLYP. Zhang et al. (2011b) have employed the hybrid functional PBE0, which mixes in a fraction of Hartree–Fock exchange energy, and a more advanced functional designed to improve the modelling of the O–O dispersion interactions. The time required for the simulations increases significantly as the functional becomes more complicated and accurate, so that the simulations using more advanced functionals are often performed on as few as 32 H2O molecules. Water simulations are typically performed at the zone centre, $g=0$, with between 32 and 100 molecules, and take into account explicitly the four electronic states per molecule corresponding to the valence electrons of the oxygen atom.
The hydrogen atoms (nucleus and electron) and the oxygen nucleus and core electrons can be represented by pseudopotentials of the type developed by Troullier and Martins (1991) and others (Vanderbilt, 1985; Goedecker et al., 1996). For H, the wave function corresponding to its pseudopotential is smooth, avoiding the kink at the nuclear position (associated with the divergence of the Coulomb potential) and it can be represented with fewer plane waves. The accuracy of the calculation is controlled by the choice of $Ecut$ or the number of wavevectors representing each electronic state. $Ecut$ might range from 35 Ha to 150 Ha. (Note that, in much of the literature, the cutoff energy is given in either hartree or rydberg units, where $1 Ha = 2 Ry = 4.3597×10^{−18} J$.) From eqn (13.31), the number of $k$ vectors corresponding to a particular energy cutoff is given by

(13.38)
$Display mathematics$

where, in this equation, V is the volume corresponding to a single molecule of water, and V and $Ecut$ are in atomic units (bohr$^3$ and hartree, respectively). For a cutoff of 35 Ha, each of the electronic states associated with each water molecule in the simulation requires about 1000 wavevectors. Note that this number takes into account the symmetry in the coefficients at the zone centre. The density,

(13.39)
$$\rho_{0}(\mathbf{r})=\sum_{i}f_{i}\,|\psi_{i}(\mathbf{r})|^{2}=\sum_{i}f_{i}\sum_{\mathbf{k},\mathbf{k}'}c_{i}^{*}(\mathbf{k}')\,c_{i}(\mathbf{k})\,\exp\big(\mathrm{i}(\mathbf{k}-\mathbf{k}')\cdot\mathbf{r}\big)$$

and the corresponding single-particle potential $Vs(r)$ vary twice as fast as the wave function and would require 8000 wavevectors per electronic state.
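The counting behind these numbers is easy to reproduce by enumerating the wavevectors $k=2πn/L$ of a cubic cell that satisfy eqn (13.31). The sketch below is our own, in hartree atomic units; 202 bohr³ is roughly the volume per molecule in liquid water at 997 kg m⁻³:

```python
import itertools
import math

def count_plane_waves(volume_bohr3, ecut_ha):
    """Count k = 2*pi*n/L in a cubic cell with |k|^2/2 <= E_cut (a.u.)."""
    L = volume_bohr3 ** (1.0 / 3.0)
    kcut = math.sqrt(2.0 * ecut_ha)              # hbar = m_e = 1
    nmax = int(kcut * L / (2.0 * math.pi)) + 1
    count = 0
    for n in itertools.product(range(-nmax, nmax + 1), repeat=3):
        k2 = sum((2.0 * math.pi * ni / L) ** 2 for ni in n)
        if 0.5 * k2 <= ecut_ha:
            count += 1
    return count

n_pw = count_plane_waves(202.0, 35.0)            # ~2000 wavevectors at 35 Ha
# at the zone centre c(-k) = c*(k), so only about half are independent: ~1000
```

The count is always odd (the $k=0$ term plus $±k$ pairs), and halving it recovers the roughly 1000 independent coefficients per electronic state quoted in the text.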

In a plane-wave calculation, the real-space grid spacing is (Lee and Tuckerman, 2006)

(13.40)
$$\ell=\frac{\pi}{2\sqrt{2E_{\mathrm{cut}}}}$$

where ℓ is in bohr and $Ecut$ in hartrees. So, for example, a single water molecule in a small cubic box at a density of $997 kg m^{−3}$ with an orbital cutoff of 35 Ha would require $(31)^3$ grid points (the number of grid points scales linearly with the number of water molecules in the simulation). The DVR basis can use a much coarser grid for the real-space density than the plane-wave basis (Lee and Tuckerman, 2006) at the same level of accuracy as judged by the energy conservation in the simulation.

Finally, the timestep for the CP simulation used by Lee and Tuckerman (2007) is 0.05 fs (or 2.07 a.u.). This is used with a fictitious mass $μ=500$ a.u. (see eqn (13.37a)). The earlier work of Laasonen et al. (1993) used a timestep of 0.169 fs (6.99 a.u.). In order to achieve a separation of the ionic and electronic kinetic energies with this long timestep, they used a fictitious mass $μ=1100$ a.u. and doubled the mass of the protons to simulate D2O rather than water.

The starting configuration for the CP simulation comprises the ionic positions and velocities and the initial values of all the $ci(k)$ and their first time derivatives at $t=0$. The choice of the ionic positions and velocities follows the ideas developed in Sections 5.6.1 and 5.6.2. The coefficients for the wave functions can be chosen at random. It is possible to reflect the importance of the different plane waves by sampling from a Gaussian distribution in $|k|$. Alternatively one can superimpose the electron densities of the underlying atomic configuration to produce an estimate of the overall density and diagonalize the corresponding KS matrix in any reasonable basis to give the starting coefficients. Marx and Hutter (2012) suggest using the pseudo-atom densities and the pseudo-atomic wave functions, already used in the calculation of the pseudopotential, in this context. The velocities of the coefficients at $t=0$ can be set consistently by moving forward and backward one timestep, calculating the $ci(k,−δt)$ and $ci(k,+δt)$ from $R(N)(−δt)$ and $R(N)(+δt)$ and using these coefficients to estimate $c˙i(k,0)$ (Remler and Madden, 1990).

Once the simulation parameters and the starting configuration are established the simulation itself proceeds like a classical MD calculation. The heart of a typical CP MD code is as follows. At $t=0$ an initial guess is made for $ψi=∑kci(k)exp(ik⋅r)$. Then, at each subsequent timestep the following operations are carried out.

1. Calculate $ρ0(r)$, $Vs[ρ0(r)]$, $EKS[ρ0(r)]$, $∇IEKS$.

2. For each electronic state i and for each plane wave $k$:

    (a) calculate $∂EKS/∂ci∗(k)$;

    (b) integrate $μc¨i(k)=−∂EKS/∂ci∗(k)$.

3. Orthogonalize wave functions.

4. For each ion, integrate $MIR¨I=−∇IEKS$.

5. Compute averages.

At a particular timestep, the $ci(k)$ are used to calculate $ρˆ0(k)$, which is transformed to produce $ρ0(r)$ at every point on a real-space lattice and hence $EKS$ (eqn (13.24)). The force on the ions is calculated from the derivatives of the energy with respect to the ionic positions. This force is determined by the derivatives of the local and non-local pseudopotentials and the electrostatic energy with respect to $RI$; the kinetic energy and exchange energy make no contribution. We recall that the second and third terms in $EKS$, eqn (13.24), are long-ranged and the energies and forces resulting from these electrostatic interactions are calculated using the Ewald summation method, or one of the other techniques described in Chapter 6.

Looping over all of the electronic states i and wavevectors $k$, the force on the wave function coefficients, $−∂EKS/∂ci∗(k)$, is calculated from all the terms in eqn (13.24). For example, for a plane-wave basis, the force from the kinetic energy,

(13.41)
$$-\frac{\partial K}{\partial c_{i}^{*}(\mathbf{k})}=-f_{i}\,\frac{\hbar^{2}|\mathbf{k}|^{2}}{2m_{\mathrm{e}}}\,c_{i}(\mathbf{k})$$

is evaluated in k-space and the same is true of the electronic force associated with the non-local pseudopotential. In contrast the electronic forces involving the local pseudopotential, the Coulomb and the exchange energies are convolutions (double sums in reciprocal space). They are most efficiently evaluated by multiplying the corresponding real-space functions and then Fourier transforming to obtain the force. For the precise pathway, through real and reciprocal space to $∂EKS/∂ci∗(k)$, see Payne et al. (1989) and Galli and Pasquarello (1993, Fig. 7). The ability to move backwards and forwards between real and reciprocal space using fast Fourier transforms is at the heart of these molecular dynamics calculations and can often be achieved efficiently using standard scientific libraries tuned for a particular machine.

Once the forces on the coefficients are calculated, the coefficients are moved forwards in time without constraints using, say, the velocity Verlet algorithm, and the orthogonalization is applied using the RATTLE algorithm. Finally, the forces on the ions are used to advance the ionic positions.
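In a CP code the orthonormality is restored by RATTLE, as just described, but the effect of the orthogonalization step itself can be shown with the simpler Löwdin symmetric orthogonalization (our own illustration, not the book's algorithm), which of all orthonormal sets picks the one closest to the input orbitals:

```python
import numpy as np

def lowdin(C):
    """Orthonormalize the columns of C via C -> C (C^H C)^(-1/2)."""
    S = C.conj().T @ C                  # overlap matrix of the orbitals
    w, U = np.linalg.eigh(S)            # S = U diag(w) U^H, w > 0
    return C @ (U * w**-0.5) @ U.conj().T

rng = np.random.default_rng(2)
C = rng.normal(size=(50, 4)) + 0j       # 4 "orbitals" on a 50-component basis
C = lowdin(C)
# C.conj().T @ C is now the 4x4 identity
```

Unlike Gram–Schmidt, this treats all orbitals symmetrically, which is why it is the standard choice when no orbital should be privileged; RATTLE achieves the same orthonormality while also conserving the constrained dynamics.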

The hallmark of a properly functioning MD simulation is that the energy associated with the Lagrangian, eqn (13.36), should be conserved. That is

(13.42)
$Display mathematics$

where $Eelec$ is the sum of the last three terms in eqn (13.24), including the potential between the ions. $Etotal$ should not drift, and should remain constant to ca. $\pm 10^{-5}$ Ha over the course of the simulation. Remler and Madden (1990) have pointed out that, for a well-defined trajectory on the BO surface, it is

(13.43)
$Display mathematics$

that should be conserved, and that this can only be achieved if the fictitious kinetic energy is several orders of magnitude smaller than the variation in $Ereal$. For example, Fig. 2 of Lee and Tuckerman (2007) shows the clear separation of the kinetic energies associated with the ionic and electronic degrees of freedom and excellent overall conservation of energy over the 60 ps of a simulation of water. The separation of the characteristic frequencies of the ionic and electronic degrees of freedom prevents energy exchange between these systems and is at the root of the success of the CP method.
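In practice, the conservation of eqn (13.42) can be monitored by fitting a straight line to the conserved quantity over the run. A minimal diagnostic sketch (function and variable names are our own):

```python
import numpy as np

def energy_drift(times_ps, e_total):
    """Check the conserved quantity of eqn (13.42) for drift.

    Fits a straight line to E_total(t). A slope such that
    slope * run_length is well below the tolerance (ca. 1e-5 Ha)
    indicates acceptable conservation.
    Returns (drift per ps, rms fluctuation about the fit).
    """
    slope, intercept = np.polyfit(times_ps, e_total, 1)
    resid = e_total - (slope * times_ps + intercept)
    return slope, np.sqrt(np.mean(resid**2))
```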

The CP method just described samples states in the microcanonical ensemble with $Etotal$ constant. It is also possible to simulate at constant NVT (Tuckerman and Parrinello, 1994) using the Nosé–Hoover chain (Martyna et al., 1992) described in Section 3.8.2, where the heat bath is coupled to the ionic degrees of freedom. Normally, one would not apply a separate thermostat to the electrons since this can disturb the correlation between the electronic and ionic motion, resulting in an additional friction on the ions with a corresponding flow of energy from the ionic system to the electrons. When the gap between the electronic and nuclear degrees of freedom is very small, separate heat baths with different chain lengths can be coupled to the ions and the electrons to ensure that the system remains on the BO surface.

CP simulations in the constant-NPT ensemble can be performed with isotropic changes in a cubic box (Ghiringhelli and Meijer, 2005), or with varying box shape (Focher et al., 1994). In a constant-NPT simulation the volume of the cell changes, and therefore the plane-wave basis ($2πn/L$) changes. This creates a Pulay force: because the basis depends on the cell dimensions, the Hellmann–Feynman theorem no longer gives the complete force. This problem can be avoided by imposing an effective energy cutoff, $E_{cut}^{eff} ≪ E_{cut}$ (eqn (13.31)), and smoothly suppressing the contribution of all plane waves above $E_{cut}^{eff}$ (Marx and Hutter, 2012, p. 187).

## 13.2.4 Summary

The CP method is a powerful technique in the simulation of liquids and solids. For example, it can be used to calculate equilibrium and structural properties of water (Pan et al., 2013; Alfe et al., 2014) and the time correlation functions and their corresponding condensed-phase spectra (Zhang et al., 2010). It can be readily extended to consider fluids in confined geometries and pores (Donadio et al., 2009) and to study mixtures such as ions in aqueous solution (Kulik et al., 2012).

The development of the CP method in 1985 was the historical breakthrough in AIMD. However, once that Rubicon had been crossed, a reexamination of BO dynamics confirmed the power of this direct approach, and BO dynamics is the most widely used method at the present time. BO dynamics, eqn (13.14), implies a full diagonalization of the KS matrix for a fixed position of the ions. This can be recast as a constrained minimization with respect to the orbitals

(13.44)
$Display mathematics$

which can be efficiently solved using a conjugate gradient method (Teter et al., 1989; Arias et al., 1992). The speed of these minimizations depends on the quality of the initial guess for the orbitals for a given ionic configuration $RI(N)$ and this can be improved by efficiently extrapolating the electronic configuration or the density matrix forward in time (Payne et al., 1992; VandeVondele et al., 2005).
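A minimal sketch of such an extrapolation, assuming a simple two-point linear scheme (one member of the family discussed by Payne et al. (1992); the names and the re-orthonormalization by QR are our own illustrative choices):

```python
import numpy as np

def extrapolate_guess(c_prev, c_curr):
    """Guess the orbital coefficients at the next ionic step by linear
    extrapolation, c_guess(t+dt) = 2 c(t) - c(t-dt), then restore
    orthonormality of the orbitals (columns) before the constrained
    minimization of eqn (13.44) starts.

    c_prev, c_curr: (n_basis, n_orbitals) coefficient matrices.
    """
    guess = 2.0 * c_curr - c_prev
    q, _ = np.linalg.qr(guess)   # re-orthonormalize the columns
    return q
```

A better starting guess reduces the number of conjugate-gradient iterations needed at each ionic configuration, which is precisely where the cost of BO dynamics lies.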

Which of these methods is most efficient in terms of the computer time required to solve a particular problem? This issue is discussed in some detail by Marx and Hutter (2012, section 2.6) and we simply highlight a number of observations from their discussion. We start from the premise that the method which allows us to use the longest timestep will allow us to calculate more accurate properties for a particular computing budget. CP dynamics can be conducted with a timestep approximately one order of magnitude larger than that used with Ehrenfest dynamics (eqn (13.13)). The timestep advantages of BO dynamics over CP dynamics are more pronounced for heavier nuclei: in this case there is an approximate order-of-magnitude advantage in favour of the BO approach. Model studies of Si employed a timestep at least five times larger than for the CP method, with only a small loss in the quality of the energy conservation.

However, in the important case of water, the longer timestep that can be employed in BO dynamics is outweighed by the number of iterations required to achieve convergence of the minimizations, and CP dynamics is more efficient by a factor of between 2 and 4 at comparable levels of energy conservation. For water, the choice of CP over BO depends on how tightly one wants to control the energy conservation for a given expenditure of computer time. Those from a classical simulation background will probably feel most comfortable with the higher level of energy conservation provided by CP dynamics. As one would expect, both methods produce the same results, within the estimated error, for the static and dynamic properties of water (Kuo et al., 2006). The issue of the choice of timestep to control the energy conservation is particularly important when one wants to sample accurately in a particular ensemble or to calculate time-dependent properties.
Both BO and CP dynamics are also frequently used as simulated annealing techniques to reach a minimum energy state particularly in the study of solids. In this case the BO approach offers some clear advantages. When comparing different techniques, the implementation of multiple timestep methods may also tip the balance one way or the other (Luehr et al., 2014).

This section has highlighted some of the complexities in developing AIMD codes, and it is unlikely that a single researcher is going to create a new AIMD code from scratch. Fortunately, there are many packages available that can perform BO, CP, or Ehrenfest dynamics, with a variety of pseudopotentials and basis sets. We have listed a number of these in Table 13.1, along with web references that will allow the interested reader to access the software under the terms of its distribution.

Table 13.1 A number of codes available for performing density functional calculations and AIMD. This list is not complete and is merely representative of the range of packages available. A number of these are free for academic users in particular countries and some are also available through commercial suppliers. The precise terms and conditions for use are available at the referenced websites.

Code

Functionality

Source

ABINIT

A DFT electronic structure code, with pseudopotentials, plane waves and wavelets; relaxation by BO dynamics, TDDFT.

www.abinit.org/

CASTEP

A DFT electronic structure code with pseudopotential, plane waves, direct minimization of the KS energy functional; a wide range of spectroscopic features.

www.castep.org/

VASP

A DFT electronic structure code with BO dynamics; plane-wave basis, pseudopotentials, hybrid functionals; robust code for non-experts.

www.vasp.at/

SIESTA

A DFT electronic structure code, with relaxation and BO dynamics; linear scaling; based on numerical atomic orbitals.

departments.icmab.es/leem/siesta/

CPMD

AIMD code with DFT; CP dynamics, BO dynamics, Ehrenfest dynamics; plane-wave basis, PIMD, QMMM.

www.cpmd.org/

CP2K

AIMD; a mixed Gaussian and plane-wave approach, BO dynamics.

www.cp2k.org/

Quantum Espresso

CP dynamics, BO dynamics, plane-wave basis; quantum transport; normal DFT features.

www.quantum-espresso.org/

Throughout this discussion, we have adopted the common ‘ab initio’ nomenclature for this class of molecular dynamics. It is worth reflecting on how ab initio these methods really are. There is considerable freedom in choosing the exchange functional and in choosing the combination of a particular exchange functional with a pseudopotential and a basis set. In many recent papers one observes some ingenuity in adjusting these combinations to give the best possible fit to a particular experimental observable. This in itself is no mean feat because the simulations are often on small system sizes for quite short times. It is difficult to ascribe a disagreement with experiment unambiguously to either a limitation in the methodology or a particular choice in constructing the KS energy. Unquestionably, these methods have opened up important new areas such as aqueous electrochemistry, ionic liquids, and liquid metals to the power of accurate simulation, but there is still some way to go before we have full control of the model required to study a new system without recourse to adjustment against experiment.

Finally, we note that the techniques developed for the ab initio dynamics of quantum systems can be applied with good effect to more classical systems, containing the type of induced interactions described by eqn (1.36). In such cases (see Section 3.11), we can perform a molecular dynamics on the ions and allow the induced dipole moments to follow this motion on the appropriate Born–Oppenheimer surface (Salanne et al., 2012).

# 13.3 Combining quantum and classical force-field simulations

The techniques described in Section 13.2 are computationally demanding and are normally applied to small systems. In modelling a large system such as a solvated protein, it is possible to treat a particular region of interest using a quantum mechanical (QM) approach and to model the environment around this region classically. An example is the transformation of an unconjugated α-keto acid to its conjugated isomers by enzyme catalysis in aqueous solution (Siegbahn and Himo, 2009). It is essential to model the acid molecule and the four important residues of the enzyme, 4-oxalocrotonate tautomerase, in a full QM way to capture the making and breaking of the bonds. However, it would not be efficient or necessary to include other parts of the enzyme and the solvent in such detail, and these can usefully be represented by a classical force field.

The idea of embedding a quantum mechanical sub-system in a larger classical system was first developed for static calculations by Warshel and Levitt (1976). For this reason it is often described as the QMMM method (Groenhof, 2013). In this approach, the energy of the QM part of the system can be calculated using post-HF methods, DFT, or semi-empirical quantum methods such as AM1 and CNDO. The dynamics in the quantum region can be explored using the BO or CP methods described in Section 13.2. The classical region can be treated using a standard force field and an MD calculation. In this section, we will focus on the use of DFT methods in the quantum region. Fig. 13.1 shows a QM system embedded in a cluster of classical atoms and molecules. The choice of the QM sub-system is always somewhat arbitrary, and its definition often requires an initial insight into the problem at hand. The sub-system can be defined either by labelling atoms or by defining a geometrical region to be treated quantum mechanically. For the moment, we will assume a fixed QM region with no movement of molecules across the QMMM boundary. If the molecules in the two regions are distinct, the forces across the boundary will normally consist of dispersion interactions and electrostatic interactions. Often, however, the boundary cuts a covalent bond, in which case there are also bond stretching, bond angle, and torsional forces between the regions.

Fig. 13.1 A QM system (dashed box) embedded in a classical MM system. The atoms that are treated quantum mechanically are light grey, the classical atoms are dark grey. The arrowed dark atom is a capping H atom used to preserve the valence of the quantum fragment at the quantum–classical boundary. The nearby horizontal dotted line indicates a monovalent pseudopotential discussed in the text.

There are two ways to calculate the energy of the combined system. In the simplest subtractive scheme (Maseras and Morokuma, 1995), the energy of the whole system is calculated using an MM force field, the energy of the QM system is added, and finally the MM energy of the QM sub-system is subtracted:

(13.45)
$Display mathematics$

All the calculations are performed with a particular QM or MM method and there is no specific consideration of the interface. The interaction between the two regions is only included at the MM level. A more accurate, additive, approach is widely used in practice (Röthlisberger and Carloni, 2006). In this case the energy is composed of three terms

(13.46)
$Display mathematics$

where $E(QM)$ is the QM energy in the QM region, $E(MM)$ is the MM energy in the MM region, and the interaction energy between the regions is (Ippoliti et al., 2012)

(13.47)
$Display mathematics$

The index I labels nuclei (or cores) in the QM region, while $I′$ labels the MM atoms. The first two terms represent the interaction between the total charge density (due to electrons and cores) in the QM region and the classical charges in the MM region. The third term represents the dispersion interactions across the QMMM boundary, and the fourth term consists of all the covalent bond stretching potentials that cross this boundary. The final two terms account for the energy across the boundary due to covalent bond angle bending and torsional potentials; here it is understood that at least one of the atoms involved in the angles θ‎ and ϕ‎ is a QM atom, with the others being MM atoms.
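The two energy-combination schemes, together with the leading charge-density/point-charge term of eqn (13.47), can be sketched as follows (Gaussian units; the density is supplied as a signed charge density on a real-space grid; all function and variable names are our own illustrative choices):

```python
import numpy as np

def qmmm_subtractive(e_mm_whole, e_qm_region_qm, e_qm_region_mm):
    """Subtractive scheme, eqn (13.45):
    E = E_MM(whole) + E_QM(QM region) - E_MM(QM region).
    The QM/MM coupling enters only at the MM level."""
    return e_mm_whole + e_qm_region_qm - e_qm_region_mm

def qmmm_additive(e_qm, e_mm, e_coupling):
    """Additive scheme, eqn (13.46): the explicit coupling E(QM/MM)
    (electrostatics, dispersion, bonded terms across the boundary,
    eqn (13.47)) is supplied by the caller."""
    return e_qm + e_mm + e_coupling

def qmmm_electrostatics(rho, grid_points, mm_charges, mm_positions, dV):
    """First term of eqn (13.47): Coulomb interaction of the QM charge
    density (signed, on a real-space grid) with the MM point charges.
    A direct double loop, O(n_grid x n_MM) -- the cost that motivates
    the three-shell approximation described below."""
    e = 0.0
    for q, r_I in zip(mm_charges, mm_positions):
        r = np.linalg.norm(grid_points - r_I, axis=1)
        e += q * np.sum(rho / r) * dV
    return e
```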

The terms $E(MM)+E(QM/MM)$ are included in $EKS$ in the Lagrangian, eqn (13.36), and a plane-wave basis set is used to solve the CP equations of motion for the QM system. The equations of motion of the MM charges are solved using a standard MD technique including all the classical forces and the forces from the QM region calculated from the gradient of eqn (13.47).

For a system of $10^6$ grid points for the electron density in the QM region and $10^4$ MM atoms, the full evaluation of the charge–charge term in $E(QM/MM)$, eqn (13.47), remains prohibitively expensive. This problem can be mitigated by apportioning the MM charges into three concentric spheres around the QM region. In the innermost sphere, the classical MM charges interact with the QM density as described by eqn (13.47).

The MM charges in the next, intermediate, region interact with a set of constructed charges in the QM region. These constructed charges are associated with each of the QM nuclei and are calculated using the RESP approach, to mimic the electron density, $ρ(r)$, as described in Section 1.4.2. This dynamical-RESP approach allows the fluctuating charges on the QM nuclei to be calculated on-the-fly during the course of the simulation. It also includes a restraining term to prevent these charges from fluctuating too strongly away from the underlying values of the Hirshfeld charges (Laio et al., 2002b). (Note: the Hirshfeld charges are calculated by sharing the molecular charge density at each point between the atoms, in proportion to their free-atom densities at the distances from the nuclei (Hirshfeld, 1977).)

In the third, outermost region, the classical charges interact with the multipole moments of the quantum charge distribution (Laio et al., 2004). The calculation of the charge–charge interaction using different, successively more approximate, methods in the three shells enables a very significant reduction in computational cost without a significant loss of accuracy (Laio et al., 2004). Finally, the behaviour of the dynamical-RESP charges provides a useful method for monitoring the chemical state of the QM atoms (Laio et al., 2002b).
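The apportioning of MM charges into the three shells might be sketched like this (the radii are simulation parameters, not prescribed values; names are illustrative):

```python
import numpy as np

def shell_assignment(mm_positions, qm_centre, r1, r2, r3):
    """Apportion MM charges into the three concentric regions used to
    approximate the charge-charge term of E(QM/MM): full-density
    coupling (innermost), dynamical-RESP charges (intermediate), and
    multipole expansion (outermost). Requires r1 < r2 < r3."""
    d = np.linalg.norm(mm_positions - qm_centre, axis=1)
    return np.where(d < r1, 'density',
           np.where(d < r2, 'resp',
           np.where(d < r3, 'multipole', 'outside')))
```

Each shell then dispatches to a successively cheaper interaction routine, which is the source of the cost reduction reported by Laio et al. (2004).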

When the QM–MM boundary intersects a covalent bond, an artificial dangling bond is created in the QM system. This is known as the link-atom problem and it can be tackled by capping the unsaturated valence with an H or F atom (Field et al., 1990). (This capping atom, which belongs to the QM region, is indicated by an arrow in Fig. 13.1.) This is not a perfect solution, since new, unphysical atoms are introduced into the QM region and considerable care has to be taken to remove the interactions between these capping atoms and the real MM atoms; this is particularly important for any charge–charge interaction (Laio et al., 2004). A more sophisticated approach compensates for the dangling bond by the addition of a localized orbital centred on the terminal QM atom before the boundary (Assfeld and Rivail, 1996). These localized orbitals have to be determined by calculations on small fragments of the real system and they remain frozen throughout the simulation without responding to changes in the electronic structure of the QM system. The most flexible approach is to introduce a monovalent pseudopotential at the first MM atom beyond the boundary; this is along the dotted line in Fig. 13.1. This is constructed so that the electrons in the QM region are scattered correctly by the classical region (von Lilienfeld et al., 2005). Analytic, non-local pseudopotentials (Hartwigsen et al., 1998) have been employed to good effect in this approach. They do not introduce additional interactions or degrees of freedom into the QM systems and they are sufficiently flexible to change as the QM region evolves.

There are a number of important issues to consider when implementing the QMMM method. First, the method facilitates the simulation of larger systems, which include a QM core, but the maximum timestep that is applied to the whole system is controlled by the CP or BO dynamics. The short timestep required would normally limit the simulation to a few picoseconds, unless the classical dynamics is included using a multiple-timestep method as described by Nam (2014). Second, the method does not include the Pauli repulsion between the electrons in the QM region and the electrons associated with the classical atoms in the MM region. The result is that electrons from the QM region can be strongly and incorrectly associated with positively charged atoms close to the boundary in the MM region. This problem, known as electron spill-out, can be avoided by replacing the MM atoms and any corresponding point charges in the region close to the boundary by ionic pseudopotentials with screened electrostatic interactions (Laio et al., 2002a).

Finally, it is useful to be able to lift the restriction that all the atoms that start in the QM region remain in that region throughout the simulation. This is particularly important for liquid-like environments where solvent atoms (water) might diffuse around the active site of a protein. One possibility is to label atoms as either QM or MM at the start and to stick with these labels throughout. Unfortunately, in this case, solvent atoms that are explicitly involved in the reaction could be treated as MM atoms, while solvent atoms which have moved far away from the reaction could be treated as QM atoms. An improvement is to define the QM region geometrically, as (say) a sphere of radius $RQM$, and to change the nature of the atoms that cross the sphere from QM to MM and vice versa. This will lead to significant discontinuities in the energy as atoms change their type, which can create instabilities in the dynamics and drifts in the temperature (Bulo et al., 2009). This problem has been addressed by defining an additional transition region between the QM and MM regions. The outer boundary of the transition region is set to $RMM>RQM$. The N adaptive atoms in the transition region, at any point in time, can be either QM or MM atoms. For example, if $N=3$, there are $2^N=8$ partitions between the two types: {QM, QM, QM}, {QM, QM, MM} … {MM, MM, MM}. The potential energy and force from the adaptive atoms is given by a weighted sum over all the possible partitions, p:

(13.48)
$Display mathematics$

where wp is a weight and $Vp(R(N))$ is the potential energy of a particular partition. Each of the N atoms is given a λ‎-value which determines its degree of MM character:

(13.49)
$Display mathematics$

λ depends on the distance R of an atom from the centre of the QM region. In a particular partition, the set of QM atoms has λ values denoted by ${λ}_p^{QM}$, while the set of MM atoms is described by ${λ}_p^{MM}$. The weight given to a particular partition is

(13.50)
$Display mathematics$

A detailed rationalization of this particular choice of wp is provided by Bulo et al. (2009). This approach produces a smooth force that changes from pure QM to pure MM across the transition region. The computational overhead for the calculation of $Vad$ and its derivatives appears to scale as $2^N$. However, using eqn (13.50), any partition in which an MM atom is closer to the QM region than any of the QM atoms in the transition region has a weight of zero. There are only $N+1$ contributing partitions, and this makes the calculation of the adaptive potential efficient enough for many applications (Park et al., 2012).
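The zero-weight property can be demonstrated with a toy weight function. The form below is a simplified stand-in, not the exact expression of Bulo et al. (2009); it is chosen only because it shares the key property just described: any partition in which an MM atom has a smaller λ (i.e. lies closer to the QM centre) than some QM atom receives zero weight, leaving exactly $N+1$ contributing partitions out of $2^N$.

```python
from itertools import product

def partition_weights(lams):
    """Weights over all 2^N labellings of the N transition-region atoms.
    lams: one lambda per atom, 0 = fully QM ... 1 = fully MM.
    Toy weight (NOT the Bulo et al. form): max(0, min lambda_MM - max lambda_QM),
    which vanishes unless every MM atom lies further out than every QM atom."""
    weights = {}
    for labels in product(('QM', 'MM'), repeat=len(lams)):
        lam_qm = [l for l, t in zip(lams, labels) if t == 'QM']
        lam_mm = [l for l, t in zip(lams, labels) if t == 'MM']
        lo = min(lam_mm, default=1.0)   # innermost MM atom
        hi = max(lam_qm, default=0.0)   # outermost QM atom
        weights[labels] = max(0.0, lo - hi)
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}
```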

The QMMM method can be used in the CPMD program; see Table 13.1. The CPQM option in this code requires the definition of the topology and coordinates of the protein and solvent, the classical force field to be used (either AMBER or GROMOS), and the three radii required to define the charge–charge interaction in the calculation of $E(QM/MM)$.

# 13.4 Path-integral simulations

In this section, we consider fluids where the nuclei of the atoms are modelled quantum mechanically. In Section 2.9 we described the way in which first-order quantum corrections can be applied to classical simulations. The corrections to the thermodynamic properties arise from the Wigner–Kirkwood expansion of the phase-space distribution function (Green, 1951; Oppenheim and Ross, 1957) which may in turn be treated as an extra term in the Hamiltonian (Singer and Singer, 1984)

(13.51)
$Display mathematics$

Here $∇I$ is short for the gradient $∇RI$ with respect to the position $RI$ of the atom or, more precisely, the nucleus, and $PI$ is the momentum of the nucleus. Of course, as an alternative, the quantum potential can be obtained by integrating over the momenta

(13.52)
$Display mathematics$

This potential may be used in a conventional Monte Carlo simulation to generate quantum-corrected configurational properties. It is the treatment of this additional term by thermodynamic perturbation theory that gives rise to the quantum corrections mentioned in Section 2.9. Alternatively, a molecular dynamics simulation, based on the Hamiltonian of eqn (13.51), can be employed (Singer and Singer, 1984). Apart from measures to cope with numerical instabilities resulting from derivatives of the repulsive part of the potential, the technique is quite standard. In this section, we consider techniques that go beyond the expansion in $ℏ^2$ typified by eqn (13.51).

One of the most straightforward of these simulation techniques is that based on a discretization of the path-integral form of the density matrix (Feynman and Hibbs, 1965), because the method essentially reduces to performing a classical simulation. Since the early simulation work (Fosdick and Jordan, 1966; Jordan and Fosdick, 1968) and the work of Barker (1979), the technique has become popular, in part because the full implications of the quantum–classical isomorphism have become clear (Chandler and Wolynes, 1981; Schweizer et al., 1981). This type of approach is particularly useful when we wish to consider the nucleus as a quantum particle to capture phenomena such as quantum tunnelling and zero-point motion. Consider a single neon atom, which remains in its ground electronic state and whose position is described by $R1$. Starting with eqn (13.7) we can use the Trotter factorization (Tuckerman, 2010, Appendix C) to divide the exponential into P equal parts

(13.53)
$Display mathematics$

and inserting unity in the form

(13.54)
$Display mathematics$

between each pair of exponentials gives

(13.55)
$Display mathematics$

We seem to have complicated the problem; instead of one integral over diagonal elements of $ϱ$, we now have many integrals involving off-diagonal elements. However, each term involves, effectively, a higher temperature (or a weaker Hamiltonian) than the original. At sufficiently large values of P, the following approximation becomes applicable:

(13.56)
$Display mathematics$

where $Vcl(Ra)$ is the classical potential energy as a function of the configurational coordinates, and where the free-particle density matrix is known exactly. For a single molecule, of mass m, it is

(13.57)
$Display mathematics$

where $R_{ab}^2=|R_a−R_b|^2$. Now the expression for Q is

(13.58)
$Display mathematics$

These formulae are almost unchanged when we generalize to a many-molecule system. For N atoms,

(13.59)
$Display mathematics$

We must consider carefully what eqn (13.59) represents. Each vector $Ra(N)$ represents a complete set of 3N coordinates, defining a system like our N-atom quantum system of interest. The function $Vcl(Ra(N))$ is the potential-energy function for each one of these systems, calculated in the usual way. Imagine a total of P such systems, which are more or less superimposed on each other. Each atom in system a is quite close to (but not exactly on top of) the corresponding atom in systems $b,c,…$ etc. Each contributes a term $Vcl(Ra(N))$ to the Boltzmann factors in eqn (13.59), but the total is divided by P to obtain, in a sense, an averaged potential. The systems interact with each other through the first exponential term in the integrand of eqn (13.59). Each vector $Rab(N)$ ($R12(N)$, $R23(N)$, etc.) represents the complete set of N separations between corresponding atoms of the two systems a and b. Specifically the squared terms appearing in eqn (13.59) are

(13.60)
$Display mathematics$

where $Ria$ is the position of atom i in system a. These interactions are of a harmonic form, that is, the systems are coupled by springs.

There is an alternative and very fruitful way of picturing our system of NP atoms (Chandler and Wolynes, 1981). It can be regarded as a set of N molecules, each consisting of P atoms which are joined together by springs to form a classical ring polymer. This is illustrated in Fig. 13.2. We write the integral in eqn (13.59) in the form of a classical configurational integral

(13.61)
$Display mathematics$

where $R(NP)$ is the complete set of NP atomic coordinates, $Ria$ corresponding to atom a on molecule i, with the configurational energy consisting of two parts

(13.62)
$Display mathematics$

Fig. 13.2 Two ring-polymer ‘molecules’ ($P=5$) representing the interaction between two atoms in a path-integral simulation. The straight dashed lines are the intermolecular potential interactions, the wavy lines represent the intramolecular spring potentials.

The classical part is

(13.63)
$Display mathematics$

We have assumed pairwise additivity here and in Fig. 13.2 for simplicity, although this is not essential. The quantum part of the potential is

(13.64)
$Display mathematics$

where we take $a+1$ to equal 1 when $a=P$. Note how the interactions between molecules only involve correspondingly numbered atoms a (i.e. atom 1 on molecule i only sees atom 1 on molecules $j,k$, etc.), while the interactions within molecules just involve atoms with adjacent labels. The system is formally a set of polymer molecules, but an unusual one: the molecules cannot become entangled, because of the form of eqn (13.63), and the equilibrium atom–atom bond lengths in each molecule, according to eqn (13.64), are zero.
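A direct transcription of eqns (13.63) and (13.64), assuming a pairwise-additive classical potential (the array shapes and names are our own):

```python
import numpy as np

def ring_polymer_potential(R, beta, m=1.0, hbar=1.0, pair_potential=None):
    """Classical and quantum parts of the ring-polymer potential,
    eqns (13.63) and (13.64). R has shape (N, P, 3): atom i, bead a.
    Between different atoms, only matching bead indices interact."""
    N, P, _ = R.shape
    # quantum springs between adjacent beads of the same atom (periodic,
    # bead P+1 is bead 1): (m P / 2 beta^2 hbar^2) sum |R_a - R_{a+1}|^2
    d = R - np.roll(R, -1, axis=1)
    v_qu = m * P / (2.0 * beta**2 * hbar**2) * np.sum(d**2)
    # classical pair interactions, averaged over the P beads (the 1/P factor)
    v_cl = 0.0
    for a in range(P):
        for i in range(N):
            for j in range(i + 1, N):
                v_cl += pair_potential(np.linalg.norm(R[i, a] - R[j, a]))
    return v_cl / P, v_qu
```

Note how the spring constant grows with both P and temperature (through $β^{-2}$), which is the source of the sampling difficulties discussed later in this section.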

The term outside the integral of eqn (13.59) may be regarded as the kinetic contribution to the partition function, if the mass of the atoms in our system is chosen appropriately. Actually, this choice is not critical if the configurational averaging is the key problem to solve. Nonetheless it proves convenient (as we shall see shortly) to use an MD-based simulation method, and De Raedt et al. (1984) recommend making each atom of mass $M=Pm$. Then the kinetic energy of the system becomes

(13.65)
$Display mathematics$

and the integration over momenta, in the usual quasi-classical way, yields

(13.66)
$Display mathematics$

Apart from the indistinguishability factors, which may usually be ignored as far as the calculation of averages is concerned, this is the approximate quantum partition function eqn (13.59) for our N-particle system.

Thus a Monte Carlo simulation of the classical ring polymer system with potential energy $V$ given by eqn (13.62) may be used to generate averages in an ensemble whose configurational distribution function approximates that of a quantum system. Examples appear in Code 13.1.

Equally, an MD simulation with Hamiltonian

(13.67a)
$Display mathematics$

or indeed

(13.67b)
$Display mathematics$

will achieve the same result. We have explicitly written down two versions of the Hamiltonian to remind ourselves that we are completely free to choose the fictitious mass M corresponding to the momenta whereas the m appearing in $Vqu$ is not adjustable. The use of MD techniques to generate equilibrium states in this way is often referred to as path-integral molecular dynamics (PIMD).

These approaches can be readily extended to the simulation of an isolated quantum atom in a classical solvent bath, where the classical atoms behave like polymers contracted to a point. The method has also been used to study the behaviour of an excess electron in a classical fluid (Miller, 2008), the transfer of an electron between ions in water (Menzeleev et al., 2011), and to shed light on the long-standing problem of the anomalous mobility of the proton (Marx et al., 1999) and the hydroxyl ion (Tuckerman et al., 2002) in water. The extension of PIMD to molecular systems is possible and desirable when, in a case such as water, translational motion may be regarded as classical while rotation is quantum mechanical (Kuharski and Rossky, 1984). There are additional complications in the case of asymmetric tops (Noya et al., 2011). These simulations of water have been extended to consider models such as SPC/E (Berendsen et al., 1987), which comprise a Lennard-Jones site on the oxygen atom and partial charges on the hydrogen and oxygen atoms. Each polymer bead in the PIMD simulation is a replica of this scaffold, and a technique such as SHAKE is used to constrain the internal structure of these beads during the dynamics (Miller and Manolopoulos, 2005).

As the number of particles P in our ring polymer grows, we obtain a better approximation to the quantum partition function; these equations become formally exact as $P→∞$, going over to the Feynman path-integral representation (Feynman and Hibbs, 1965). How well do we expect to do at finite P? Some idea of what we may expect can be obtained by studying the quantum harmonic oscillator, for which exact solutions are available in the classical $P=1$ case, in the quantum-mechanical limit $P→∞$, and for all intermediate P (Schweizer et al., 1981). The computed average energy is plotted in Fig. 13.3. It can be seen that the finite-P energy curves deviate from the true result as the temperature decreases, leaving the zero-point level ($\frac{1}{2}ℏω$) and dropping to the classical value $⟨E⟩=0$ at $T=0$. The region of agreement may be extended to lower temperatures by increasing P.

Fig. 13.3 The average energy of the path-integral approximation to the quantum harmonic oscillator of frequency ω‎ as a function of temperature. We give the results for various values of P, the number of ‘atoms’ in the ring polymer. $P=1$ is the classical result, and $P→∞$ is the quantum mechanical limit.
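Because the discretized partition function of a harmonic oscillator is Gaussian, the finite-P behaviour shown in Fig. 13.3 can be reproduced numerically: the integral in eqn (13.58) reduces to a determinant over the P normal modes of the ring polymer. A sketch in reduced units ($m=ℏ=ω=k_B=1$; the constant $2π$ factors cancel between the kinetic prefactor and the Gaussian integral):

```python
import numpy as np

def ln_q(beta, P, m=1.0, hbar=1.0, omega=1.0):
    """ln of the P-bead partition function for a 1D harmonic oscillator.
    The Gaussian integral gives a product over the eigenvalues of the
    (spring + potential) quadratic form on the ring."""
    k = np.arange(P)
    ring = 4.0 * np.sin(np.pi * k / P) ** 2       # ring-Laplacian eigenvalues
    mode = m * P / (beta * hbar**2) * ring + beta * m * omega**2 / P
    return 0.5 * P * np.log(m * P / (beta * hbar**2)) - 0.5 * np.sum(np.log(mode))

def mean_energy(beta, P, db=1e-6):
    """<E> = -d ln Q / d beta, by central difference."""
    return -(ln_q(beta + db, P) - ln_q(beta - db, P)) / (2.0 * db)
```

For $P=1$ this recovers the classical result $⟨E⟩=k_BT$, while for large P it approaches the quantum value $\frac{1}{2}ℏω \coth(ℏω/2k_BT)$, with intermediate P interpolating between the two as in Fig. 13.3.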

Now to the practical matters. The classical polymer model is easily simulated by standard techniques, such as constant-NVT MC or MD. In principle, a better approximation to the quantum system will be obtained by making P as large as possible. A P-bead system is expected to be roughly P times more expensive than the classical one. Markland and Manolopoulos (2008b,a) have shown that if the interaction potentials can be separated into short- and long-ranged contributions, including electrostatics, the latter can be evaluated more economically by constructing a contracted ring polymer with fewer beads. Therefore, some of the expense associated with increasing P may be offset. There are, however, some (p.432) additional technical problems to be expected as P increases. According to eqn (13.64) the internal spring constant increases with P as well as with temperature, while the external forces felt by each individual atom decrease as P increases. In an MC simulation this might mean that separate attention would have to be given to moves which altered intramolecular distances and those involving the molecule as a whole. In some cases, a normal mode analysis of the polymer may help in choosing MC step sizes (Herman et al., 1982). More directly, one can abandon intramolecular Metropolis moves and build the polymer from scratch, by sampling from the internal, free-molecule, distribution (Jacucci and Omerti, 1983). In PIMD simulations there is the corresponding danger that the time-scale separation of the internal and external motions may become acute at high P and this will necessitate a shorter timestep. A sensible choice of the dynamic mass M helps to overcome this problem: with the choice of eqn (13.65) the stiffest internal modes of the polymer will be characterized by a frequency $kBT/ℏ$; alternative choices of M may be made so as to match the internal frequencies with the external time scales (De Raedt et al., 1984). 
Nonetheless, the danger of slow energy exchange between internal and external modes, leading to non-ergodic behaviour, is a real concern. One of a number of approaches to this problem (Berne, 1986) is to use the freedom we have in changing the individual masses of the polymer beads. A judicious choice of these masses will ensure that all wavelength modes of the ring polymer are optimally sampled in the dynamics. This can be achieved by linearly transforming the internal degrees of freedom of the ring polymer

(13.68)
$$\mathbf{u}_{ia}=\sum_{b=1}^{P}\big[\mathbf{U}_{i}\big]_{ab}\,\mathbf{R}_{ib}$$

into normal modes ${uia}$ so as to diagonalize the harmonic quantum potential

(13.69)
$$V_{\mathrm{qu}}=\sum_{i=1}^{N}\sum_{a=1}^{P}\tfrac{1}{2}\,\bar{m}_{ia}\,\omega_{P}^{2}\,u_{ia}^{2},\qquad \omega_{P}=\sqrt{P}/\beta\hbar.$$

$\mathbf{U}_{i}$ is a matrix of unit determinant describing the transformation for each polymer i, and $\{\bar{m}_{ia}\}$ are the transformed masses of the beads. The first normal mode $a=1$ corresponds to the translational motion of the centroid of the ring polymer, and does not contribute to $V_{\mathrm{qu}}$.

There are two common choices for $U$ used in path-integral simulations. The traditional normal-mode transformation uses a simple Fourier transform. The alternative ‘staging’ transformation breaks up the path-integral chain to achieve a separation of the long- and short-wavelength modes of the system. (Staging was originally developed as a method for improving sampling in path-integral Monte Carlo by creating a primary chain with the correct thermal wavelength, but composed of only a few particles, and then adding the secondary chains with adjacent particles in the primary chain as the end points (Sprik et al., 1985).) The precise form of $U$ for both these transformations is described in detail in Martyna et al. (1999, Appendix A).
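As a concrete check on the normal-mode choice of $U$ (our own sketch; the staging transformation is not shown), the orthonormal real-Fourier matrix below diagonalizes the ring-polymer spring matrix, the first column being the zero-frequency centroid mode:

```python
import math

def ring_laplacian(p):
    """Spring matrix L of the ring polymer: (L x)_a = 2 x_a - x_{a-1} - x_{a+1}."""
    lap = [[0.0] * p for _ in range(p)]
    for a in range(p):
        lap[a][a] += 2.0
        lap[a][(a - 1) % p] -= 1.0
        lap[a][(a + 1) % p] -= 1.0
    return lap

def normal_mode_matrix(p):
    """Orthonormal real-Fourier transformation for an even-P ring polymer:
    column 0 is the centroid, then cos/sin pairs, then the alternating mode."""
    u = [[0.0] * p for _ in range(p)]
    for a in range(p):
        u[a][0] = 1.0 / math.sqrt(p)
        for k in range(1, p // 2):
            u[a][2 * k - 1] = math.sqrt(2.0 / p) * math.cos(2.0 * math.pi * k * a / p)
            u[a][2 * k] = math.sqrt(2.0 / p) * math.sin(2.0 * math.pi * k * a / p)
        u[a][p - 1] = (-1.0) ** a / math.sqrt(p)
    return u

def congruence(u, mat):
    """Return U^T M U for square matrices stored as lists of rows."""
    p = len(mat)
    mu = [[sum(mat[a][b] * u[b][c] for b in range(p)) for c in range(p)]
          for a in range(p)]
    return [[sum(u[a][r] * mu[a][c] for a in range(p)) for c in range(p)]
            for r in range(p)]

p = 6
diag = congruence(normal_mode_matrix(p), ring_laplacian(p))
# eigenvalues 4 sin^2(pi k / P): centroid 0, then 1, 1, 3, 3, 4 for P = 6
eigenvalues = [diag[k][k] for k in range(p)]
```

Because each cosine and sine column is itself an eigenvector of the ring Laplacian, the transformed matrix is strictly diagonal, with doubly degenerate pairs at $4\sin^2(\pi k/P)$.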

Comparing with eqn (13.67b), the Hamiltonian for the transformed variables is

(13.70)
$Display mathematics$

(p.433) where, in the case of the normal mode transformation, the $\bar{m}_{ia}$ are proportional to the normal mode eigenvalues. There remains the freedom to choose a set of new masses, $\{\tilde{m}_{ia}\}$, associated with the fictitious momenta. For the normal mode transformation, all modes can be placed on the same time scale, with $\tilde{m}_{i1}=m_{i}=M/P$ and $\tilde{m}_{ia}\propto\bar{m}_{ia}$, for $a=2,\dots,P$. The dynamics associated with eqn (13.70) can then be performed to follow the development of the normal modes with time. The $u$ coordinates and associated forces can be efficiently calculated using FFTs. Similar relationships can be established for the ‘staging’ transformation (Tuckerman et al., 1993).

In addition to the transformations, the sampling of the phase space of the stiff ring polymers can be enhanced by coupling a Nosé–Hoover chain thermostat to each of the 3NP degrees of freedom of the system (Martyna et al., 1999). The optimal sampling of phase space is achieved for a thermostat ‘mass’ of $Q_{ia}=\beta\hbar^{2}/P$. Alternatively, a specially designed stochastic thermostat may be applied (Ceriotti et al., 2010). Finally, multiple-timestep methods of the type described in Section 3.5 can also be applied to improve the efficiency. A short timestep can be applied to the harmonic bonds while the classical force from the external degrees of freedom can be evaluated using a significantly longer step (Tuckerman et al., 1993).

There are some subtleties in the way a path-integral simulation is used to estimate ensemble averages. Averages that are only a function of the position variables, such as the spatial distribution of atoms, can be calculated using the diagonal elements of the density matrix. Averages that are only a function of the momentum operators, such as the momentum distribution, require both diagonal and off-diagonal elements of the density matrix, and in a path-integral representation this requires the evaluation of paths that are not cyclic. In the canonical ensemble, the partition function is accessible through the evaluation of cyclic paths, and properties that depend on both position and momentum can be evaluated through the normal thermodynamic relationships (Tuckerman, 2010). For example, the energy is obtained in the usual way by forming $-Q_{NVT}^{-1}\,(\partial Q_{NVT}/\partial\beta)$; however, the ‘quantum spring’ potential is temperature-dependent, and the result is

(13.71)
$$\langle E\rangle=\tfrac{3}{2}NP\,k_{B}T-\langle V_{\mathrm{qu}}\rangle+\langle V_{\mathrm{cl}}\rangle.$$

Note the sign of the $V_{\mathrm{qu}}$ term. This might cause some confusion in an MD simulation, where the total energy $V_{\mathrm{cl}}+V_{\mathrm{qu}}+K$ is the conserved variable (between stochastic collisions if these are applied). This quantity is not the correct estimator for the quantum energy. There is yet a further wrinkle. The ‘quantum kinetic energy’ part of eqn (13.71) is the difference between two quantities, $\tfrac{3}{2}NP\,k_{B}T$ and $\langle V_{\mathrm{qu}}\rangle$, both of which increase with P. This gives rise to loss of statistical precision: in fact the relative variance in this quantity increases linearly with P, making the estimate worse as the simulation becomes more accurate (Herman et al., 1982). The solution to this is to use the virial theorem for the harmonic potentials to replace eqn (13.71) by the following

(13.72)
$$\langle E\rangle=\tfrac{3}{2}N\,k_{B}T+\Big\langle\frac{1}{2P}\sum_{i=1}^{N}\sum_{a=1}^{P}\big(\mathbf{R}_{ia}-\bar{\mathbf{R}}_{i}\big)\cdot\frac{\partial V_{\mathrm{cl}}}{\partial\mathbf{R}_{ia}}\Big\rangle+\langle V_{\mathrm{cl}}\rangle.$$

Actually, it is not clear that this will give a significant improvement. Jacucci (1984) has pointed out that the statistical inefficiency (see Section 8.4.1) may be worse for the quantity (p.434) in eqn (13.72) than for that in eqn (13.71), due to the persistence of correlations, thus wiping out the advantage. Other configurational properties are estimated in a straightforward way. For example, the pressure may be obtained from the volume derivative of Q. The method is particularly well-suited to estimating the free-energy differences needed to calculate isotope effects (Ceriotti and Markland, 2013). The pair distribution function becomes essentially a site–site g(r) for atoms with the same atom label (Chandler and Wolynes, 1981). It is accumulated in the normal way. The ‘size’ of each quantum molecule is also of interest (De Raedt et al., 1984; Parrinello and Rahman, 1984). This may be measured by the radius of gyration $R_i$ of polymer i,

(13.73)
$$R_{i}^{2}=\frac{1}{P}\sum_{a=1}^{P}\big\langle|\mathbf{R}_{ia}-\bar{\mathbf{R}}_{i}|^{2}\big\rangle$$

where $\bar{\mathbf{R}}_{i}=(1/P)\sum_{a=1}^{P}\mathbf{R}_{ia}$ is the centre of mass of the polymer.
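These estimators can be exercised on a single quantum particle in a harmonic well. The sketch below is our own illustration (units $\hbar=m=k_B=1$, P even): ring-polymer configurations are drawn exactly from the Gaussian normal-mode distribution, and the primitive energy estimator of eqn (13.71) and the squared radius of gyration are accumulated.

```python
import math, random

def ring_estimators(beta, p, omega=1.0, nsamples=20000, seed=42):
    """Draw exact ring-polymer configurations for a 1D harmonic oscillator
    (hbar = m = kB = 1, P even) and accumulate the primitive energy estimator
    P/(2 beta) - (P/(2 beta^2)) sum_a (x_a - x_{a+1})^2 + (1/P) sum_a V(x_a)
    and the squared radius of gyration (1/P) sum_a (x_a - xbar)^2."""
    rng = random.Random(seed)
    # normal-mode stiffnesses, ordered: centroid, (cos, sin) pairs, alternating
    ks = [0] + [k for k in range(1, p // 2) for _ in (0, 1)] + [p // 2]
    stiff = [(p / beta) * 4.0 * math.sin(math.pi * k / p) ** 2
             + beta * omega ** 2 / p for k in ks]
    # orthonormal real-Fourier columns in the same order
    cols = [[1.0 / math.sqrt(p)] * p]
    for k in range(1, p // 2):
        cols.append([math.sqrt(2.0 / p) * math.cos(2 * math.pi * k * b / p)
                     for b in range(p)])
        cols.append([math.sqrt(2.0 / p) * math.sin(2 * math.pi * k * b / p)
                     for b in range(p)])
    cols.append([(-1.0) ** b / math.sqrt(p) for b in range(p)])
    e_sum = rg2_sum = 0.0
    for _ in range(nsamples):
        u = [rng.gauss(0.0, 1.0 / math.sqrt(a)) for a in stiff]
        x = [sum(cols[m][b] * u[m] for m in range(p)) for b in range(p)]
        spring = sum((x[b] - x[(b + 1) % p]) ** 2 for b in range(p))
        pot = sum(0.5 * omega ** 2 * xb ** 2 for xb in x) / p
        e_sum += p / (2.0 * beta) - (p / (2.0 * beta ** 2)) * spring + pot
        rg2_sum += sum(uk ** 2 for uk in u[1:]) / p   # non-centroid modes only
    return e_sum / nsamples, rg2_sum / nsamples
```

Exact Gaussian sampling stands in here for the MC or MD trajectory of a real simulation; the large cancellation between the kinetic and spring terms of the primitive estimator is already visible in the sample-to-sample scatter.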

Having described a set of developments that enable accurate PIMD to be performed, it is tempting to use the time evolution in these systems to measure the quantum correlations and spectra in the fluid phase, or perhaps to follow the motion of a proton or electron in a liquid. Unfortunately, Hamiltonians of the form eqn (13.67a) or eqn (13.67b) do not result in a dynamics with any physical meaning.

An indirect approach, based on the time–temperature analogy, has been advanced by Thirumalai and Berne (1983; 1984). This involves measuring the internal ‘spatial’ correlation functions of the polymer chain. More recently, Craig and Manolopoulos (2004; 2005a,b) have developed a ring polymer molecular dynamics (RPMD) method, in an attempt to study the dynamics of quantum systems (for a review see Habershon et al., 2013). In this method a system is prepared using a PIMD calculation with the full panoply of transformations, thermostatting and multiple timesteps to ensure a number of well-equilibrated starting configurations. All of this is then switched off and, following a suggestion by Parrinello and Rahman (1984), the fictitious mass associated with the momenta is set to m. The system exhibits a classical equilibrium density corresponding to a fluid with 3NP degrees of freedom at a temperature of PT. Although the trajectories developed by RPMD are not those of the quantum dynamical operator $\exp(\mathrm{i}Ht/\hbar)$, Craig and Manolopoulos (2004) show that ‘classical correlation’ functions of the system exhibit some properties of the full quantum time correlation function. For two operators $A$ and $B$, the Kubo-transformed real-time correlation function (Kubo, 1957) is

(13.74)
$$\tilde{C}_{AB}(t)=\frac{1}{\beta Z}\int_{0}^{\beta}\mathrm{d}\lambda\;\mathrm{Tr}\Big[\mathrm{e}^{-(\beta-\lambda)H}A\,\mathrm{e}^{-\lambda H}\,\mathrm{e}^{\mathrm{i}Ht/\hbar}B\,\mathrm{e}^{-\mathrm{i}Ht/\hbar}\Big].$$

For these correlation functions

(13.75)
$$\tilde{C}_{AB}(t)=\tilde{C}_{AB}^{*}(t),\qquad \tilde{C}_{AB}(t)=\tilde{C}_{BA}(-t).$$

The standard correlation functions of the RPMD obey these symmetries for all P. One can also show that the RPMD produces the exact result for $\tilde{C}_{AB}(t)$, at all times, for a harmonic potential, when $A$ and $B$ are linear operators. In the case of weakly anharmonic oscillators, RPMD agrees well, but not exactly, with the full quantum results. RPMD is exact (p.435) in the high-temperature limit and in the short-time limit. It evolves the system so that it preserves the exact quantum distribution of states. Also, a classical molecular dynamics result such as

(13.76)
$$D=\frac{1}{3}\int_{0}^{\infty}\mathrm{d}t\,\big\langle\mathbf{v}_{c}(t)\cdot\mathbf{v}_{c}(0)\big\rangle,$$

where $\mathbf{R}_{c}$ is the centroid and $\mathbf{v}_{c}(t)$ is the bead-averaged or centroid velocity of a particular ring polymer, is obeyed.

Significant indirect evidence has been gathered to assess the performance of RPMD in describing quantum dynamics. For example, calculations indicate that RPMD can also be used to probe the ‘deep-tunnelling’ regime at low temperatures where the barrier is not parabolic. Richardson and Althorpe (2009) have rationalized this observation by establishing a connection between RPMD and semi-classical instanton theory (an established tool for calculating tunnelling rates). In studying the dynamics of an excess electron in a fluid of helium atoms, Miller (2008) calculated the exact mean-squared displacements, $\langle|\mathbf{R}_{ia}(s)-\mathbf{R}_{ia}(0)|^{2}\rangle$, in imaginary time, s, for the polymer beads as a function of solvent density. The mean-squared displacement can also be constructed indirectly from the analytic continuation of the velocity autocorrelation function for the solvated electron calculated by RPMD. A comparison of these two approaches shows that the RPMD becomes more accurate with increasing solvent density, in the regime where the electron rattles in the solvent cage.

These results might encourage one to use RPMD to look at correlation functions and even to move beyond this formalism and the linear response regime to model processes far from equilibrium; for example, the dynamics of electron transfer between mixed-valence transition metal ions in water (Menzeleev et al., 2011) or the prediction of quantum reaction rates for bimolecular gas-phase reactions (Stecher and Althorpe, 2012).

However, great care is required with this approach; the RPMD description of the dynamics is not exact, and it is known to fail in a number of specific instances. RPMD methods overestimate the tunnelling of electrons in electron transfer reactions, particularly in the inverted regime. RPMD fails to describe the dynamics in a number of strongly coherent quantum systems (Craig and Manolopoulos, 2004). Witt et al. (2009) have used RPMD to calculate vibrational spectra of the diatomic and polyatomic molecules OH, H2O and CH4, using between 16 and 64 polymer beads on each atom and averages accumulated over 50 independent runs. The spectra are calculated from the Fourier transform of the quantum dipole autocorrelation function. In these simulations the intrinsic dynamics of the ring polymers can interfere with the physical frequencies of the molecules. Spurious artificial peaks can grow at the ring polymer frequencies and physical peaks can split due to resonant coupling. Similar artefacts are observed in the simulation of the far infrared spectrum of water (Habershon et al., 2008).

Progress in RPMD edges forward by comparison with more exact quantum simulation results, where possible, and by detailed comparison with experiment. The method is still at an early stage of development for widespread, general application. This situation will improve as efforts continue to develop a rigorous justification of RPMD starting from the quantum Liouville equation (Jang et al., 2014).

An important alternative approach to quantum time correlation has been the development and application of centroid molecular dynamics (CMD) (Voth, 1996), following the (p.436) path centroid ideas developed by Feynman and Hibbs (1965). The equations of motion of the centroid of the ring polymer are

(13.77)
$$\dot{\mathbf{R}}_{c}=\frac{\mathbf{p}_{c}}{m},\qquad\dot{\mathbf{p}}_{c}=-\frac{\partial V_{c}(\mathbf{R}_{c})}{\partial\mathbf{R}_{c}},$$

where $\mathbf{p}_{c}$ is the momentum of the centroid and $V_{c}$ is the centroid potential of mean force, corresponding to the density $\varrho_{c}(\mathbf{R}_{c})=\exp[-\beta V_{c}(\mathbf{R}_{c})]$ (Cao and Voth, 1994). The force on the centroid is

(13.78)
$Display mathematics$

where the gradient describes the total force on a bead, a, in one ring polymer from the corresponding bead a in the other ring polymers and the delta function extracts those configurations of the ring polymer where the centroid is at $Rc$.

The direct use of eqn (13.78) is problematic. At first glance, the constraint of including only configurations corresponding to the centroid at $\mathbf{R}_{c}$ means that a full PIMD has to be run to sample the whole configuration space and the constraint applied retrospectively. However, the centroid force can be evaluated at each step by switching to the normal mode description of the ring polymer. The first normal mode, with associated mass $\tilde{m}_{1}=m$, corresponds to the translation of the centroid. Solving the equations of motion of the other $P-1$ normal modes automatically imposes the constraint in eqn (13.78).

In an alternative formulation of the problem, ‘adiabatic’ CMD, the masses associated with non-centroid modes are decreased (Cao and Martyna, 1996):

(13.79)
$$\tilde{m}_{a}=\gamma^{2}\lambda_{a}\,m,\qquad a=2,\dots,P,$$

where $λa$ are the eigenvalues of the normal-mode transformation and $0<γ<1$ is the adiabaticity parameter. As γ‎ decreases, the timestep required to solve accurately the dynamics of the $P−1$ non-centroid modes can be reduced. A multiple-timestep approach can then be applied where the dynamics of the non-centroid modes are followed over a series of short timesteps, calculating the force on the centroid at each step. At each of the longer timesteps, the average force on the centroid is calculated from the preceding short timesteps and is used to update the position of the centroid (Pavese et al., 2000).

This ‘adiabatic’ CMD has been widely applied to calculate time correlation functions of liquids (Voth, 1996). There have been extensive comparisons between CMD and RPMD. Perez et al. (2009a) have shown that for para-H2 at 14 K, the Kubo-transformed velocity autocorrelation function from both methods is essentially the same. However in the calculation of vibrational and far infrared spectra in liquids (Witt et al., 2009), the unphysical resonances that appear in the RPMD spectra do not appear in CMD. In other cases, CMD exhibits broadening and shifting of peak positions as compared with experiment. The assumptions in both the CMD and RPMD methods have been analysed in detail (Jang et al., 2014).

The path-integral method described in this section is often termed the ‘primitive algorithm’; it uses the most crude approximation to the density matrix. Other improved approximations have been advocated (Schweizer et al., 1981) with the aim of reducing (p.437) the number of polymer units required and possibly improving the convergence. One important improvement (Takahashi and Imada, 1984) is

(13.80)
$Display mathematics$

where the second term in the exponential of eqn (13.80) arises from the double commutator $[[V,K],V]$ in the expansion of $exp(−βH)$ (Brualla et al., 2004). This density matrix is fourth order in the action and has been used in MC simulations of liquid He and Ne at low temperatures (Brualla et al., 2004). Zillich et al. (2010) have developed a new class of propagators for path-integral Monte Carlo that are fourth, sixth and eighth order in the action, and that do not require higher-order derivatives of the potential, by using an extrapolation of the primitive second-order propagator.

# 13.5 Quantum random walk simulations

The methods discussed in Section 13.4 are suitable for the simulation of liquid neon, liquid water and gas-phase infrared spectra of CH4, CH2+. These are systems where the quantum effects are significant but not dominant. However, where the behaviour is essentially quantum mechanical, as in liquid helium, we need to consider other techniques such as diffusion Monte Carlo, where the many-body Schrödinger equation is solved by generating a random walk in imaginary time. These simulations are normally used to calculate the ground-state wave function and the corresponding energy for systems of bosons and fermions. They are zero-temperature methods.

The adoption of an imaginary time evolution converts the Schrödinger equation into one of a diffusional kind.

(13.81)
$$\frac{\partial\Psi(r,s)}{\partial s}=\frac{\hbar^{2}}{2m}\nabla^{2}\Psi(r,s)-\big(V(r)-E_{T}\big)\Psi(r,s),$$

where $s=it/ℏ$, $V$ is the potential and $ET$ is an arbitrary zero of the energy which is useful in this problem. In this section, for notational simplicity, $Ψ(r,s)$ is the wave function for all the n particles in the fluid and $r$ rather than $r(n)$ is used to represent their 3n coordinates. The ‘diffusion coefficient’ is defined to be

(13.82)
$$D=\frac{\hbar^{2}}{2m}.$$

The simulation of this equation to solve the quantum many-body problem is a very old idea, possibly dating back to Fermi (Metropolis and Ulam, 1949), but it is the implementation of Anderson (1975; 1976) that interests us here.

If we interpret $\Psi(r,s)$ (note: not $|\Psi|^{2}$!) as a probability density, then eqn (13.81) is essentially the Smoluchowski equation for the configurational distribution (Doi and Edwards, 1988),

(13.83)
$$\frac{\partial\varrho(r,s)}{\partial s}=\nabla\cdot D\Big[\nabla\varrho(r,s)-\frac{\mathbf{f}\,\varrho(r,s)}{k_{B}T}\Big],$$

with the systematic force, $f$, set to 0 and $k_{B}T=1$. The diffusive part of the Smoluchowski equation can be simulated by using the Brownian dynamics algorithm of eqn (12.5). The (p.438) additional complication is that the $(V(r)-E_{T})\Psi$ term in eqn (13.81) acts as a birth and death process (or a chemical reaction) which changes the weighting (probability) of configurations with time. To incorporate this in a simulation means allowing the creation and destruction of whole systems of molecules. These copies of the systems are often referred to as walkers. Simulations of many individual systems are run in parallel with one another. Although this sounds cumbersome, in practice it is a feasible route to the properties of the quantum liquid. That such a simulation may yield a ground-state stationary solution of the Schrödinger equation may be seen by the following argument. Any time-dependent wave function can be expanded in a set of stationary states $\Psi_{n}(r)$, when the time evolution becomes

(13.84)
$$\Psi(r,s)=\sum_{n}c_{n}\,\Psi_{n}(r)\,\exp\big[-(E_{n}-E_{T})\,s\big],$$

and the $c_{n}$ are the initial condition coefficients. In the imaginary time formalism, the state with the lowest energy becomes the dominant term at long times. If we have chosen $E_{T}<E_{0}$, then the ground-state exponential decays the least rapidly with time, while if $E_{T}>E_{0}$, the ground-state function grows faster than any other. If we are lucky enough to choose $E_{T}=E_{0}$, then the function $\Psi(r,s)$ tends to $\Psi_{0}(r)$ at long times while the other state contributions decay away. For $\Psi(r,s)$ to be properly treated as a probability density, it must be everywhere positive (or negative) and this will be true for the ground state of a liquid of bosons.

The reaction part of the ‘reaction-diffusion’ equation is treated as usual, by integrating over a short timestep δ‎s. Formally

(13.85)
$$\Psi(r,s+\delta s)=\exp\big[-\big(V(r)-E_{T}\big)\,\delta s\big]\,\Psi(r,s).$$

This enters into the simplest quantum Monte Carlo algorithm as follows. Begin with a large number (100–1000) of replicas of the N-body system of interest. Then:

1. (a) Perform a Brownian dynamics step using eqn (12.5), with $f(t)=0$ and with D given by eqn (13.82), on each system. (Note that the temperature does not enter into the random walk algorithm since there are no systematic forces.)

2. (b) For each system, evaluate $V(r)$, compute $K=\exp[-(V(r)-E_{T})\,\delta s]$ and replace the system by K identical copies (clones) of itself.

Step (b), the cloning step, requires a little more explanation, since in general K will be a positive real number. The system is replaced by $\lfloor K\rfloor$ replicas of itself ($\lfloor K\rfloor$ is the largest integer not exceeding K, as given by the Fortran floor function) and a further copy is added with a probability $K-\lfloor K\rfloor$ using a random number ζ generated uniformly on the range (0, 1). If $K<1$, this is equivalent to deleting the current system from the simulation with probability $(1-K)$, and retaining it (a single copy) with probability K. A little thought reveals that the number of copies can always be expressed as $\lfloor K+\zeta\rfloor$. If new copies are generated, they evolve independently of each other thereafter.
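The rule is also unbiased: averaging over ζ gives $\lfloor K\rfloor+(K-\lfloor K\rfloor)=K$ copies on average, covering both the cloning ($K>1$) and deletion ($K<1$) cases. A quick numerical check (our sketch):

```python
import math, random

def mean_copies(k, ntrials=200000, seed=1):
    """Average number of copies produced by the floor(K + zeta) rule,
    with zeta uniform on (0, 1); the rule is unbiased, so the mean is K."""
    rng = random.Random(seed)
    return sum(math.floor(k + rng.random()) for _ in range(ntrials)) / ntrials
```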

The simple example of a particle in a one-dimensional quadratic potential is shown in Fig. 13.4, and in Code 13.2. The seven walkers are lined up at the start with an arbitrary distribution in the x-coordinate. As the simulation progresses, walkers that stray into (p.439) regions of high potential are likely to be annihilated, while those that sample the region of low potential (high probability) are likely to proceed to the finish line, and occasionally to be cloned. The distribution of walkers on the finishing line, in the limit of long simulation time and a large number of walkers, will be the Gaussian distribution corresponding to the ground-state wave function of the particle.
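A minimal sketch of this algorithm for the quadratic well of Fig. 13.4 (our Python illustration, not the book's Code 13.2; $\hbar=m=1$, $V(x)=x^2/2$, and the simple population-control rule for $E_T$ is an arbitrary choice):

```python
import math, random

def dmc_harmonic(nwalkers=500, nsteps=3000, ds=0.01, seed=7):
    """Bare-bones diffusion Monte Carlo for V(x) = x**2/2 (hbar = m = 1),
    for which D = 1/2 and the exact ground-state energy is 1/2.
    Walkers diffuse, then are cloned or deleted with weight
    exp[-(V - E_T) ds]; E_T is steered to hold the population steady."""
    rng = random.Random(seed)
    walkers = [rng.uniform(-1.0, 1.0) for _ in range(nwalkers)]
    e_t = 0.5                             # any reasonable starting guess
    sigma = math.sqrt(2.0 * 0.5 * ds)     # Brownian step length, D = 1/2
    e_acc = nmeas = 0
    for step in range(nsteps):
        new = []
        for x in walkers:
            x += rng.gauss(0.0, sigma)    # step (a): drift-free diffusion
            k = math.exp(-(0.5 * x * x - e_t) * ds)
            new.extend([x] * math.floor(k + rng.random()))   # step (b)
        walkers = new
        v_mean = sum(0.5 * x * x for x in walkers) / len(walkers)
        # steer E_T: energy estimate plus a weak population-control term
        e_t = v_mean + (1.0 - len(walkers) / nwalkers)
        if step >= nsteps // 2:           # measure after equilibration
            e_acc += v_mean
            nmeas += 1
    return e_acc / nmeas, len(walkers)
```

Here the mean potential energy over the walker population serves as the energy estimator: with no importance sampling the walkers are distributed as $\Psi_0$, and $\int\Psi_0 V/\int\Psi_0 = E_0$ by hermiticity, apart from timestep and population-control bias.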

Fig. 13.4 An impression of a quantum Monte Carlo simulation of a particle in a one-dimensional quadratic potential $V(x)$, after Foulkes et al. (2001). In the long-time limit, the distribution of walkers should approach the form of the ground-state wave function $Ψ0(x)$ (dashed).

This scheme is fairly crude. Clearly, depending on $E_{T}$, the number of systems still under consideration may grow or fall dramatically, and the value of this parameter is continually adjusted during the simulation to keep the current number approximately (p.440) constant (Anderson, 1975). Hopefully, in the course of a run conducted in this way, $E_{T}\to E_{0}$. The fluctuation in the number of systems is substantial and this makes the estimate of $E_{T}$ subject to a large statistical error. A number of ways around this difficulty have been proposed (Anderson, 1980; Mentch and Anderson, 1981) and we shall concentrate on one approach (Kalos et al., 1974; Reynolds et al., 1982) which uses importance sampling. Suppose we multiply $\Psi(r,s)$ by a specified trial wave function $\Psi_{T}(r)$ and use the result

(13.86)
$$\Upsilon(r,s)=\Psi(r,s)\,\Psi_{T}(r)$$

in the Schrödinger equation. Then we obtain

(13.87)
$$\frac{\partial\Upsilon(r,s)}{\partial s}=D\nabla^{2}\Upsilon(r,s)-D\nabla\cdot\big[\mathbf{F}_{\mathrm{qu}}(r)\,\Upsilon(r,s)\big]-\big(E_{T}(r)-E_{T}\big)\Upsilon(r,s),$$

where the local energy is defined by

(13.88)
$$E_{T}(r)=\Psi_{T}(r)^{-1}\,H\,\Psi_{T}(r)$$

and should not be confused with $E_{T}$. We have also defined the quantum force $\mathbf{F}_{\mathrm{qu}}$, which is derived from the pseudopotential $v_{\mathrm{PP}}(r_{ij})$ if $\Psi_{T}$ is given (as is common) by

(13.89)
$$\Psi_{T}(r)=\exp\Big[-\tfrac{1}{2}\sum_{i<j}v_{\mathrm{PP}}(r_{ij})\Big].$$

Eqn (13.87), often described as the ‘importance-sampled’ Schrödinger equation (Gillan and Towler, 2011), resembles the Smoluchowski equation, eqn (13.83), with the force term retained and with $k_{B}T=1$ throughout. It can be solved using eqn (12.5) with $f(t)$ replaced by $\mathbf{F}_{\mathrm{qu}}$. All the techniques described in this section are now applied to the function ϒ rather than to Ψ. The procedure for duplicating or deleting systems now depends on $(E_{T}(r)-E_{T})$, where $E_{T}(r)$ is evaluated for each system. This process is controlled more easily by a sensible choice of $\Psi_{T}(r)$ as discussed by Reynolds et al. (1982). The quantum force appears in these simulations just as the classical force appears in the smart MC method described in Chapter 9, or the Brownian dynamics of Chapter 12. This force guides the system in its search for low ‘energy’, that is, high $\Psi_{T}^{2}$. If $\Psi_{T}$ is a good approximation to the ground state $\Psi_{0}$, then the energy $E_{T}(r)$ tends to $E_{0}$ independently of $r$, and so is subject to little uncertainty. If $E_{T}$ is adjusted so as to maintain the steady-state population of systems, then this will also tend to $E_{0}$. As Reynolds et al. (1982) point out, the average $\langle E_{T}(r)\rangle$ obtained without any system creation/destruction attempts would correspond to a variational estimate based on $\Psi_{T}$. Identical variational estimates can also be calculated using the Monte Carlo method (McMillan, 1965; Schmidt and Kalos, 1984; Morales et al., 2014). In the random walk techniques, the MC simulation is replaced by Brownian dynamics. The inclusion of destruction and creation allows $\Psi(r)$ to differ from $\Psi_{T}$ and the simulation probes the improved $\Psi(r)$. Of course, making $\Psi_{T}$ more complicated, and hence more complete, adds to the computational expense.
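The role of the trial function can be seen in one dimension (our example, not from the text): for $V(x)=x^2/2$ and $\Psi_T=\exp(-ax^2/2)$, the local energy is $\Psi_T^{-1}H\Psi_T = a/2 + (1-a^2)x^2/2$, which is constant, with zero variance, when the trial function is exact ($a=1$):

```python
import random

def local_energy(x, a):
    """E_L(x) = Psi_T(x)^{-1} H Psi_T(x) for Psi_T = exp(-a x^2/2),
    H = -(1/2) d^2/dx^2 + x^2/2 (hbar = m = omega = 1)."""
    return 0.5 * a + 0.5 * (1.0 - a * a) * x * x

def local_energy_stats(a, nsamples=50000, seed=3):
    """Sample x from |Psi_T|^2, a Gaussian of variance 1/(2a), and return
    the mean and variance of E_L: the variational energy and its spread."""
    rng = random.Random(seed)
    es = [local_energy(rng.gauss(0.0, (2.0 * a) ** -0.5), a)
          for _ in range(nsamples)]
    mean = sum(es) / nsamples
    var = sum((e - mean) ** 2 for e in es) / nsamples
    return mean, var
```

For $a\neq 1$ the mean lies above $E_0=1/2$ (the variational principle) and the variance is finite; an accurate $\Psi_T$ therefore pays for itself in statistical precision.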

The particular problems of fermion systems are discussed by Reynolds et al. (1982). The essential point is that the ground-state fermion wave function must contain multidimensional nodal surfaces. Each region of configuration space bounded by these ‘hypersurfaces’ (p.441), within which $\Psi_{T}$ may be taken to have one sign throughout, may be treated separately by the random walk technique. The nodal positions themselves are essentially fixed by the predetermined form of $\Psi_{T}(r)$. This introduces a further variational element into the calculation. The fixed-node approximation, and alternative approaches for fermion systems, are described in detail by Reynolds et al. (1982).

The work of Booth et al. (2009) represents an important advance in the solution of the fermion problem. This approach switches the focus from the wave function in terms of the particle position, $\Psi(r)$, to the space of Slater determinants, $|D_{i}\rangle$. The structure of the Hartree–Fock Slater determinant is given in eqn (13.18). For a general problem of N electrons chosen from 2M spin-orbitals there are approximately $\binom{M}{N/2}^{2}$ such determinants, a number which grows factorially with both M and N. The full configuration interaction (FCI) wave function for the ground state can be represented as

(13.90)
$$|\Psi_{0}\rangle=\sum_{i}c_{i}\,|D_{i}\rangle,$$

where $c_{i}$ are the coefficients of the expansion with the appropriate sign. The beauty of this representation is that the structure of $|D_{i}\rangle$ ensures the overall antisymmetry of the fermionic wave function. The vector $\{\mathrm{sgn}(c_{i})\}$ is composed of a set of $\pm 1$’s and 0’s. Any useful trial wave function must have the same sign vector as the true wave function, and this vector cannot be determined without a full knowledge of $\Psi_{0}$; this is a restatement of the familiar sign problem in the Slater determinant space. We note that this space, consisting of many excitations from the ground state, can be very large, with i ranging from $10^{6}$ to $10^{14}$ for calculations on small molecular species. Two determinants in this space are said to be connected or coupled if $\langle D_{i}|H|D_{j}\rangle\neq 0$.
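As an illustration of the size of this space (our arithmetic, assuming N/2 electrons of each spin distributed over M spatial orbitals):

```python
import math

# N = 10 electrons in 2M = 100 spin-orbitals: choose N/2 of the M spatial
# orbitals for each spin channel, giving binom(M, N/2)**2 determinants.
m_orb, n_elec = 50, 10
n_det = math.comb(m_orb, n_elec // 2) ** 2
print(n_det)   # 2118760**2, about 4.5e12: inside the 1e6..1e14 range quoted
```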

Substituting eqn (13.90) into the imaginary-time Schrödinger equation gives a set of coupled equations for the coefficients

(13.91)
$$\frac{\mathrm{d}c_{i}}{\mathrm{d}s}=-\sum_{j}\big(K_{ij}-E_{T}\,\delta_{ij}\big)\,c_{j},$$

where $E_{T}$ is the familiar shift energy used to control the population size in any stochastic solution of the equation and

(13.92)
$$K_{ij}=\langle D_{i}|H|D_{j}\rangle-E_{\mathrm{HF}}\,\delta_{ij},$$

where $E_{\mathrm{HF}}$ is the Hartree–Fock energy of the uncorrelated problem. In the long-time limit $E_{T}\to E_{\mathrm{corr}}$, where $E_{\mathrm{corr}}$ is the correlation energy, and in the steady state, the coefficients $\{c_{i}\}$ represent the ground state, which is the exact wave function for the basis set under consideration.

Eqn (13.91) can be solved by the type of diffusion Monte Carlo algorithm already described in this section. The full configuration interaction quantum Monte Carlo (FCIQMC) method considers only the birth and death of the community of walkers; there are no diffusive moves in this approach. There are three processes at each step:

1. (a) For each walker α on $|D_{i}\rangle$, a coupled determinant $|D_{j}\rangle$ is chosen with a fixed probability, and an attempt is made to spawn one or more new walkers at j with a probability (p.442) proportional to $|K_{i_{\alpha}j}|\,\delta s$, where $\delta s$ is the imaginary timestep in the simulation. The spawned walker has the same sign as its parent if $K_{i_{\alpha}j}<0$ or the opposite sign to the parent otherwise (Booth et al., 2009).

2. (b) A particular walker α at $|D_{i}\rangle$ is cloned or dies at i with a probability given by $(K_{i_{\alpha}i_{\alpha}}-E_{T})\,\delta s$. Walkers that die disappear from the space immediately.

3. (c) Finally, at each determinant, i, the signs of all of the spawned, cloned and surviving walkers are considered, and pairs of the opposite sign are annihilated, reducing the number of walkers by two in each case.

In the steady state each walker, α, on a particular determinant $i_{\alpha}$ will have a sign $s_{\alpha}=\pm 1$. The resulting coefficient or amplitude is

(13.93)
$$c_{i}=\frac{N_{i}}{N_{w}},\qquad N_{i}=\sum_{\alpha:\,i_{\alpha}=i}s_{\alpha},$$

where $N_{w}=\sum_{i}|N_{i}|$. Values of $\delta s=10^{-4}$ to $10^{-3}$ a.u. have been used in the simulations and the important details of the techniques can be found in the original literature (Booth et al., 2009).
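The three steps can be exercised on a toy two-determinant problem (our sketch; the K matrix is invented for illustration, and storing only net signed populations makes the annihilation step implicit):

```python
import math, random

def toy_fciqmc(k, nsteps=6000, ds=0.02, ntarget=200, seed=11):
    """Toy FCIQMC in a two-determinant space.  k[i][j] is the matrix
    K_ij = <Di|H|Dj> - E_HF delta_ij.  Signed walker populations undergo
    (a) spawning, (b) death/cloning and (c) annihilation (implicit, since
    only net signed populations are stored).  Returns the projected
    correlation energy K_01 * N_1 / N_0, averaged over the second half."""
    rng = random.Random(seed)
    pop = [ntarget, 0]                    # signed populations on |D0>, |D1>
    shift, proj, nmeas = 0.0, 0.0, 0
    for step in range(nsteps):
        change = [0, 0]
        for i in (0, 1):
            j = 1 - i
            sgn = 1 if pop[i] >= 0 else -1
            for _ in range(abs(pop[i])):
                # (a) spawn onto the coupled determinant with prob |K_ij| ds
                if rng.random() < abs(k[i][j]) * ds:
                    change[j] += sgn if k[i][j] < 0 else -sgn
                # (b) death (p > 0) or cloning (p < 0)
                p = (k[i][i] - shift) * ds
                if p > 0 and rng.random() < p:
                    change[i] -= sgn
                elif p < 0 and rng.random() < -p:
                    change[i] += sgn
        pop = [pop[0] + change[0], pop[1] + change[1]]
        ntot = abs(pop[0]) + abs(pop[1])
        if step % 10 == 0 and ntot > 0:
            shift -= 0.1 * math.log(ntot / ntarget)   # population control
        if step >= nsteps // 2 and pop[0] != 0:
            proj += k[0][1] * pop[1] / pop[0]
            nmeas += 1
    return proj / nmeas
```

Because the off-diagonal coupling chosen below is negative there is no sign problem in this toy model, and the projected energy converges to the lowest eigenvalue of K, which can be checked against exact diagonalization of the 2x2 matrix.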

The method has been successfully used to calculate the ground-state energy of species such as C2, H2O, N2, O2, and NaH (Booth et al., 2009). The results are as accurate as the FCI quantum calculations which require the diagonalization in the full space of Slater determinants. The FCIQMC method requires less storage and scales more efficiently than the brute-force diagonalization. Steps such as the spawning and the death of walkers only require information at a particular determinant so that the algorithm is eminently parallelizable (Gillan and Towler, 2011). The method has also been used to model the homogeneous electron gas (Shepherd et al., 2012) to obtain a finite-basis energy, which is significantly and variationally lower than any previously published work. The FCIQMC method has been extended to complex wave functions and used to study rare gas, ionic, and covalent solids (Booth et al., 2013). In FCIQMC we have a direct approach to solving the many-body fermion problem and a confluence of stochastic simulation methods and traditional quantum chemistry. It remains to be seen if these methods can be applied more widely to disordered systems and systems with very strong quantum correlations. At the very least, FCIQMC will provide detailed information on the force fields and exchange energies of classical and ab initio MD.

# 13.6 Over our horizon

We conclude our short introduction to quantum simulations by pointing to some of the many important techniques that lie beyond the scope of this book.

In considering ground-state or zero-temperature methods, Section 13.5, we have concentrated on the diffusion Monte Carlo algorithm. The simplest and most widely used approach to the problem is the variational Monte Carlo method. The total energy and its variance are calculated through a Monte Carlo evaluation of their expectation values using a trial wave function. The trial wave function is adjusted so that the energy and the variance are minimized at the ground state (Ceperley et al., 1977; Morales et al., 2014). There is a whole class of projector methods, such as Green’s function Monte Carlo, that attempt (p.443) to extract the ground-state wave function for many-body problems using stochastic algorithms (Kalos and Whitlock, 2008). The Schrödinger equation in imaginary time has an associated Green’s function, G(s), which drives the wave function forward in ‘time’ towards its ground state. Eqn (13.85) can be written as

(13.94)
$$\Psi(\mathbf{R}, s + \delta s) = \int \mathrm{d}\mathbf{R}'\, G(\mathbf{R}, \mathbf{R}'; \delta s)\, \Psi(\mathbf{R}', s).$$
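The projection property expressed by eqn (13.94) can be demonstrated deterministically on a grid, before any stochastic sampling is introduced: repeated application of the short-time propagator $\mathrm{e}^{-\delta s\,\mathcal{H}}$ (split here between kinetic and potential parts) filters out the excited-state components of an arbitrary trial function. The sketch below does this for a harmonic oscillator with $\hbar = m = \omega = 1$; the grid sizes and time step are illustrative choices.

```python
import numpy as np

# Imaginary-time propagation on a grid: repeatedly applying the short-time
# Green's function G(delta_s) = exp(-delta_s H) filters out excited states and
# drives any trial function towards the ground state (hbar = m = omega = 1).
# Grid-based illustration of the projection idea, not a stochastic GFMC code.

n, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
v = 0.5 * x**2                     # harmonic potential, exact E0 = 0.5

ds = 0.01
expV = np.exp(-0.5 * ds * v)       # half-step in the potential
expT = np.exp(-0.5 * ds * k**2)    # full kinetic step in Fourier space

psi = np.ones(n)                   # crude trial function
for _ in range(2000):              # total imaginary time s = 20
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi)).real
    psi /= np.sqrt(np.sum(psi**2) * (L / n))   # renormalize each step

# energy expectation value of the converged state
dpsi = np.fft.ifft(1j * k * np.fft.fft(psi)).real
e0 = np.sum(0.5 * dpsi**2 + v * psi**2) * (L / n)
print(f"E0 = {e0:.6f}   (exact 0.5)")
```

After an imaginary time long compared with the inverse energy gap, the excited-state contamination is exponentially small and the energy settles at the ground-state value, up to the $O(\delta s^2)$ Trotter splitting error.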

The imaginary time Green’s function Monte Carlo method (Schmidt et al., 2005) uses a random walk technique with importance sampling to determine an accurate G(s) from an initial guess. One important development in this area has been to treat segments of the Langevin trajectories produced in stochastic simulations as ‘polymers’. Well-known Monte Carlo polymer algorithms, such as reptation (see Section 4.8.2), are used to develop these trajectories to produce accurate expectation values of local observables, and their static and dynamic response functions in imaginary time (Baroni and Moroni, 1999).
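The variational Monte Carlo approach described at the start of this section is easily illustrated for a single particle in a harmonic well. The sketch below samples $|\Psi_T|^2$ with Metropolis moves and averages the local energy; the trial function, step size, run length, and seed are illustrative choices, not taken from this book's code suite.

```python
import numpy as np

# Variational Monte Carlo for a particle in a harmonic well (hbar = m = omega = 1).
# Trial wave function psi_alpha(x) = exp(-alpha x^2), for which the local energy
# is E_L(x) = alpha + x^2 (1/2 - 2 alpha^2). All parameters are illustrative.

def local_energy(x, alpha):
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def vmc(alpha, n_steps=20000, step=1.0, seed=42):
    """Metropolis sampling of |psi_alpha|^2; returns (<E_L>, var(E_L))."""
    rng = np.random.default_rng(seed)
    x, energies = 0.0, []
    for i in range(n_steps):
        x_new = x + step * rng.uniform(-1.0, 1.0)
        # accept with min(1, |psi(x_new)|^2 / |psi(x)|^2)
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        if i >= n_steps // 5:            # discard equilibration
            energies.append(local_energy(x, alpha))
    e = np.array(energies)
    return e.mean(), e.var()

for alpha in (0.4, 0.5, 0.6):
    e_mean, e_var = vmc(alpha)
    print(f"alpha = {alpha:.1f}: <E> = {e_mean:.4f}, var(E) = {e_var:.5f}")
```

At $\alpha = 0.5$ the trial function is the exact ground state, so the local energy is constant and the variance vanishes: the zero-variance property that makes the variance itself a useful objective function for optimization.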

At non-zero temperatures, the path-integral methods, described for semi-classical systems in Section 13.4, can be readily extended to more strongly quantum systems by increasing the number of beads in the ring polymer and by using more accurate forms of the density matrix (Morales et al., 2014). Indeed, path-integral approaches can be applied to electrons and other fermions by restricting paths to those that are node-avoiding, as exemplified by recent simulations of the electron gas (Brown et al., 2013). Simulations of nuclei and electrons have been performed using the coupled electron–ion Monte Carlo method. In this method, classical or path-integral Monte Carlo simulations of the nuclei are conducted with the BO energy of the ground-state electrons determined by a separate variational or diffusion Monte Carlo simulation (Pierleoni and Ceperley, 2006).
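The effect of increasing the number of beads can be seen without any sampling at all by composing primitive high-temperature density matrices on a grid, a deterministic analogue of the matrix-squaring idea. The sketch below (grid and parameters are illustrative) shows the partition function of a harmonic oscillator converging to the exact result as P grows.

```python
import numpy as np

# Convergence of the discretized path integral with the number of beads P.
# The density matrix at inverse temperature beta is approximated by a product
# of P primitive high-temperature density matrices, composed here
# deterministically on a grid (hbar = m = omega = 1). Exact result for the
# harmonic oscillator: Z = 1 / (2 sinh(beta/2)).

def partition_function(beta, P, n=400, L=16.0):
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    tau = beta / P
    v = 0.5 * x**2
    # primitive high-temperature density matrix rho_1(x, x'; tau)
    rho1 = (np.sqrt(1.0 / (2.0 * np.pi * tau))
            * np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * tau))
            * np.exp(-0.5 * tau * (v[:, None] + v[None, :])))
    # compose P factors; each intermediate integral carries a factor dx
    rho = np.linalg.matrix_power(rho1 * dx, P) / dx
    return np.trace(rho) * dx          # Z = Tr rho

beta = 2.0
z_exact = 1.0 / (2.0 * np.sinh(beta / 2.0))
for P in (1, 4, 16, 64):
    z = partition_function(beta, P)
    print(f"P = {P:3d}: Z = {z:.6f}  (exact {z_exact:.6f})")
```

The error of the primitive factorization falls off as $O((\beta/P)^2)$, which is why a modest number of beads suffices for semi-classical systems while strongly quantum systems need many more.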

So far in this chapter, we have focused on ground-state simulations. It is possible to use the AIMD methods described in Section 13.4 to tackle problems involving many-electron excited states. Ehrenfest dynamics, eqn (13.13), produces strong adiabatic coupling which mixes the excited states into the dynamics (Tully, 1998), and it is therefore unsuitable for following trajectories that split onto distinct potential surfaces. This problem can be addressed using surface-hopping molecular dynamics (Tully, 1990). Typically, the equations of motion for the atoms are solved on the energy surface of a particular electronic state k,

$$V_{kk}(\mathbf{R}) = \langle \Psi_k | \mathcal{H}_{\mathrm{el}} | \Psi_k \rangle,$$

where the integration is over the electronic degrees of freedom at fixed nuclear position. The density matrix elements, describing the electronic degrees of freedom, can be integrated along this surface

(13.95)
$$\mathrm{i}\hbar\, \dot{\varrho}_{k\ell} = \sum_{j} \Big[ \varrho_{j\ell} \big( V_{kj} - \mathrm{i}\hbar\, \dot{\mathbf{R}} \cdot \mathbf{d}_{kj} \big) - \varrho_{kj} \big( V_{j\ell} - \mathrm{i}\hbar\, \dot{\mathbf{R}} \cdot \mathbf{d}_{j\ell} \big) \Big]$$

with

(13.96)
$$b_{k\ell} = \frac{2}{\hbar}\, \mathrm{Im}\big( \varrho_{k\ell}^{*} V_{k\ell} \big) - 2\, \mathrm{Re}\big( \varrho_{k\ell}^{*}\, \dot{\mathbf{R}} \cdot \mathbf{d}_{k\ell} \big)$$

where $V_{k\ell}$ is the matrix element of the electronic Hamiltonian and $\mathbf{d}_{k\ell} = \langle \Psi_k | \nabla_{\mathbf{R}} \Psi_\ell \rangle$ is the non-adiabatic coupling vector (Tully, 1990). At a particular integration step, the switching probabilities, $g_{kj}$, from state k to all other states j, are computed in terms of the density matrix element $\varrho_{kk}$ and the matrix $b_{k\ell}$

(13.97)
$$g_{kj} = \frac{\delta t\; b_{jk}}{\varrho_{kk}}.$$

The switch to a state j is made with probability $g_{kj}$. If the switch is successful, the velocities of the particles are adjusted and the motion proceeds on the potential surface $V_{jj}$. This technique is often described as the 'fewest switches' algorithm, since it makes the minimum number of changes of state required to maintain the correct statistical distribution among the possible states. For example, the technique has been used successfully to study tunnelling and electronic excitation (Müller and Stock, 1997).
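A minimal sketch of the hop decision in eqns (13.96)–(13.97) for a two-state system is given below. The matrices, time step, and random seed are illustrative values, negative probabilities are clamped to zero in the usual fewest-switches fashion, and the velocity rescaling that follows a successful hop is omitted.

```python
import numpy as np

# Fewest-switches hop decision for a small number of electronic states,
# following eqns (13.96)-(13.97). rho is the electronic density matrix,
# V the electronic Hamiltonian matrix elements, d_dot_v the scalars
# R_dot . d_kl; hbar = 1. Illustrative sketch; velocity rescaling omitted.

def hop_probabilities(k, rho, V, d_dot_v, dt):
    """Return g[k -> j] for all j, eqn (13.97), clamped to [0, 1]."""
    n = rho.shape[0]
    g = np.zeros(n)
    for j in range(n):
        if j == k:
            continue
        # b_jk = 2/hbar Im(rho_jk^* V_jk) - 2 Re(rho_jk^* R_dot.d_jk), eqn (13.96)
        b_jk = (2.0 * np.imag(np.conj(rho[j, k]) * V[j, k])
                - 2.0 * np.real(np.conj(rho[j, k]) * d_dot_v[j, k]))
        g[j] = max(0.0, dt * b_jk / np.real(rho[k, k]))
    return np.minimum(g, 1.0)

def attempt_hop(k, rho, V, d_dot_v, dt, rng):
    """Stochastically select the new state (fewest-switches criterion)."""
    g = hop_probabilities(k, rho, V, d_dot_v, dt)
    xi, cum = rng.random(), 0.0
    for j in range(len(g)):
        cum += g[j]
        if xi < cum:
            return j       # hop to state j
    return k               # no hop

rng = np.random.default_rng(1)
rho = np.array([[0.9, 0.1 + 0.2j], [0.1 - 0.2j, 0.1]])
V = np.array([[0.0, 0.02], [0.02, 0.1]])
d_dot_v = np.array([[0.0, 0.05], [-0.05, 0.0]])
print(hop_probabilities(0, rho, V, d_dot_v, dt=0.5))
```

In a full simulation this decision is taken every time step along the trajectory, with $\varrho$ integrated via eqn (13.95) between nuclear updates.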

More generally, the DFT method described in Section 13.4 can be extended to excited states using time-dependent density functional theory (TDDFT), or Runge–Gross theory (Marques and Gross, 2004; Ullrich, 2011). In the dynamical case, the quantum mechanical action

(13.98)
$$S[\rho] = \int_{t_0}^{t_1} \mathrm{d}t\; \Big\langle \Phi(t) \Big|\, \mathrm{i}\hbar \frac{\partial}{\partial t} - \mathcal{H}(t) \,\Big| \Phi(t) \Big\rangle$$

has to be stationary to produce the time-dependent Schrödinger equation, and $S[\rho]$ is stationary at the exact time-dependent density. This stationarity gives rise to the TDDFT equations of motion, which, in a similar spirit to equilibrium density functional theory, require an approximation for the time-dependent exchange–correlation potential. It is possible to propagate the KS orbitals directly in real time using a number of mean-field approaches (Watanabe and Tsukada, 2002; Marx and Hutter, 2012), and excursions to excited electronic states can be achieved by imposing an external perturbation such as a strong laser field. Alternatively, a version of TDDFT can be combined directly with surface hopping to explore excited states (Craig et al., 2005).
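The essential ingredient of such real-time schemes, unitary propagation of an orbital under a one-particle Hamiltonian, can be sketched for a single orbital in one dimension. This is a generic split-operator integrator, not any particular TDDFT implementation; the displaced Gaussian and grid parameters are illustrative.

```python
import numpy as np

# Real-time propagation of a single orbital (the building block of mean-field
# and TDDFT schemes): split-operator integration of i d(psi)/dt = (T + V) psi
# on a grid, hbar = m = 1. A displaced Gaussian in a harmonic well is a
# coherent state, so <x>(t) = x0 cos(t); the norm is conserved exactly
# because every sub-step is unitary.

n, L = 512, 30.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
v = 0.5 * x**2
dt, x0 = 0.002, 1.0

psi = (1.0 / np.pi) ** 0.25 * np.exp(-0.5 * (x - x0) ** 2)  # displaced ground state
expV = np.exp(-0.5j * dt * v)      # half-step in the potential
expT = np.exp(-0.5j * dt * k**2)   # full kinetic step in Fourier space

def mean_x(psi):
    return np.real(np.sum(np.conj(psi) * x * psi) * (L / n))

n_steps = int(round(np.pi / dt))   # propagate to t = pi
for _ in range(n_steps):
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))

norm = np.real(np.sum(np.abs(psi) ** 2) * (L / n))
print(f"norm = {norm:.10f}, <x>(pi) = {mean_x(psi):.6f}")
```

In a TDDFT calculation the potential would itself depend on the instantaneous density, so it must be updated self-consistently between steps; the unitary structure of the propagation is unchanged.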

Finally, we briefly mention methods that move us towards the real quantum dynamics of systems. One approach starts from the von Neumann equation for the time-dependent density matrix

$$\mathrm{i}\hbar \frac{\partial \hat{\varrho}}{\partial t} = \big[ \hat{\mathcal{H}}, \hat{\varrho}\, \big]$$

where $[A,B]$ is the normal commutator. The density matrix can be transformed to the Wigner distribution function (Singer and Smith, 1990), which is the solution to the Wigner–Liouville equation (the quantum mechanical equivalent of the Liouville equation for movement in phase space). In this representation it may be possible to split the system into a small quantum sub-system surrounded by a bath of particles. The mass of the bath particles is increased so that they can be treated classically (Kapral and Ciccotti, 1999). Propagation of the Wigner–Liouville equation for this type of mixed quantum–classical system is a useful route to the accurate calculation of the time correlation functions of the quantum sub-system (Egorov et al., 1999; Shi and Geva, 2004).
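For a one-dimensional wave function, the Wigner transform can be evaluated directly by quadrature; the following sketch (illustrative grid parameters, $\hbar = 1$) shows the everywhere-positive Gaussian distribution of the harmonic-oscillator ground state, whereas excited states develop negative regions, the signature of quantum interference.

```python
import numpy as np

# Wigner transform of a 1D wave function, evaluated on a grid (hbar = 1):
# W(x, p) = (1/pi) Int dy psi*(x + y) psi(x - y) exp(2 i p y).
# For the harmonic-oscillator ground state, W(x, p) = (1/pi) exp(-x^2 - p^2).

y = np.linspace(-10.0, 10.0, 256, endpoint=False)
dy = y[1] - y[0]

def psi0(z):
    # ground state of the harmonic oscillator, m = omega = hbar = 1
    return (1.0 / np.pi) ** 0.25 * np.exp(-0.5 * z**2)

def wigner(psi_func, xx, pp):
    vals = np.conj(psi_func(xx + y)) * psi_func(xx - y) * np.exp(2j * pp * y)
    return np.real(np.sum(vals) * dy) / np.pi

print(f"W(0, 0) = {wigner(psi0, 0.0, 0.0):.6f}   (exact 1/pi = {1.0/np.pi:.6f})")
```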

There have been important developments in the application of path-integral methods to study dynamics of strongly quantum systems. Hernandez et al. (1995) have developed a Feynman path centroid density that offers an alternative perspective to the Wigner prescription for the evaluation of equilibrium and dynamical properties. Poulsen et al. (2004) have applied the Feynman–Kleinert linearized path-integral approximation to study quantum diffusion in liquid para-hydrogen. Coker and Bonella (2006) have also developed a new approach to calculating time correlation functions for mixed systems. The evolution of the highly quantum mechanical part of the system, the electronic sub-system, is mapped onto the evolution of a set of fictitious harmonic oscillators. At the same time the bath is treated using a path-integral representation of the nuclear part of the problem, which is approximated using a linearization of the path integrals. The linearization makes this approach tractable and it has been used to study the diffusion of an excess electron in a dilute metal–molten salt solution.