Particle-in-cell

In plasma physics, the particle-in-cell (PIC) method refers to a technique used to solve a certain class of partial differential equations. In this method, individual particles (or fluid elements) in a Lagrangian frame are tracked in continuous phase space, whereas moments of the distribution such as densities and currents are computed simultaneously on Eulerian (stationary) mesh points.

PIC methods were already in use as early as 1955,[1] even before the first Fortran compilers were available. The method was popularized for plasma simulation in the late 1950s and early 1960s by Buneman, Dawson, Hockney, Birdsall, Morse and others. In plasma physics applications, the method amounts to following the trajectories of charged particles in self-consistent electromagnetic (or electrostatic) fields computed on a fixed mesh.[2]

Technical aspects

For many types of problems, the classical PIC method invented by Buneman, Dawson, Hockney, Birdsall, Morse and others is relatively intuitive and straightforward to implement. This probably accounts for much of its success, particularly for plasma simulation, for which the method typically includes the following procedures: integration of the particle equations of motion, interpolation of the charge and current source terms to the field mesh, computation of the fields on the mesh points, and interpolation of the fields from the mesh to the particle locations.
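
A minimal, self-contained sketch of one such PIC cycle, written here in Python for a 1D electrostatic plasma on a periodic grid, may help fix ideas (normalized units; every parameter value below is an illustrative assumption, not a recommendation from the literature):

```python
import numpy as np

# Illustrative 1D electrostatic PIC loop in normalized units (electron plasma
# frequency = 1, fixed neutralizing ion background, periodic boundaries).
ng, L, npart, dt, nsteps = 64, 2 * np.pi, 10_000, 0.1, 200
dx = L / ng
qm = -1.0                              # electron charge-to-mass ratio
q = L / (qm * npart)                   # super-particle charge giving omega_pe = 1
rho_back = -q * npart / L              # neutralizing background charge density

rng = np.random.default_rng(0)
x = rng.uniform(0.0, L, npart)         # particle positions
v = 0.01 * np.sin(x)                   # small velocity perturbation

k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)   # wavenumbers for the spectral field solve

for step in range(nsteps):
    # 1) interpolate charge to the mesh (first-order / cloud-in-cell weighting)
    j = np.floor(x / dx).astype(int) % ng
    w = x / dx - np.floor(x / dx)
    rho = np.full(ng, rho_back)
    np.add.at(rho, j, q * (1.0 - w) / dx)
    np.add.at(rho, (j + 1) % ng, q * w / dx)

    # 2) solve for the field on the mesh: d^2 phi/dx^2 = -rho, E = -d phi/dx
    rho_hat = np.fft.fft(rho)
    E_hat = np.zeros_like(rho_hat)
    E_hat[k != 0] = -1j * rho_hat[k != 0] / k[k != 0]
    E_grid = np.real(np.fft.ifft(E_hat))

    # 3) interpolate the field from the mesh back to the particle positions
    E_part = (1.0 - w) * E_grid[j] + w * E_grid[(j + 1) % ng]

    # 4) push the particles (leapfrog) and apply periodic boundaries
    v += qm * E_part * dt
    x = (x + v * dt) % L
```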

Models which include interactions of particles only through the average fields are called PM (particle-mesh). Those which include direct binary interactions are PP (particle-particle). Models with both types of interactions are called PP-PM or P3M.

Since the early days, it has been recognized that the PIC method is susceptible to error from so-called discrete particle noise.[3] This error is statistical in nature, and today it remains less-well understood than for traditional fixed-grid methods, such as Eulerian or semi-Lagrangian schemes.

Modern geometric PIC algorithms are based on a very different theoretical framework. These algorithms use tools such as discrete manifolds, interpolating differential forms, and canonical or non-canonical symplectic integrators to guarantee gauge invariance and conservation of charge and energy-momentum, and, more importantly, the infinite-dimensional symplectic structure of the particle-field system.[4] [5] These desired features are attributed to the fact that geometric PIC algorithms are built on the more fundamental field-theoretical framework and are directly linked to the perfect form, i.e., the variational principle of physics.

Basics of the PIC plasma simulation technique

Inside the plasma research community, systems of different species (electrons, ions, neutrals, molecules, dust particles, etc.) are investigated. The set of equations associated with PIC codes are therefore the Lorentz force as the equation of motion, solved in the so-called pusher or particle mover of the code, and Maxwell's equations determining the electric and magnetic fields, calculated in the (field) solver.

Super-particles

The real systems studied are often extremely large in terms of the number of particles they contain. In order to make simulations efficient or at all possible, so-called super-particles are used. A super-particle (or macroparticle) is a computational particle that represents many real particles; it may stand for millions of electrons or ions in the case of a plasma simulation, or, for instance, a vortex element in a fluid simulation. Rescaling the number of particles in this way is permissible because the acceleration due to the Lorentz force depends only on the charge-to-mass ratio, so a super-particle follows the same trajectory as a real particle would.

The number of real particles corresponding to a super-particle must be chosen such that sufficient statistics can be collected on the particle motion. If there is a significant difference between the densities of different species in the system (between ions and neutrals, for instance), separate real-to-super-particle ratios can be used for them.
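
As a back-of-the-envelope illustration of choosing this ratio (all numbers below are arbitrary assumptions), one divides the number of physical particles represented by the simulated volume by the number of super-particles:

```python
# Hypothetical example: how many real electrons one super-particle represents
# in a 1D model that stands for a thin slab of plasma.
n_e = 1.0e18         # physical electron density [m^-3] (assumed)
area = 1.0e-4        # transverse area represented by the 1D model [m^2] (assumed)
length = 0.01        # length of the simulated domain [m] (assumed)
n_super = 1_000_000  # number of super-particles of this species (assumed)

real_per_super = n_e * area * length / n_super
print(real_per_super)   # -> 1e6 real electrons per super-particle
```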

The particle mover

Even with super-particles, the number of simulated particles is usually very large (> 10^5), and often the particle mover is the most time-consuming part of PIC, since it has to be done for each particle separately. Thus, the pusher is required to be of high accuracy and speed, and much effort is spent on optimizing the different schemes.

The schemes used for the particle mover can be split into two categories, implicit and explicit solvers. While implicit solvers (e.g. the implicit Euler scheme) calculate the particle velocity from the already updated fields, explicit solvers use only the old force from the previous time step, and are therefore simpler and faster, but require a smaller time step. In PIC simulation the leapfrog method, a second-order explicit method, is typically used.[6] The Boris algorithm is also widely used; it cancels the electric field out of the discretized Newton-Lorentz equation, so that the magnetic-field update reduces to a pure rotation of the velocity.[7] [8]

For plasma applications, the leapfrog method takes the following form:

\frac{\mathbf{x}_{k+1}-\mathbf{x}_{k}}{\Delta t}=\mathbf{v}_{k+1/2},

\frac{\mathbf{v}_{k+1/2}-\mathbf{v}_{k-1/2}}{\Delta t}=\frac{q}{m}\left(\mathbf{E}_{k}+\frac{\mathbf{v}_{k+1/2}+\mathbf{v}_{k-1/2}}{2}\times\mathbf{B}_{k}\right),

where the subscript k refers to "old" quantities from the previous time step, k+1 to updated quantities from the next time step (i.e. t_{k+1}=t_{k}+\Delta t), and velocities are calculated in between the usual time steps t_{k}.

The equations of the Boris scheme that are substituted into the above equations are:

\mathbf{x}_{k+1}=\mathbf{x}_{k}+\Delta t\,\mathbf{v}_{k+1/2},

\mathbf{v}_{k+1/2}=\mathbf{u}'+q'\mathbf{E}_{k},

with

\mathbf{u}'=\mathbf{u}+\left(\mathbf{u}+(\mathbf{u}\times\mathbf{h})\right)\times\mathbf{s},

\mathbf{u}=\mathbf{v}_{k-1/2}+q'\mathbf{E}_{k},

\mathbf{h}=q'\mathbf{B}_{k},

\mathbf{s}=2\mathbf{h}/(1+h^{2})

and

q'=\Delta t\cdot(q/2m).

Because of its excellent long-term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. It was realized that the excellent long-term accuracy of the nonrelativistic Boris algorithm is due to the fact that it conserves phase-space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas. It has also been shown[9] that one can improve on the relativistic Boris push so that it is both volume-preserving and has a constant-velocity solution in crossed E and B fields.
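
A minimal sketch of the Boris push defined by the equations above, written in Python/NumPy (the function name and interface are illustrative assumptions; the fields E and B are assumed to have already been interpolated to the particle position):

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """Advance one particle by one step with the nonrelativistic Boris scheme.
    x, v, E, B are 3-vectors (NumPy arrays); v is the half-step velocity v_{k-1/2}."""
    qp = dt * q / (2.0 * m)                           # q' = dt * q / (2m)
    u = v + qp * E                                    # first half electric acceleration
    h = qp * B
    s = 2.0 * h / (1.0 + np.dot(h, h))
    u_prime = u + np.cross(u + np.cross(u, h), s)     # rotation about the magnetic field
    v_new = u_prime + qp * E                          # second half electric acceleration
    x_new = x + dt * v_new                            # leapfrog position update
    return x_new, v_new
```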

The field solver

The most commonly used methods for solving Maxwell's equations (or more generally, partial differential equations (PDEs)) belong to one of the following three categories: finite difference methods (FDM), finite element methods (FEM), and spectral methods.

With the FDM, the continuous domain is replaced with a discrete grid of points, on which the electric and magnetic fields are calculated. Derivatives are then approximated with differences between neighboring grid-point values and thus PDEs are turned into algebraic equations.

Using FEM, the continuous domain is divided into a discrete mesh of elements. The PDEs are treated as an eigenvalue problem and initially a trial solution is calculated using basis functions that are localized in each element. The final solution is then obtained by optimization until the required accuracy is reached.

Spectral methods, such as those based on the fast Fourier transform (FFT), also transform the PDEs into an eigenvalue problem, but this time the basis functions are high order and defined globally over the whole domain. The domain itself is not discretized in this case; it remains continuous. Again, a trial solution is found by inserting the basis functions into the eigenvalue equation and then optimized to determine the best values of the initial trial parameters.
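
As a concrete example of a spectral field solve, the following sketch (assuming a periodic 1D domain and an electrostatic problem; the function name is illustrative) obtains the electric field from the charge density via the FFT:

```python
import numpy as np

def solve_poisson_1d_fft(rho, dx, eps0=1.0):
    """Solve d^2 phi/dx^2 = -rho/eps0 on a periodic 1D grid and return
    E = -d phi/dx on the grid points (the k = 0 mean mode is discarded)."""
    n = rho.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)          # angular wavenumbers
    rho_hat = np.fft.fft(rho)
    phi_hat = np.zeros_like(rho_hat)
    mask = k != 0.0
    phi_hat[mask] = rho_hat[mask] / (eps0 * k[mask] ** 2)
    E_hat = -1j * k * phi_hat                          # E = -d phi/dx in Fourier space
    return np.real(np.fft.ifft(E_hat))
```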

Particle and field weighting

The name "particle-in-cell" originates in the way that plasma macro-quantities (number density, current density, etc.) are assigned to simulation particles (i.e., the particle weighting). Particles can be situated anywhere on the continuous domain, but macro-quantities are calculated only on the mesh points, just as the fields are. To obtain the macro-quantities, one assumes that the particles have a given "shape" determined by the shape function

S(\mathbf{x}-\mathbf{X}),

where \mathbf{x} is the coordinate of the particle and \mathbf{X} the observation point. Perhaps the easiest and most used choice for the shape function is the so-called cloud-in-cell (CIC) scheme, which is a first-order (linear) weighting scheme. Whatever the scheme is, the shape function has to satisfy the following conditions:[10] space isotropy, charge conservation, and increasing accuracy (convergence) for higher-order terms.

The fields obtained from the field solver are determined only on the grid points and can't be used directly in the particle mover to calculate the force acting on particles, but have to be interpolated via the field weighting:

\mathbf{E}(\mathbf{x})=\sum_{i}\mathbf{E}_{i}S(\mathbf{x}_{i}-\mathbf{x}),

where the subscript i labels the grid point. To ensure that the forces acting on particles are self-consistently obtained, the way of calculating macro-quantities from particle positions on the grid points and interpolating fields from grid points to particle positions has to be consistent, too, since they both appear in Maxwell's equations. Above all, the field interpolation scheme should conserve momentum. This can be achieved by choosing the same weighting scheme for particles and fields and by ensuring the appropriate space symmetry (i.e. no self-force and fulfilling the action-reaction law) of the field solver at the same time.[10]
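
A sketch of first-order (CIC) weighting on a periodic 1D grid, using the same linear weights for the charge deposition and for the field gather so that momentum conservation is not spoiled (an illustrative fragment, not taken from any particular code):

```python
import numpy as np

def cic_deposit_and_gather(x_p, q_p, E_grid, dx, n_grid):
    """Deposit particle charge onto the grid and gather the grid field back
    to the particle positions with the same first-order (linear) weights."""
    j = np.floor(x_p / dx).astype(int) % n_grid        # left-hand grid point index
    w = x_p / dx - np.floor(x_p / dx)                  # weight of the right-hand neighbour

    rho = np.zeros(n_grid)                             # charge density on the grid
    np.add.at(rho, j, q_p * (1.0 - w) / dx)
    np.add.at(rho, (j + 1) % n_grid, q_p * w / dx)

    E_p = (1.0 - w) * E_grid[j] + w * E_grid[(j + 1) % n_grid]   # field at particles
    return rho, E_p
```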

Collisions

As the field solver is required to be free of self-forces, inside a cell the field generated by a particle must decrease with decreasing distance from the particle; hence the inter-particle forces inside the cells are underestimated. This can be balanced with the aid of Coulomb collisions between charged particles. Simulating the interaction for every pair of particles in a large system would be computationally too expensive, so several Monte Carlo methods have been developed instead. A widely used method is the binary collision model,[11] in which particles are grouped according to their cell, then these particles are paired randomly, and finally the pairs are collided.
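
The pairing step of such a binary collision model might be sketched as follows (only the random pairing within a single cell is shown; the actual scattering of each pair, e.g. the Takizuka-Abe rotation of the relative velocity, is omitted):

```python
import numpy as np

def pair_particles_in_cell(indices, rng):
    """Randomly pair the particle indices that share one cell.
    If the count is odd, the leftover particle is simply skipped here;
    production codes usually collide the last three particles as overlapping pairs."""
    perm = rng.permutation(indices)
    n_pairs = len(perm) // 2
    return list(zip(perm[:n_pairs], perm[n_pairs:2 * n_pairs]))

# usage: pairs = pair_particles_in_cell(np.flatnonzero(cell_index == c), np.random.default_rng(0))
```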

In a real plasma, many other reactions may play a role, ranging from elastic collisions, such as collisions between charged and neutral particles, through inelastic collisions, such as electron-impact ionization of neutrals, to chemical reactions, each of which requires separate treatment. Most of the collision models handling charged-neutral collisions use either the direct Monte Carlo scheme, in which all particles carry information about their collision probability, or the null-collision scheme,[12] [13] which does not analyze all particles but uses the maximum collision probability for each charged species instead.
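
A sketch of the null-collision selection step for one charged species (the collision frequencies below are constant placeholders chosen for illustration; a real code evaluates energy-dependent cross sections for each selected particle):

```python
import numpy as np

rng = np.random.default_rng(1)
nu_elastic, nu_ionize = 2.0e7, 0.5e7   # assumed collision frequencies [1/s]
nu_max = 3.0e7                         # assumed upper bound on the total frequency
dt = 1.0e-9
n_particles = 100_000

# only a fraction of the particles needs to be tested each step
p_coll = 1.0 - np.exp(-nu_max * dt)
candidates = np.flatnonzero(rng.random(n_particles) < p_coll)

# choose the collision type for each candidate; the remainder are "null" collisions
r = rng.random(candidates.size) * nu_max
elastic = candidates[r < nu_elastic]
ionizing = candidates[(r >= nu_elastic) & (r < nu_elastic + nu_ionize)]
# particles with r >= nu_elastic + nu_ionize undergo no collision at all
```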

Accuracy and stability conditions

As in every simulation method, in PIC the time step and the grid size must be well chosen, so that the time- and length-scale phenomena of interest are properly resolved. In addition, the time step and grid size affect the speed and accuracy of the code.

For an electrostatic plasma simulation using an explicit time integration scheme (e.g. leapfrog, which is most commonly used), two important conditions regarding the grid size \Delta x and the time step \Delta t should be fulfilled in order to ensure the stability of the solution:

\Delta x < 3.4\lambda_{D},

\Delta t \leq 2\,\omega_{pe}^{-1},

which can be derived considering the harmonic oscillations of a one-dimensional unmagnetized plasma. The latter condition is strictly required, but practical considerations related to energy conservation suggest using a much stricter constraint, in which the factor 2 is replaced by a number one order of magnitude smaller. The use of

\Delta t \leq 0.1\,\omega_{pe}^{-1},

is typical.[10] [14] Not surprisingly, the natural time scale in the plasma is given by the inverse plasma frequency \omega_{pe}^{-1} and the length scale by the Debye length \lambda_{D}.

For an explicit electromagnetic plasma simulation, the time step must also satisfy the CFL condition:

\Delta t < \Delta x/c,

where \Delta x \sim \lambda_{D}, and c is the speed of light.
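
For a given set of plasma parameters, these constraints can be checked explicitly, for example as follows (SI units; the numerical values are arbitrary illustrations):

```python
import math

EPS0, QE, ME, C = 8.8542e-12, 1.6022e-19, 9.1094e-31, 2.9979e8   # SI constants

def check_pic_resolution(n_e, T_e_eV, dx, dt):
    """Check the explicit-PIC constraints for electron density n_e [m^-3],
    electron temperature T_e [eV], grid size dx [m] and time step dt [s]."""
    omega_pe = math.sqrt(n_e * QE**2 / (EPS0 * ME))     # electron plasma frequency
    lambda_D = math.sqrt(EPS0 * T_e_eV / (n_e * QE))    # Debye length (T in eV)
    print("dx  < 3.4 * lambda_D :", dx < 3.4 * lambda_D)
    print("dt <= 0.1 / omega_pe :", dt <= 0.1 / omega_pe)
    print("dt  < dx / c (CFL)   :", dt < dx / C)

check_pic_resolution(n_e=1.0e18, T_e_eV=10.0, dx=1.0e-5, dt=1.0e-14)
```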

Applications

Within plasma physics, PIC simulation has been used successfully to study laser-plasma interactions, electron acceleration and ion heating in the auroral ionosphere, magnetohydrodynamics, magnetic reconnection, and ion-temperature-gradient and other microinstabilities in tokamaks, as well as vacuum discharges and dusty plasmas.

Hybrid models may use the PIC method for the kinetic treatment of some species, while other species (that are Maxwellian) are simulated with a fluid model.

PIC simulations have also been applied outside of plasma physics to problems in solid and fluid mechanics.[15] [16]

Electromagnetic particle-in-cell computational applications

Computational application | License | Availability
SHARP[17] | Proprietary |
ALaDyn[18] | GPLv3+ | Open Repo:[19]
EPOCH[20] | GPLv3 | Open Repo:[21]
FBPIC[22] | 3-Clause-BSD-LBNL | Open Repo:[23]
LSP[24] | Proprietary | Available from ATK
MAGIC[25] | Proprietary | Available from ATK
OSIRIS[26] | GNU AGPL | Open Repo:[27]
PICCANTE[28] | GPLv3+ | Open Repo:[29]
PICLas[30] | GPLv3+ | Open Repo:[31]
PICMC[32] | Proprietary | Available from Fraunhofer IST
PIConGPU[33] | GPLv3+ | Open Repo:[34]
SMILEI[35] | CeCILL-B | Open Repo:[36]
iPIC3D[37] | Apache License 2.0 | Open Repo:[38]
The Virtual Laser Plasma Lab (VLPL)[39] | Proprietary | Unknown
Tristan v2[40] | 3-Clause-BSD | Open source,[41] but also has a private version with QED/radiative[42] modules[43]
VizGrain[44] | Proprietary | Commercially available from Esgee Technologies Inc.
VPIC[45] | 3-Clause-BSD | Open Repo:[46]
VSim (Vorpal)[47] | Proprietary | Available from Tech-X Corporation
Warp[48] | 3-Clause-BSD-LBNL | Open Repo:[49]
WarpX[50] | 3-Clause-BSD-LBNL | Open Repo:[51]
ZPIC[52] | AGPLv3+ | Open Repo:[53]
ultraPICA | Proprietary | Commercially available from Plasma Taiwan Innovation Corporation.

Notes and References

  1. F.H. Harlow. Francis H. Harlow. A Machine Calculation Method for Hydrodynamic Problems. Los Alamos Scientific Laboratory report LAMS-1956. 1955.
  2. Dawson, J.M.. John M. Dawson. Particle simulation of plasmas. Reviews of Modern Physics. 1983. 55. 2. 403–447. 10.1103/RevModPhys.55.403. 1983RvMP...55..403D.
  3. Hideo Okuda. Nonphysical noises and instabilities in plasma simulation due to a spatial grid. Journal of Computational Physics. 1972. 10. 3. 475–486. 10.1016/0021-9991(72)90048-4. 1972JCoPh..10..475O .
  4. Qin, H. . Liu, J. . Xiao, J. . Canonical symplectic particle-in-cell method for long-term large-scale simulations of the Vlasov-Maxwell system. Nuclear Fusion. 2016. 56. 1. 014001. 10.1088/0029-5515/56/1/014001. 2016NucFu..56a4001Q. etal. 1503.08334 . 29190330 .
  5. Xiao, J. . Qin, H. . Liu, J. . Explicit high-order non-canonical symplectic particle-in-cell algorithms for Vlasov-Maxwell systems. Physics of Plasmas. 2015. 22. 11. 12504. 10.1063/1.4935904. 2015PhPl...22k2504X. etal. 1510.06972 . 12893515 .
  6. Book: Birdsall . Charles K. . A. Bruce Langdon . Plasma Physics via Computer Simulation . McGraw-Hill . 1985 . 0-07-005371-5.
  7. Relativistic plasma simulation-optimization of a hybrid code. Boris. J.P.. November 1970. Proceedings of the 4th Conference on Numerical Simulation of Plasmas. Naval Res. Lab., Washington, D.C.. 3 - 67.
  8. Qin, H. . etal . Why is Boris algorithm so good? . Physics of Plasmas. 2013. 20. 5. 084503 . 10.1063/1.4818428 . 2013PhPl...20h4503Q .
  9. Higuera, Adam V.. John R. Cary. Structure-preserving second-order integration of relativistic charged particle trajectories in electromagnetic fields. Physics of Plasmas. 24. 5. 2017. 052104. 10.1016/j.jcp.2003.11.004. 2004JCoPh.196..448N.
  10. Book: Tskhakaya , David . Fehske. Holger. Schneider. Ralf. Weiße. Alexander. Computational Many-Particle Physics. 739. Lecture Notes in Physics 739. 2008. Springer, Berlin Heidelberg. 978-3-540-74685-0. 10.1007/978-3-540-74686-7. Chapter 6: The Particle-in-Cell Method. 2024-05-03. https://cds.cern.ch/record/1105877.
  11. Takizuka . Tomonor . Abe . Hirotada . A binary collision model for plasma simulation with a particle code . Journal of Computational Physics . 25 . 3 . 1977 . 205 - 219 . 10.1016/0021-9991(77)90099-7. 1977JCoPh..25..205T .
  12. Birdsall, C.K. . Particle-in-cell charged-particle simulations, plus Monte Carlo collisions with neutral atoms, PIC-MCC . IEEE Transactions on Plasma Science . 19 . 2 . 1991 . 65 - 85 . 10.1109/27.106800 . 0093-3813. 1991ITPS...19...65B .
  13. Vahedi . V. . Surendra . M. . A Monte Carlo collision model for the particle-in-cell method: applications to argon and oxygen discharges . Computer Physics Communications . 87 . 1 - 2 . 1995 . 179 - 198 . 10.1016/0010-4655(94)00171-W . 0010-4655. 1995CoPhC..87..179V .
  14. Tskhakaya . D. . Matyash . K. . Schneider . R. . Taccogna . F. . The Particle-In-Cell Method . Contributions to Plasma Physics . 47 . 8–9 . 2007 . 563 - 594 . 10.1002/ctpp.200710072. 2007CoPP...47..563T . 221030792 .
  15. Book: Liu . G.R. . M.B. Liu . Smoothed Particle Hydrodynamics: A Meshfree Particle Method . World Scientific . 2003 . 981-238-456-1.
  16. Byrne, F. N. . Ellison, M. A. . Reid, J. H. . The particle-in-cell computing method for fluid dynamics . Methods Comput. Phys. . 3 . 1964 . 3 . 319 - 343 . 10.1007/BF00230516 . 2024-05-03 . 1964SSRv....3..319B . 121512234 .
  17. SHARP: A Spatially Higher-order, Relativistic Particle-in-Cell Code. Mohamad. Shalaby. Avery E.. Broderick. Philip. Chang. Christoph. Pfrommer. Astrid. Lamberts. Ewald. Puchwein. 23 May 2017. The Astrophysical Journal. 841. 1. 52. 10.3847/1538-4357/aa6d13. 1702.04732. 2017ApJ...841...52S. 119073489 . free .
  18. Web site: ALaDyn. ALaDyn. 1 December 2017.
  19. Web site: ALaDyn: A High-Accuracy PIC Code for the Maxwell-Vlasov Equations. 18 November 2017. 1 December 2017. GitHub.com.
  20. Web site: EPOCH. epochpic. 14 March 2024.
  21. Web site: EPOCH. GitHub.com. 14 March 2024.
  22. Web site: FBPIC documentation — FBPIC 0.6.0 documentation. fbpic.github.io. 1 December 2017.
  23. Web site: fbpic: Spectral, quasi-3D Particle-In-Cell code, for CPU and GPU. 8 November 2017. 1 December 2017. GitHub.com.
  24. Web site: Orbital ATK. Mrcwdc.com. 1 December 2017.
  25. Web site: Orbital ATK. Mrcwdc.com. 1 December 2017.
  26. Web site: OSIRIS open-source - OSIRIS. osiris-code.github.io. 13 December 2023.
  27. Web site: osiris-code/osiris: OSIRIS Particle-In-Cell code. 13 December 2023. GitHub.com.
  28. Web site: Piccante. Aladyn.github.io. 1 December 2017.
  29. Web site: piccante: a spicy massively parallel fully-relativistic electromagnetic 3D particle-in-cell code. 14 November 2017. 1 December 2017. GitHub.com.
  30. Web site: PICLas.
  31. Web site: piclas-framework/piclas. .
  32. Web site: Fraunhofer IST Team Simulation. ist.fraunhofer.de. 7 August 2024.
  33. Web site: PIConGPU - Particle-in-Cell Simulations for the Exascale Era - Helmholtz-Zentrum Dresden-Rossendorf, HZDR. picongpu.hzdr.de. 1 December 2017.
  34. Web site: ComputationalRadiationPhysics / PIConGPU — GitHub. 28 November 2017. 1 December 2017. GitHub.com.
  35. Web site: Smilei — A Particle-In-Cell code for plasma simulation. Maisondelasimulation.fr. 1 December 2017.
  36. Web site: SmileiPIC / Smilei — GitHub. 29 October 2019. 29 October 2019. GitHub.com.
  37. Multi-scale simulations of plasma with iPIC3D. Stefano. Markidis. Giovanni. Lapenta. Rizwan-uddin. 17 Oct 2009. Mathematics and Computers in Simulation. 80. 7. 1509. 10.1016/j.matcom.2009.08.038.
  38. Web site: iPic3D — GitHub. 31 January 2020. 31 January 2020. GitHub.com.
  39. Web site: Relativistic Laser Plasma. Matthias. Dreher. 2.mpq.mpg.de. 1 December 2017.
  40. Web site: Tristan v2 wiki Tristan v2 . 2022-12-15 . princetonuniversity.github.io.
  41. Web site: Tristan v2 public github page . .
  42. Web site: QED Module Tristan v2 . 2022-12-15 . princetonuniversity.github.io.
  43. Web site: Tristan v2: Citation.md . .
  44. Web site: VizGrain. esgeetech.com. 1 December 2017.
  45. Web site: VPIC. github.com. 1 July 2019.
  46. Web site: LANL / VPIC — GitHub. github.com. 29 October 2019.
  47. Web site: Tech-X - VSim. Txcorp.com. 1 December 2017.
  48. Web site: Warp. warp.lbl.gov. 1 December 2017.
  49. Web site: berkeleylab / Warp — Bitbucket. bitbucket.org. 1 December 2017.
  50. Web site: WarpX Documentation. ecp-warpx.github.io. 29 October 2019.
  51. Web site: ECP-WarpX / WarpX — GitHub. GitHub.org. 29 October 2019.
  52. Web site: Educational Particle-In-Cell code suite. picksc.idre.ucla.edu. 29 October 2019.
  53. Web site: ricardo-fonseca / ZPIC — GitHub. GitHub.org. 29 October 2019.