GROMACS explained

GROMACS
Developer: University of Groningen, Royal Institute of Technology, Uppsala University[1]
Latest release version: 2024.2
Latest release date: [2]
Programming languages: C++, C, CUDA, OpenCL, SYCL
Operating systems: Linux, macOS, Windows, other Unix varieties
Platform: Many
Language: English
Genre: Molecular dynamics simulation
License: LGPL (versions >= 4.6)[3], GPL (versions < 4.6)[4]

GROMACS is a molecular dynamics package mainly designed for simulations of proteins, lipids, and nucleic acids. It was originally developed in the Biophysical Chemistry department of the University of Groningen, and is now maintained by contributors in universities and research centers worldwide.[5] [6] [7] GROMACS is one of the fastest and most widely used molecular dynamics packages available,[8] [9] and can run on central processing units (CPUs) and graphics processing units (GPUs).[10] It is free, open-source software released under the GNU Lesser General Public License (LGPL)[3] (GPL prior to Version 4.6).

History

The GROMACS project began in 1991 at the Department of Biophysical Chemistry of the University of Groningen, Netherlands, where it was developed until 2000. The name originally stood for GROningen MAchine for Chemical Simulations, although GROMACS is no longer treated as an acronym, since little active development has taken place in Groningen in recent decades. The original goal was to construct a dedicated parallel computer system for molecular simulations, based on a ring architecture (since superseded by modern hardware designs). The molecular dynamics-specific routines were rewritten in the programming language C from the Fortran 77-based program GROMOS, which had been developed in the same group.

Since 2001, GROMACS has been developed by the GROMACS development teams at the Royal Institute of Technology and Uppsala University, Sweden.

Features

GROMACS is operated via the command-line interface and uses files for input and output. It provides calculation progress and estimated-completion-time (ETA) feedback, a trajectory viewer, and an extensive library for trajectory analysis.[3] In addition, support for different force fields makes GROMACS very flexible. It can be executed in parallel using the Message Passing Interface (MPI) or threads. It contains a tool to convert molecular coordinates from Protein Data Bank (PDB) files into the formats it uses internally. Once a configuration file for the simulation of several molecules (possibly including solvent) has been created, the simulation run (which can be time-consuming) produces a trajectory file describing the movements of the atoms over time. That file can then be analyzed or visualized with several supplied tools.[11]
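The workflow described above can be sketched as a shell session using the standard `gmx` tools (file names such as `protein.pdb` and `md.mdp` are placeholders; exact options vary by version and system):

```shell
# Convert a PDB structure into GROMACS coordinate and topology files;
# the force field and water model are chosen interactively or via flags.
gmx pdb2gmx -f protein.pdb -o processed.gro -p topol.top -water spce

# Combine coordinates, topology, and run parameters (md.mdp) into a
# portable binary run input file (.tpr).
gmx grompp -f md.mdp -c processed.gro -p topol.top -o md.tpr

# Run the simulation; this produces a trajectory (md.xtc/md.trr),
# an energy file (md.edr), and a log (md.log).
gmx mdrun -deffnm md

# Example post-processing: recenter the trajectory and make
# molecules whole across periodic boundaries.
gmx trjconv -s md.tpr -f md.xtc -o md_centered.xtc -pbc mol -center
```

A real study would insert energy minimization and equilibration runs between preparation and production, each following the same grompp/mdrun pattern with a different parameter file.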

GROMACS has had GPU offload support since Version 4.5, originally limited to Nvidia GPUs. GPU support has been expanded and improved over the years,[12] and, as of Version 2023, GROMACS provides CUDA, OpenCL, and SYCL backends for running on AMD, Apple, Intel, and Nvidia GPUs, often with substantial acceleration compared to CPU-only runs.[13]
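As an illustrative sketch, GPU offload is controlled per interaction class through `gmx mdrun` flags (this assumes a GROMACS build with one of the GPU backends enabled; the file prefix `md` is a placeholder):

```shell
# Offload short-range nonbonded forces, PME electrostatics, and the
# coordinate update to the GPU; fall back to CPU if unsupported.
gmx mdrun -deffnm md -nb gpu -pme gpu -update gpu
```

By default GROMACS auto-detects available GPUs and offloads what it can, so these flags mainly serve to make the assignment explicit or to force an error when offload is unavailable.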

Easter eggs

GROMACS's source code contains approximately 400 alternative backronyms for GROMACS, written as jokes among the developers and biochemistry researchers. These include "Gromacs Runs On Most of All Computer Systems", "Gromacs Runs One Microsecond At Cannonball Speeds", "Good ROcking Metal Altar for Chronical Sinner", "Working on GRowing Old MAkes el Chrono Sweat", and "Great Red Owns Many ACres of Sand". One is selected at random for possible display in GROMACS's output stream. In one instance, such a backronym, "Giving Russians Opium May Alter Current Situation", caused offense.[14]

Applications

Under a non-GPL license, GROMACS is widely used in the Folding@home distributed computing project for simulations of protein folding, where it is the base code for the project's largest and most regularly used series of calculation cores.[15] [16] EvoGrid, a distributed computing project to evolve artificial life, also employs GROMACS.[17]

Notes and References

  1. "The GROMACS development team". gromacs.org. Archived from the original at https://web.archive.org/web/20200226063101/http://www.gromacs.org/About_Gromacs/People on 2020-02-26. Retrieved 2012-06-27.
  2. "Downloads — GROMACS 2024.2 documentation". gromacs.org. Retrieved 2024-05-24.
  3. "About GROMACS". gromacs.org. 17 May 2021. Retrieved 2024-05-24.
  4. "About Gromacs". gromacs.org. 16 August 2010. Archived from the original at https://web.archive.org/web/20201127061817/http://www.gromacs.org/About_Gromacs on 2020-11-27. Retrieved 2012-06-26.
  5. "People — Gromacs". gromacs.org. 14 March 2012. Archived from the original at https://web.archive.org/web/20200226063101/http://www.gromacs.org/About_Gromacs/People on 26 February 2020. Retrieved 26 June 2012.
  6. Van Der Spoel D, Lindahl E, Hess B, Groenhof G, Mark AE, Berendsen HJ (2005). "GROMACS: fast, flexible, and free". J Comput Chem 26(16): 1701–18. doi:10.1002/jcc.20291. PMID 16211538.
  7. Hess B, Kutzner C, Van Der Spoel D, Lindahl E (2008). "GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation". J Chem Theory Comput 4(2): 435–447. doi:10.1021/ct700301q. hdl:11858/00-001M-0000-0012-DDBF-0.
  8. Kutzner C, Van Der Spoel D, Fechner M, Lindahl E, Schmitt UW, De Groot BL, Grubmüller H (2007). "Speeding up parallel GROMACS on high-latency networks". Journal of Computational Chemistry 28(12): 2075–2084. doi:10.1002/jcc.20703. PMID 17405124. hdl:11858/00-001M-0000-0012-E29A-0.
  9. Hess B, Kutzner C, van der Spoel D, Lindahl E (2008). "GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation". Journal of Chemical Theory and Computation 4(3): 435–447. doi:10.1021/ct700301q. hdl:11858/00-001M-0000-0012-DDBF-0.
  10. "Installation guide". gromacs.org. 10 May 2024. Retrieved 24 May 2024.
  11. "Flow Chart — GROMACS 2024.2 documentation". gromacs.org. 10 May 2024. Retrieved 24 May 2024.
  12. Páll S, Zhmurov A, Bauer P, Abraham M, Lundborg M, Gray A, Hess B, Lindahl E (2020). "Heterogeneous parallelization and acceleration of molecular dynamics simulations in GROMACS". J Chem Phys 153(13): 134110. doi:10.1063/5.0018516. PMID 33032406. arXiv:2006.09167.
  13. "Heterogeneous parallelization and GPU acceleration — GROMACS webpage". gromacs.org. 10 May 2024. Retrieved 24 May 2024.
  14. "Re: Working on Giving Russians Opium May Alter Current Situation". Folding@home forum. 17 January 2010. Retrieved 2012-06-26.
  15. "Folding@home Open Source FAQ". Pande lab, Folding@home. Archived from the original at https://web.archive.org/web/20120717063257/http://folding.stanford.edu/English/FAQ-OpenSource on 17 July 2012. Retrieved 11 June 2012.
  16. Beberg A, Ensign D, Jayachandran G, Khaliq S, Pande V (2009). "Folding@home: Lessons from eight years of volunteer distributed computing". 2009 IEEE International Symposium on Parallel & Distributed Processing, pp. 1–8. doi:10.1109/IPDPS.2009.5160922. ISBN 978-1-4244-3751-1.
  17. Markoff J (29 September 2009). "Wanted: Home Computers to Join in Research on Artificial Life". The New York Times. Retrieved 26 June 2012.