GPFS explained

GPFS
Developer: IBM
Full name: IBM Spectrum Scale
Introduced on: AIX
Max file size: 8 EB
Max number of files: 2^64 per file system
Max volume size: 8 YB
File system permissions: POSIX
Encryption: yes
Operating systems: AIX, Linux, Windows Server

GPFS (General Parallel File System, brand name IBM Storage Scale and previously IBM Spectrum Scale)[1] is high-performance clustered file system software developed by IBM. It can be deployed in shared-disk or shared-nothing distributed parallel modes, or a combination of these. It is used by many of the world's largest commercial companies, as well as some of the supercomputers on the Top 500 List.[2] For example, it is the filesystem of Summit[3] at Oak Ridge National Laboratory, which was ranked the fastest supercomputer in the world in the November 2019 Top 500 list.[4] Summit is a 200-petaflops system composed of more than 9,000 POWER9 processors and 27,000 NVIDIA Volta GPUs. Its storage filesystem is called Alpine.[5]

Like typical cluster filesystems, GPFS provides concurrent high-speed file access to applications executing on multiple nodes of a cluster. It can be used with AIX clusters, Linux clusters,[6] on Microsoft Windows Server, or a heterogeneous cluster of AIX, Linux and Windows nodes running on x86, Power or IBM Z processor architectures.

History

GPFS began as the Tiger Shark file system, a research project at IBM's Almaden Research Center as early as 1993. Tiger Shark was initially designed to support high throughput multimedia applications. This design turned out to be well suited to scientific computing.[7]

Another ancestor is IBM's Vesta filesystem, developed as a research project at IBM's Thomas J. Watson Research Center between 1992 and 1995.[8] Vesta introduced the concept of file partitioning to accommodate the needs of parallel applications that run on high-performance multicomputers with parallel I/O subsystems. With partitioning, a file is not a sequence of bytes, but rather multiple disjoint sequences that may be accessed in parallel. The partitioning is such that it abstracts away the number and type of I/O nodes hosting the filesystem, and it allows a variety of logically partitioned views of files, regardless of the physical distribution of data within the I/O nodes. The disjoint sequences are arranged to correspond to individual processes of a parallel application, allowing for improved scalability.[9] [10]

Vesta was commercialized as the PIOFS filesystem around 1994,[11] and was succeeded by GPFS around 1998.[12] [13] The main difference between the older and newer filesystems was that GPFS replaced the specialized interface offered by Vesta/PIOFS with the standard Unix API: all the features to support high performance parallel I/O were hidden from users and implemented under the hood.[7] [13] GPFS also shared many components with the related products IBM Multi-Media Server and IBM Video Charger, which is why many GPFS utilities start with the prefix mm—multi-media.[14]

In 2010, IBM previewed a version of GPFS that included a capability known as GPFS-SNC, where SNC stands for Shared Nothing Cluster. This was officially released with GPFS 3.5 in December 2012, and is now known as FPO[15] (File Placement Optimizer).

Architecture

GPFS is a clustered file system that breaks a file into blocks of a configured size, less than 1 megabyte each, which are distributed across multiple cluster nodes.
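
As an illustration of the idea only (not GPFS's actual allocation algorithm), the sketch below assumes a hypothetical 256 KiB block size, four storage nodes and simple round-robin placement, and shows how a byte offset maps to a block and to the node holding it:

    # Illustrative sketch: round-robin striping of a file across cluster nodes.
    # The 256 KiB block size and node names are assumptions; GPFS's block size
    # is configurable and its real allocation logic is more sophisticated.
    BLOCK_SIZE = 256 * 1024
    NODES = ["nsd01", "nsd02", "nsd03", "nsd04"]   # hypothetical storage nodes

    def block_location(file_offset: int) -> tuple[int, str]:
        """Map a byte offset to (block index, node holding that block)."""
        block_index = file_offset // BLOCK_SIZE
        return block_index, NODES[block_index % len(NODES)]

    # A 1 MiB read touches four blocks spread over all four nodes,
    # so a client can fetch them in parallel.
    for offset in range(0, 1024 * 1024, BLOCK_SIZE):
        print(offset, block_location(offset))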

The system stores data on standard block storage volumes, but includes an internal RAID layer that can virtualize those volumes for redundancy and parallel access, much like a RAID block storage system. It can also replicate data across volumes at the higher, file level.
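
A minimal sketch of file-level replication across failure groups, assuming a replication factor of 2 and hypothetical disk and group names; the placement rule is illustrative, not GPFS's actual algorithm:

    # Place each block's replicas on disks in distinct failure groups, so the
    # loss of one group leaves a full copy. Names and layout are assumptions.
    FAILURE_GROUPS = {
        "fg1": ["disk01", "disk02"],
        "fg2": ["disk03", "disk04"],
    }
    REPLICAS = 2

    def place_block(block_index: int) -> list[str]:
        """Choose one disk from each of REPLICAS distinct failure groups."""
        groups = list(FAILURE_GROUPS)
        chosen = []
        for r in range(REPLICAS):
            group = groups[(block_index + r) % len(groups)]
            disks = FAILURE_GROUPS[group]
            chosen.append(disks[block_index % len(disks)])
        return chosen

    print([place_block(i) for i in range(4)])
    # [['disk01', 'disk03'], ['disk04', 'disk02'], ['disk01', 'disk03'], ['disk04', 'disk02']]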

Features of the architecture include high availability, the ability to be used in a heterogeneous cluster, disaster recovery, security, DMAPI, HSM and ILM.

Compared to Hadoop Distributed File System (HDFS)

Hadoop's HDFS filesystem is designed to store similar or greater quantities of data on commodity hardware, that is, in datacenters without RAID disks or a storage area network (SAN).

Information lifecycle management

Storage pools allow for the grouping of disks within a file system. An administrator can create tiers of storage by grouping disks based on performance, locality or reliability characteristics. For example, one pool could be high-performance Fibre Channel disks and another more economical SATA storage.

A fileset is a sub-tree of the file system namespace and provides a way to partition the namespace into smaller, more manageable units. Filesets provide an administrative boundary that can be used to set quotas and be specified in a policy to control initial data placement or data migration. Data in a single fileset can reside in one or more storage pools. Where the file data resides and how it is migrated is based on a set of rules in a user-defined policy.

There are two types of user-defined policies: file placement and file management. File placement policies direct file data to the appropriate storage pool as files are created. File placement rules are selected by attributes such as the file name, the user name or the fileset. File management policies allow a file's data to be moved or replicated, or files to be deleted. File management policies can be used to move data from one pool to another without changing the file's location in the directory structure. File management policies are determined by file attributes such as last access time, path name or size of the file.
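
The sketch below models the two policy types in ordinary Python, with hypothetical pool names, filesets and thresholds; GPFS itself expresses such rules in its own SQL-like policy language rather than in code like this:

    import time
    from dataclasses import dataclass

    @dataclass
    class FileMeta:
        name: str
        fileset: str
        size: int            # bytes
        last_access: float   # Unix timestamp

    # Placement rules, evaluated at file creation; first match wins.
    PLACEMENT_RULES = [
        (lambda f: f.fileset == "scratch", "sata_pool"),
        (lambda f: f.name.endswith(".db"), "fc_pool"),
        (lambda f: True, "fc_pool"),               # default pool
    ]

    # Management rule: migrate files not accessed for 30 days to the SATA pool.
    STALE_SECONDS = 30 * 24 * 3600

    def place(f: FileMeta) -> str:
        return next(pool for match, pool in PLACEMENT_RULES if match(f))

    def should_migrate(f: FileMeta, now: float) -> bool:
        return now - f.last_access > STALE_SECONDS

    f = FileMeta("results.db", "projects", 4 << 20, time.time() - 40 * 86400)
    print(place(f))                        # fc_pool
    print(should_migrate(f, time.time()))  # True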

The policy processing engine is scalable and can be run on many nodes at once. This allows management policies to be applied to a single file system with billions of files and to complete in a few hours.

Notes and References

  1. "GPFS (General Parallel File System)". IBM. Retrieved 2020-04-07.
  2. Schmuck, Frank; Haskin, Roger (January 2002). "GPFS: A Shared-Disk File System for Large Computing Clusters". Proceedings of the FAST'02 Conference on File and Storage Technologies. Monterey, California, US: USENIX. pp. 231–244. ISBN 1-880446-03-0. Retrieved 2008-01-18.
  3. "Summit compute systems". Oak Ridge National Laboratory. Retrieved 2020-04-07.
  4. "November 2019 top500 list". top500.org. Archived from the original on 2020-01-02: https://web.archive.org/web/20200102235204/https://www.top500.org/list/2019/11/. Retrieved 2020-04-07.
  5. "Summit FAQ". Oak Ridge National Laboratory. Retrieved 2020-04-07.
  6. Wang, Teng; Vasko, Kevin; Liu, Zhuo; Chen, Hui; Yu, Weikuan (November 2014). "BPAR: A Bundle-Based Parallel Aggregation Framework for Decoupled I/O Execution". 2014 International Workshop on Data Intensive Scalable Computing Systems. IEEE. pp. 25–32. doi:10.1109/DISCS.2014.6. ISBN 978-1-4673-6750-9.
  7. May, John M. (2000). Parallel I/O for High Performance Computing. Morgan Kaufmann. p. 92. ISBN 978-1-55860-664-7. Retrieved 2008-06-18.
  8. Corbett, Peter F.; Feitelson, Dror G.; Prost, J.-P.; Baylor, S. J. (1993). "Parallel access to files in the Vesta file system". Proceedings of the 1993 ACM/IEEE Conference on Supercomputing (Supercomputing '93). Portland, Oregon, United States: ACM/IEEE. pp. 472–481. doi:10.1145/169627.169786. ISBN 978-0818643408.
  9. Corbett, Peter F.; Feitelson, Dror G. (August 1996). "The Vesta parallel file system". ACM Transactions on Computer Systems. 14 (3): 225–264. doi:10.1145/233557.233558. Archived from the original on 2012-02-12: https://web.archive.org/web/20120212075707/http://www.cs.umd.edu/class/fall2002/cmsc818s/Readings/vesta-tocs96.pdf. Retrieved 2008-06-18.
  10. Wang, Teng; Vasko, Kevin; Liu, Zhuo; Chen, Hui; Yu, Weikuan (2016). "Enhance parallel input/output with cross-bundle aggregation". The International Journal of High Performance Computing Applications. 30 (2): 241–256. doi:10.1177/1094342015618017.
  11. Corbett, P. F.; Feitelson, D. G.; Prost, J.-P.; Almasi, G. S.; Baylor, S. J.; Bolmarcich, A. S.; Hsu, Y.; Satran, J.; Snir, M.; Colao, R.; Herr, B. D.; Kavaky, J.; Morgan, T. R.; Zlotek, A. (1995). "Parallel file systems for the IBM SP computers". IBM Systems Journal. 34 (2): 222–248. CiteSeerX 10.1.1.381.2988. doi:10.1147/sj.342.0222. Archived from the original on 2004-04-19: https://web.archive.org/web/20040419115328/http://www.research.ibm.com/journal/sj/342/corbett.pdf. Retrieved 2008-06-18.
  12. Barris, Marcelo; Jones, Terry; Kinnane, Scott; Landzettel, Mathis; Al-Safran, Safran; Stevens, Jerry; Stone, Christopher; Thomas, Chris; Troppens, Ulf (September 1999). Sizing and Tuning GPFS. IBM Redbooks, International Technical Support Organization. See page 1 ("GPFS is the successor to the PIOFS file system"). Archived from the original on 2010-12-14: https://web.archive.org/web/20101214215324/https://www.redbooks.ibm.com/redbooks/pdfs/sg245610.pdf. Retrieved 2022-12-06.
  13. Snir, Marc (June 2001). "Scalable parallel systems: Contributions 1990-2000". HPC seminar, Computer Architecture Department, Universitat Politècnica de Catalunya. Retrieved 2008-06-18.
  14. General Parallel File System Administration and Programming Reference Version 3.1. IBM. April 2006.
  15. "IBM GPFS FPO (DCS03038-USEN-00)". IBM Corporation. 2013. Retrieved 2012-08-12.