Comparison of cluster software

The following tables compare general and technical information for notable computer cluster software. This software can be broadly divided into four categories: job schedulers, node management, node installation, and integrated stacks (all of the above).
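
For the job scheduler category, the basic unit of work is a batch job: a command or script submitted together with resource requests, which the scheduler queues and eventually runs on allocated nodes. The Python sketch below illustrates that interaction for one concrete scheduler; it assumes a Slurm installation with the sbatch command on the PATH, and the job name, resource values and wrapped command are placeholders rather than values taken from the tables.

    import subprocess

    # Submit a trivial batch job to a Slurm-managed cluster.
    # Assumes `sbatch` is installed and on PATH; all values are illustrative.
    result = subprocess.run(
        [
            "sbatch",
            "--job-name=demo",      # arbitrary job name
            "--nodes=1",            # request a single node
            "--ntasks=4",           # four tasks (e.g. MPI ranks) on that node
            "--time=00:10:00",      # ten-minute wall-clock limit
            "--wrap=hostname",      # wrap a one-line command as the job script
        ],
        capture_output=True,
        text=True,
        check=True,
    )

    # sbatch prints something like "Submitted batch job 12345" on success.
    print(result.stdout.strip())

Other schedulers listed below expose the same pattern through their own submission commands, for example qsub for PBS Pro and Torque, or bsub for IBM Spectrum LSF.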

General information

| Software | Maintainer | Category | Development status | Latest release | Architecture | High-Performance / High-Throughput Computing | License | Platforms supported | Cost | Paid support available |
|---|---|---|---|---|---|---|---|---|---|---|
| Amoeba | | | No active development | | | | MIT | | | |
| Base One Foundation Component Library | | | | | | | Proprietary | | | |
| DIET | INRIA, SysFera, Open Source | All in one | | | GridRPC, SPMD, hierarchical and distributed architecture, CORBA | HTC/HPC | CeCILL | Unix-like, Mac OS X, AIX | Free | |
| DxEnterprise | DH2i | Nodes management | Actively developed | v23.0 | | | Proprietary | Windows 2012R2/2016/2019/2022 and 8+, RHEL 7/8/9, CentOS 7, Ubuntu 16.04/18.04/20.04/22.04, SLES 15.4 | Cost | Yes |
| Enduro/X | Mavimax, Ltd. | Job/Data Scheduler | Actively developed | | SOA Grid | HTC/HPC/HA | GPLv2 or Commercial | Linux, FreeBSD, MacOS, Solaris, AIX | Free / Cost | Yes |
| Ganglia | | Monitoring | Actively developed | | | | BSD | Unix, Linux, Microsoft Windows NT/XP/2000/2003/2008, FreeBSD, NetBSD, OpenBSD, DragonflyBSD, Mac OS X, Solaris, AIX, IRIX, Tru64, HPUX | Free | |
| Grid MP | Univa (formerly United Devices) | Job Scheduler | No active development | | Distributed master/worker | HTC/HPC | Proprietary | Windows, Linux, Mac OS X, Solaris | Cost | |
| | Apache | | Actively developed | | | | Apache License 2.0 | Linux | Free | Yes |
| Moab Cluster Suite | Adaptive Computing | Job Scheduler | Actively developed | | | HPC | Proprietary | Linux, Mac OS X, Windows, AIX, OSF/Tru-64, Solaris, HP-UX, IRIX, FreeBSD & other UNIX platforms | Cost | Yes |
| NetworkComputer | Runtime Design Automation | | Actively developed | | | HTC/HPC | Proprietary | Unix-like, Windows | Cost | |
| OCS | | | | | | | | | | |
| OpenHPC | OpenHPC project | All in one | Actively developed | v2.6.1 | | HPC | | Linux (CentOS / openSUSE Leap) | Free | No |
| OpenLava | None; formerly Teraproc | Job Scheduler | Halted by injunction | | Master/worker, multiple admin/submit nodes | HTC/HPC | Illegal, as it is a pirated version of IBM Spectrum LSF | Linux | Not legally available | No |
| PBS Pro | Altair | Job Scheduler | Actively developed | | Master/worker distributed with fail-over | HPC/HTC | AGPL or Proprietary | Linux, Windows | Free or Cost | Yes |
| Proxmox Virtual Environment | Proxmox Server Solutions | Complete | Actively developed | | | | Open-source AGPLv3 | Linux, Windows, other operating systems are known to work and are community supported | Free | Yes |
| Rocks Cluster Distribution | Open Source/NSF grant | All in one | Actively developed | (Manzanita) | | HTC/HPC | Open source | CentOS | Free | |
| Popular Power | | | | | | | | | | |
| ProActive | INRIA, ActiveEon, Open Source | All in one | Actively developed | | Master/Worker, SPMD, Distributed Component Model, Skeletons | HTC/HPC | GPL | Unix-like, Windows, Mac OS X | Free | |
| RPyC | Tomer Filiba | | Actively developed | | | | MIT License | *nix/Windows | Free | |
| SLURM | SchedMD | Job Scheduler | Actively developed | v23.11.3 | | HPC/HTC | GPL | Linux/*nix | Free | Yes |
| IBM Spectrum LSF | IBM | Job Scheduler | Actively developed | | Master node with failover/exec clients, multiple admin/submit nodes, Suite add-ons | HPC/HTC | Proprietary | Unix, Linux, Windows | Cost, with an academic model (Academic, Express, Standard, Advanced and Suites) | Yes |
| | Altair | Job Scheduler | Active; development moved to Altair Grid Engine | | Master node/exec clients, multiple admin/submit nodes | HPC/HTC | Proprietary | *nix/Windows | Cost | |
| Some Grid Engine / Son of Grid Engine / Sun Grid Engine | daimh | Job Scheduler | Actively developed (stable/maintenance) | | Master node/exec clients, multiple admin/submit nodes | HPC/HTC | Open-source SISSL | *nix | Free | No |
| SynfiniWay | Fujitsu | | Actively developed | | | HPC/HTC? | | Unix, Linux, Windows | Cost | |
| | Techila Technologies Ltd. | All in one | Actively developed | | Master/worker distributed | HTC | Proprietary | Linux, Windows | Cost | Yes |
| Torque | Adaptive Computing | Job Scheduler | Actively developed | | | | Proprietary | Linux, *nix | Cost | Yes |
| | Univa | All in one | Functionality and development moved to UniCloud (see above) | | | | | | Free | Yes |
| UNICORE | | | | | | | | | | |
| Xgrid | Apple Computer | | | | | | | | | |
| Warewulf | | Provisioning and cluster management | Actively developed | v4.4.1 | | HPC | Open Source | Linux | Free | |
| xCAT | | Provisioning and cluster management | Actively developed | v2.16.5 | | HPC | Eclipse Public License | Linux | Free | |
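
Not every entry above is a batch scheduler. RPyC, for example, is a Python remote-procedure-call library, so using a cluster with it means connecting to services running on the nodes rather than queueing jobs. The sketch below shows its classic mode and is only illustrative: it assumes an RPyC classic server (started with the rpyc_classic.py script that ships with the package) is already listening on a node, and the hostname node01 and default port 18812 are placeholders.

    import rpyc

    # Connect to an RPyC "classic" server assumed to be running on a worker
    # node (e.g. started there with the bundled rpyc_classic.py script).
    # The hostname and port below are placeholders for this sketch.
    conn = rpyc.classic.connect("node01", port=18812)

    # Classic mode exposes the remote interpreter's modules, so this
    # os.uname() call executes on node01, not on the local machine.
    remote_os = conn.modules.os
    print(remote_os.uname())

    conn.close()

Classic mode trades security for convenience, since the server exposes the whole remote interpreter; deployments that need tighter control usually define restricted rpyc.Service classes instead.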

Table explanation

Technical information

| Software | Implementation language | Authentication | Encryption | Integrity | Global File System | Global File System + Kerberos | Heterogeneous / Homogeneous exec node | Jobs priority | Group priority | Queue type | SMP aware | Max exec node | Max job submitted | CPU scavenging | Parallel job | Job checkpointing | Python interface |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Enduro/X | C/C++ | OS authentication | GPG, AES-128, SHA1 | None | Any cluster POSIX FS (gfs, gpfs, ocfs, etc.) | Any cluster POSIX FS (gfs, gpfs, ocfs, etc.) | Heterogeneous | OS nice level | OS nice level | SOA queues, FIFO | Yes | OS limits | OS limits | Yes | Yes | No | No |
| HTCondor | C++ | GSI, SSL, Kerberos, Password, File System, Remote File System, Windows, Claim To Be, Anonymous | None, Triple DES, Blowfish | None, MD5 | None, NFS, AFS | Not official, hack with ACL and NFSv4 | Heterogeneous | Yes | Yes | Fair-share with some programmability | Basic (hard separation into different nodes) | tested ~10,000? | tested ~100,000? | Yes | MPI, OpenMP, PVM | Yes | Yes, and native Python binding |
| PBS Pro | C/Python | OS authentication, Munge | | | Any, e.g. NFS, Lustre, GPFS, AFS | Limited availability | Heterogeneous | Yes | Yes | Fully configurable | Yes | tested ~50,000 | Millions | Yes | MPI, OpenMP | Yes | Yes |
| OpenLava | C/C++ | OS authentication | None | | NFS | | Heterogeneous Linux | Yes | Yes | Configurable | Yes | | | Yes, supports preemption based on priority | Yes | Yes | No |
| SLURM | C | Munge, None, Kerberos | | | | | Heterogeneous | Yes | Yes | Multifactor fair-share | Yes | tested 120k | tested 100k | No | Yes | Yes | PySlurm |
| IBM Spectrum LSF | C/C++ | Multiple: OS authentication/Kerberos | Optional | Optional | Any: GPFS/Spectrum Scale, NFS, SMB | Any: GPFS/Spectrum Scale, NFS, SMB | Heterogeneous; hardware and OS agnostic (AIX, Linux or Windows) | Policy based; no queue to compute node binding | Policy based; no queue to compute group binding | Batch, interactive, checkpointing, parallel and combinations | Yes, and GPU aware (GPU license free) | > 9,000 compute hosts | > 4 million jobs a day | Yes, supports preemption based on priority, supports checkpointing/resume | Yes, e.g. parallel submissions for job collaboration over e.g. MPI | Yes, with support for user, kernel or library level checkpointing environments | Yes |
| Torque | C | SSH, Munge | None, any | | | | Heterogeneous | Yes | Yes | Programmable | Yes | tested | tested | Yes | Yes | Yes | Yes |
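
Several of the "Parallel job" cells above refer to MPI. From the application's side such a job is one program started as many cooperating ranks; the scheduler only allocates nodes and launches the processes (for example via srun or mpirun). The sketch below uses the mpi4py bindings and assumes mpi4py plus an MPI runtime are installed; it is an illustration, not tied to any particular scheduler in the table.

    from mpi4py import MPI

    # Every MPI rank executes this same script; the launcher (srun, mpirun, ...)
    # decides how many ranks there are and on which cluster nodes they run.
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Toy reduction: each rank contributes its rank number; rank 0 prints the sum.
    total = comm.reduce(rank, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"{size} ranks, sum of rank ids = {total}")

Launched with, say, mpirun -n 4 python reduce_demo.py (a hypothetical filename), the four ranks may be spread across several exec nodes, which is what the "Max exec node" column is counting.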

Table explanation

See also