Developer: | Proxmox Server Solutions GmbH |
Family: | Linux (Unix-like) |
Working state: | Current |
Source model: | Free and open source software |
Latest preview version: | 8.0 beta1[1] |
Available in: | 25 languages[2] |
Update method: | APT |
Package manager: | dpkg |
Supported platforms: | AMD64 |
Kernel type: | Monolithic (Linux) |
Userland: | GNU |
UI: | Web-based |
License: | GNU Affero General Public License[3] |
Programmed in: | Perl,[4] Rust[5] |
Proxmox Virtual Environment (Proxmox VE or PVE) is a virtualization platform designed for the provisioning of hyper-converged infrastructure.
Proxmox allows deployment and management of virtual machines and containers.[6][7] It is based on a modified Ubuntu LTS kernel.[8] Two types of virtualization are supported: container-based virtualization with LXC (which replaced OpenVZ as of version 4.0; OpenVZ was used up to and including version 3.4[9]), and full virtualization with KVM.[10]
It includes a web-based management interface.[11][12] There is also a mobile application available for controlling PVE environments.[13]
Proxmox is released under the terms of the GNU Affero General Public License, version 3.
Development of Proxmox VE started when Dietmar Maurer and Martin Maurer, two Linux developers, found that OpenVZ had no backup tool and no management GUI. KVM was appearing in Linux at the same time, and support for it was added shortly afterwards.[14]
The first public release took place in April 2008. It supported container-based and full virtualization, managed with a web-based user interface similar to other commercial offerings.[15]
Proxmox VE is an open-source server virtualization platform that manages two virtualization technologies, Kernel-based Virtual Machine (KVM) for virtual machines and LXC for containers, from a single web-based interface.[10] It also integrates out-of-the-box tools for configuring high availability between servers, software-defined storage, networking, and disaster recovery.[16]
Proxmox VE supports live migration of guest machines between nodes within a single cluster, which allows migration without interrupting their services.[17] Since PVE 7.3, an experimental feature also allows migration between unrelated nodes in different clusters.[18]
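For illustration, such a migration can also be requested through the PVE REST API. The following minimal sketch uses the third-party proxmoxer Python client; the hostname, credentials, node names, and VM ID are placeholder assumptions.

```python
from proxmoxer import ProxmoxAPI  # third-party client library for the PVE REST API

# Placeholder connection details; replace with real values.
pve = ProxmoxAPI("pve1.example.com", user="root@pam",
                 password="secret", verify_ssl=False)

# Ask the source node to live-migrate VM 100 to node "pve2".
# online=1 requests live migration (the guest keeps running);
# this maps to POST /nodes/{node}/qemu/{vmid}/migrate.
task_id = pve.nodes("pve1").qemu("100").migrate.post(target="pve2", online=1)
print("migration task:", task_id)
```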
To authenticate users to the web GUI, Proxmox can use its own internal authentication database, PAM, OIDC, LDAP or Active Directory.[19] Multi-factor authentication is also available using TOTP, WebAuthn, or YubiKey OTP.[20]
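The underlying REST API uses the same authentication realms. The following minimal Python sketch shows the documented ticket-based login flow; the host and credentials are placeholders.

```python
import requests

# Placeholder host and credentials; replace with real values.
HOST = "https://pve.example.com:8006"

# Request an authentication ticket from the documented /access/ticket
# endpoint (the username is suffixed with the realm, e.g. @pam for PAM
# or @pve for the internal authentication database).
resp = requests.post(
    f"{HOST}/api2/json/access/ticket",
    data={"username": "root@pam", "password": "secret"},
)
resp.raise_for_status()
auth = resp.json()["data"]

# Subsequent requests pass the ticket as a cookie; write operations
# additionally require the CSRF prevention token header.
session = requests.Session()
session.cookies.set("PVEAuthCookie", auth["ticket"])
session.headers["CSRFPreventionToken"] = auth["CSRFPreventionToken"]

# Example read-only call: list the cluster's nodes.
nodes = session.get(f"{HOST}/api2/json/nodes").json()["data"]
print([n["node"] for n in nodes])
```

Alternatively, API tokens (available since PVE 6.2) can be passed in an Authorization header of the form PVEAPIToken=<user>@<realm>!<tokenid>=<secret>, avoiding the cookie-based ticket flow.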
PVE 8.1 added a fully integrated Software-Defined Networking (SDN) stack and compatibility with Secure Boot.[21]
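As a rough sketch of how the SDN stack can be driven programmatically, the example below creates a simple zone and a virtual network through the /cluster/sdn API paths, again via the third-party proxmoxer client; all names are illustrative, and the exact parameters should be verified against the PVE 8 API viewer.

```python
from proxmoxer import ProxmoxAPI  # third-party client library

# Placeholder connection details; replace with real values.
pve = ProxmoxAPI("pve1.example.com", user="root@pam",
                 password="secret", verify_ssl=False)

# Create a "simple" SDN zone, add a virtual network to it, then apply
# the pending SDN configuration cluster-wide. These calls map to
# POST /cluster/sdn/zones, POST /cluster/sdn/vnets, and PUT /cluster/sdn.
pve.cluster.sdn.zones.post(zone="zone1", type="simple")
pve.cluster.sdn.vnets.post(vnet="vnet1", zone="zone1")
pve.cluster.sdn.put()
```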
Guest machines can be backed up using the included standalone vzdump tool.[22] PVE can also be integrated with a separate Proxmox Backup Server (PBS) machine through the web GUI[23] or through the text-based Proxmox Backup Client application.[24]
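For example, a vzdump backup job can be started through the API endpoint POST /nodes/{node}/vzdump; in the sketch below, the node, guest ID, and storage name are placeholder assumptions.

```python
from proxmoxer import ProxmoxAPI  # third-party client library

# Placeholder connection details; replace with real values.
pve = ProxmoxAPI("pve1.example.com", user="root@pam",
                 password="secret", verify_ssl=False)

# Trigger a snapshot-mode vzdump backup of guest 100 to the storage
# named "local"; this maps to POST /nodes/{node}/vzdump and returns
# a task identifier (UPID) that can be polled for progress.
upid = pve.nodes("pve1").vzdump.post(vmid="100", storage="local",
                                     mode="snapshot", compress="zstd")
print("backup task:", upid)
```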
Since PVE 8, a semi-graphical (TUI) installer is integrated into the ISO image alongside the standard GUI installer.[19] Since PVE 8.2, automated scripted installation is also possible.[25]
Proxmox VE (PVE) can be clustered across multiple server nodes.[26]
Since version 2.0, Proxmox VE has offered a high-availability option for clusters based on the Corosync communication stack. Since PVE 6.0, Corosync 3.x is used, which is not compatible with earlier PVE versions. Individual virtual servers can be configured for high availability using the built-in ha-manager.[27][28] If a Proxmox node becomes unavailable or fails, the virtual servers can be automatically moved to another node and restarted.[29] The database- and FUSE-based Proxmox Cluster file system (pmxcfs[30]) stores the configuration of each cluster node and replicates it via the Corosync communication stack, using an SQLite database as the backend.
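For illustration, a guest can be placed under ha-manager control through the cluster API; the sketch below registers a hypothetical VM 100 as an HA resource via POST /cluster/ha/resources, with connection details as placeholders.

```python
from proxmoxer import ProxmoxAPI  # third-party client library

# Placeholder connection details; replace with real values.
pve = ProxmoxAPI("pve1.example.com", user="root@pam",
                 password="secret", verify_ssl=False)

# Register VM 100 as an HA-managed resource so ha-manager will
# restart or relocate it if its node fails; this maps to
# POST /cluster/ha/resources.
pve.cluster.ha.resources.post(sid="vm:100", state="started",
                              max_restart=1, max_relocate=1)
```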
Another HA-related element of PVE is the Ceph distributed storage system, which can be used as shared storage for guest machines.[31]
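As a sketch, an existing Ceph RBD pool can be registered as shared guest storage through the /storage endpoint; the storage ID and pool name below are placeholders, and a PVE-managed (hyper-converged) Ceph deployment is assumed.

```python
from proxmoxer import ProxmoxAPI  # third-party client library

# Placeholder connection details; replace with real values.
pve = ProxmoxAPI("pve1.example.com", user="root@pam",
                 password="secret", verify_ssl=False)

# Register the Ceph RBD pool "vm-pool" as cluster-wide shared storage
# for VM disk images and container root file systems; this maps to
# POST /storage. For an external (non-PVE-managed) Ceph cluster, the
# monitor hosts would also need to be supplied.
pve.storage.post(storage="ceph-rbd", type="rbd", pool="vm-pool",
                 content="images,rootdir")
```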
Proxmox VE has pre-packaged server software appliances which can be downloaded via the GUI.[32]