OpenVZ
Developer: Virtuozzo and OpenVZ community
Programming language: C
Operating system: Linux
Platform: x86, x86-64
Language: English
Genre: OS-level virtualization
License: GPLv2
OpenVZ (Open Virtuozzo) is an operating-system-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs). OpenVZ is similar to Solaris Containers and LXC.
While virtualization technologies such as VMware, Xen and KVM provide full virtualization and can run multiple operating systems and different kernel versions, OpenVZ uses a single Linux kernel and can therefore run only Linux. All OpenVZ containers share the host's architecture and kernel version, which can be a disadvantage when a guest requires a kernel version different from the host's. However, because it does not carry the overhead of a true hypervisor, it is very fast and efficient.[1]
Memory allocation with OpenVZ is soft in that memory not used in one virtual environment can be used by others or for disk caching. While old versions of OpenVZ used a common file system (where each virtual environment is just a directory of files that is isolated using chroot), current versions of OpenVZ allow each container to have its own file system.[2]
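As a minimal sketch of the per-container file system, a container can be created with its own disk image, assuming vzctl 4.x with the ploop layout available; the container ID and OS template name here are placeholders:

    vzctl create 101 --layout ploop --diskspace 10G --ostemplate centos-6-x86_64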
The OpenVZ kernel is a Linux kernel, modified to add support for OpenVZ containers. The modified kernel provides virtualization, isolation, resource management, and checkpointing. As of vzctl 4.0, OpenVZ can work with unpatched Linux 3.x kernels, with a reduced feature set.[3]
Each container is a separate entity, and behaves largely as a physical server would. Each has its own:
Files: system libraries, applications, virtualized /proc and /sys, virtualized locks, etc.
Process tree: a container sees only its own processes, starting from init. PIDs are virtualized, so that the init PID is 1 as it should be.
Network: a virtual network device, which allows a container to have its own IP addresses, as well as its own set of netfilter (iptables) and routing rules.
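For illustration, a minimal sketch of interacting with such a container from the host, assuming the vzctl tool and a hypothetical container ID 101:

    vzctl start 101
    vzctl exec 101 ps axf    # the container sees only its own process tree; init has PID 1
    vzctl enter 101          # open a root shell inside the container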
OpenVZ resource management consists of four components: two-level disk quota, fair CPU scheduler, disk I/O scheduler, and user bean counters (see below). These resources can be changed during container run time, eliminating the need to reboot.
It is also possible to limit CPU usage in percent (--cpulimit), limit the number of CPU cores available to a container (--cpus), and bind a container to a specific set of CPUs (--cpumask).[4]
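A minimal sketch of such runtime changes with vzctl; the container ID 101 and all values are placeholders, and --save additionally persists each change to the container's configuration:

    vzctl set 101 --diskspace 10G:11G --save   # two-level disk quota (soft:hard)
    vzctl set 101 --cpulimit 25 --save         # cap CPU usage at 25 percent
    vzctl set 101 --cpus 2 --save              # expose at most 2 CPU cores
    vzctl set 101 --cpumask 0-3 --save         # bind the container to CPUs 0 through 3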
User beancounters are a set of per-container resource counters, limits, and guarantees. Each resource can be seen in /proc/user_beancounters and has five values associated with it: current usage, maximum usage (for the lifetime of a container), barrier, limit, and fail counter. The meaning of barrier and limit is parameter-dependent; in short, they can be thought of as a soft limit and a hard limit. If any resource hits the limit, its fail counter is increased. This allows the owner to detect problems by monitoring /proc/user_beancounters in the container.
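Because the fail counter is the last column of /proc/user_beancounters (the first two lines of the file are a version header and the column headings), a simple check from inside a container might look like this sketch:

    awk 'NR > 2 && $NF > 0' /proc/user_beancounters    # list resources that have hit their limit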
A live migration and checkpointing feature was released for OpenVZ in the middle of April 2006. This makes it possible to move a container from one physical server to another without shutting down the container. The process is known as checkpointing: a container is frozen and its whole state is saved to a file on disk. This file can then be transferred to another machine and a container can be unfrozen (restored) there; the delay is roughly a few seconds. Because state is usually preserved completely, this pause may appear to be an ordinary computational delay.
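A sketch of the corresponding commands, assuming the standard vzctl and vzmigrate tools; the container ID and destination host are placeholders:

    vzctl chkpnt 101 --dumpfile /tmp/ct101.dump     # freeze the container and dump its state
    vzctl restore 101 --dumpfile /tmp/ct101.dump    # thaw it from the dump (here or on another host)
    vzmigrate --online destination.example.com 101  # checkpoint, copy, and restore in one step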
By default, OpenVZ restricts container access to real physical devices (thus making a container hardware-independent). An OpenVZ administrator can enable container access to various real devices, such as disk drives, USB ports,[6] PCI devices[7] or physical network cards.[8]
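For example, device access is granted per container (a sketch; the container ID, device, and interface names are placeholders):

    vzctl set 101 --devnodes sdb1:rw --save    # grant read-write access to /dev/sdb1
    vzctl set 101 --netdev_add eth1 --save     # move the physical NIC eth1 into the container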
Container access to /dev/loopN is often restricted in deployments (as loop devices use kernel threads, which might be a security issue), which restricts the ability to mount disk images. A work-around is to use FUSE.
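One possible FUSE-based work-around, assuming the fuse2fs tool from e2fsprogs and access to /dev/fuse inside the container; the image and mount-point names are placeholders:

    fuse2fs disk.img /mnt/image    # mount an ext4 image without a loop device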
OpenVZ is limited to providing only some VPN technologies based on PPP (such as PPTP/L2TP) and TUN/TAP. IPsec is supported inside containers since kernel 2.6.32.
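Enabling TUN/TAP for a container follows the usual pattern of granting device access from the host (a sketch; container ID 101 is a placeholder):

    modprobe tun                                    # on the host: load the tun module
    vzctl set 101 --devices c:10:200:rw --save      # allow the tun character device (major 10, minor 200)
    vzctl set 101 --capability net_admin:on --save  # let the container configure its networking
    vzctl exec 101 mkdir -p /dev/net
    vzctl exec 101 mknod /dev/net/tun c 10 200      # create the device node inside the container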
A graphical user interface called EasyVZ was attempted in 2007,[9] but it did not progress beyond version 0.1. Up to version 3.4, Proxmox VE could be used as an OpenVZ-based server virtualization environment with a GUI, although later versions switched to LXC.