Hardware virtualization explained
Hardware virtualization is the virtualization of computers as complete hardware platforms, certain logical abstractions of their componentry, or only the functionality required to run various operating systems. Virtualization emulates the hardware environment of its host architecture, allowing multiple OSes to run unmodified and in isolation. Originally, the software that controlled virtualization was called a "control program", but the terms "hypervisor" and "virtual machine monitor" came to be preferred over time.[1]
Concept
The term "virtualization" was coined in the 1960s to refer to a virtual machine (sometimes called a "pseudo machine"), a term which itself dates from the experimental IBM M44/44X system. The creation and management of virtual machines has more recently also been called "platform virtualization" or "server virtualization".[2] [3]
Platform virtualization is performed on a given hardware platform by host software (a control program), which creates a simulated computer environment, a virtual machine (VM), for its guest software. The guest software is not limited to user applications; many hosts allow the execution of complete operating systems. The guest software executes as if it were running directly on the physical hardware, with several notable caveats. Access to physical system resources (such as network access, display, keyboard, and disk storage) is generally managed more restrictively than access to the processor and system memory. Guests are often restricted from accessing specific peripheral devices, or may be limited to a subset of a device's native capabilities, depending on the hardware access policy implemented by the virtualization host.[4]
Virtualization often exacts performance penalties, both in the resources required to run the hypervisor and in the reduced performance of the virtual machine compared with running natively on the physical machine.
Reasons for virtualization
- In the case of server consolidation, many small physical servers can be replaced by one larger physical server to decrease the need for more (costly) hardware resources such as CPUs and hard drives. Although hardware is consolidated in virtual environments, operating systems typically are not. Instead, each OS running on a physical server is converted to a distinct OS running inside a virtual machine, so the large server can "host" many such "guest" virtual machines. This is known as Physical-to-Virtual (P2V) transformation. The average utilization of a server in the early 2000s was 5 to 15%; with the adoption of virtualization, utilization rose as workloads were consolidated onto fewer servers.[5]
- In addition to reducing the equipment and labor costs associated with hardware maintenance, server consolidation can also reduce energy consumption and the associated environmental footprint. For example, a typical server runs at 425 W,[6] and VMware estimates a hardware reduction ratio of up to 15:1.[7]
- A virtual machine (VM) can be more easily controlled and inspected from a remote site than a physical machine, and the configuration of a VM is more flexible. This is very useful in kernel development and for teaching operating system courses, including running legacy operating systems that do not support modern hardware.[8]
- A new virtual machine can be provisioned as required without the need for an up-front hardware purchase.
- A virtual machine can easily be relocated from one physical machine to another as needed. For example, a salesperson going to a customer can copy a virtual machine with the demonstration software to their laptop, without the need to transport the physical computer. Likewise, an error inside a virtual machine does not harm the host system, so there is no risk of the OS crashing on the laptop.
- Because of this ease of relocation, virtual machines are well suited to disaster recovery scenarios, since workloads can be restarted on replacement hardware without concern for the impact of refurbished or faulty equipment.
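The consolidation arithmetic behind the first two points can be made concrete. A rough sketch: the ~425 W per server and the 5-15% utilization figures come from the text above, while the 70% post-consolidation target utilization is an assumption for illustration only.

```python
import math

# Rough server-consolidation estimate. The ~425 W per server and the
# 5-15% pre-consolidation utilization figures come from the text;
# the 70% target utilization is an illustrative assumption.

def servers_needed(n_physical, avg_util, target_util=0.70):
    """Hosts needed to carry the same aggregate load at target_util."""
    total_load = n_physical * avg_util        # in whole-server units
    return math.ceil(total_load / target_util)

before = 100                                  # lightly loaded servers
after = servers_needed(before, avg_util=0.10)
watts_saved = (before - after) * 425
print(f"{before} servers -> {after} hosts, saving ~{watts_saved} W")
```

At 10% average utilization, 100 physical servers carry the load of ten whole machines, so fifteen hosts at 70% utilization suffice, a ratio broadly consistent with the consolidation ratios quoted above.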
However, when multiple VMs run concurrently on the same physical host, each VM may exhibit varying and unstable performance that depends heavily on the workload imposed on the system by the other VMs. This issue can be addressed by techniques that enforce temporal isolation among the virtual machines.
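One common way to enforce temporal isolation is to give each VM a fixed budget of CPU time per accounting period, so a busy neighbour cannot consume another VM's share. A minimal sketch of such a quota scheduler follows; the class, the VM names, and the fixed-order selection policy are invented for illustration and do not describe any particular hypervisor's algorithm.

```python
# Toy quota scheduler: each VM gets a fixed budget of time slices per
# accounting period, so a busy neighbour cannot steal its share.
# Illustrative only; real hypervisors use far more elaborate schemes.

class QuotaScheduler:
    def __init__(self, quotas):
        self.quotas = dict(quotas)            # vm name -> slices per period
        self.used = {vm: 0 for vm in quotas}  # slices consumed this period

    def pick_next(self):
        """Pick the first VM (in fixed order) with budget remaining."""
        for vm, quota in self.quotas.items():
            if self.used[vm] < quota:
                self.used[vm] += 1
                return vm
        # Period exhausted: reset all budgets and start a new period.
        self.used = {vm: 0 for vm in self.quotas}
        return self.pick_next()

sched = QuotaScheduler({"vm_a": 2, "vm_b": 1})
slices = [sched.pick_next() for _ in range(6)]
print(slices)   # vm_a never receives more than 2 of every 3 slices
```

However hard vm_a tries to monopolize the CPU, it receives exactly two of every three slices, which is the stable share its quota guarantees.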
There are several approaches to platform virtualization.
Examples of virtualization use cases:
- Running one or more applications that are not supported by the host OS: A virtual machine running the required guest OS could permit the desired applications to run, without altering the host OS.
- Evaluating an alternate operating system: The new OS could be run within a VM, without altering the host OS.
- Server virtualization: Multiple virtual servers could be run on a single physical server, in order to more fully utilize the hardware resources of the physical server.
- Duplicating specific environments: A virtual machine could, depending on the virtualization software used, be duplicated and installed on multiple hosts, or restored to a previously backed-up system state.
- Creating a protected environment: If a guest OS running on a VM becomes damaged in a way that is not cost-effective to repair, such as may occur when studying malware or installing badly behaved software, the VM may simply be discarded without harm to the host system, and a clean copy used upon rebooting the guest.
Full virtualization
In full virtualization, the virtual machine simulates enough hardware to allow an unmodified "guest" OS designed for the same instruction set to be run in isolation. This approach was pioneered in 1966 with the IBM CP-40 and CP-67, predecessors of the VM family.
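The classic mechanism underlying full virtualization is trap-and-emulate: unprivileged guest instructions run directly at native speed, while privileged ones trap to the monitor, which emulates their effect on the virtual machine's state rather than the real hardware's. The toy model below illustrates the idea; the instruction names and VM state fields are invented for illustration.

```python
# Toy trap-and-emulate loop. Unprivileged instructions run "directly";
# privileged ones trap to the monitor, which updates the *virtual*
# machine state instead of the real hardware's. Purely illustrative.

PRIVILEGED = {"cli", "sti"}     # invented, minimal "instruction set"

def trap_to_monitor(insn, vm_state):
    """The monitor emulates the privileged instruction's effect."""
    if insn == "cli":
        vm_state["interrupts_enabled"] = False   # virtual flag only
    elif insn == "sti":
        vm_state["interrupts_enabled"] = True

def run(program, vm_state):
    for insn in program:
        if insn in PRIVILEGED:
            trap_to_monitor(insn, vm_state)      # trap: emulated by VMM
        else:
            # Stand-in for direct native execution of ordinary work.
            vm_state["acc"] = vm_state.get("acc", 0) + 1

state = {"interrupts_enabled": True}
run(["add", "cli", "add", "sti"], state)
print(state)
```

The guest believes it disabled and re-enabled interrupts, but only the virtual flag changed; the host machine's real interrupt state is never touched.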
Paravirtualization
In paravirtualization, the virtual machine does not necessarily simulate hardware, but instead (or in addition) offers a special API that can only be used by modifying the "guest" OS. For this to be possible, the "guest" OS's source code must be available. If it is, it suffices to replace sensitive instructions with calls to VMM APIs (e.g., "cli" with "vm_handle_cli"), then recompile the OS and use the new binaries. Such a system call to the hypervisor is called a "hypercall" in TRANGO and Xen; it is implemented via a DIAG ("diagnose") hardware instruction in IBM's CMS under VM (which was the origin of the term hypervisor).
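The instruction-replacement step described above can be sketched as a mechanical source rewrite: every occurrence of a sensitive instruction becomes a call into the hypervisor's API, after which the guest is recompiled. The toy below models this as a line-by-line substitution; the hypercall names follow the article's "cli" → "vm_handle_cli" example, and everything else (the instruction list, the assembly-like syntax) is invented.

```python
# Toy paravirtualization rewrite: sensitive instructions in the guest
# source are replaced by hypercalls before recompilation, following
# the "cli" -> "vm_handle_cli" convention from the text. Illustrative.

SENSITIVE = {
    "cli": "vm_handle_cli()",   # disable interrupts -> hypercall
    "sti": "vm_handle_sti()",   # enable interrupts  -> hypercall
    "hlt": "vm_handle_hlt()",   # halt CPU           -> hypercall
}

def paravirtualize(guest_source_lines):
    """Swap each sensitive instruction for its hypercall equivalent."""
    out = []
    for line in guest_source_lines:
        insn = line.strip()
        out.append(SENSITIVE.get(insn, line))  # unchanged if ordinary
    return out

guest = ["mov r0, r1", "cli", "mov r1, r2", "sti"]
print(paravirtualize(guest))
```

Ordinary instructions pass through untouched, which is why paravirtualized guests can run most code at native speed while still yielding control to the hypervisor at every sensitive operation.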
Hardware-assisted virtualization
In hardware-assisted virtualization, the hardware provides architectural support that facilitates building a virtual machine monitor and allows guest OSs to be run in isolation.[9] This can be used to assist either full virtualization or paravirtualization. Hardware-assisted virtualization was first introduced on the IBM 308X processors in 1980, with the Start Interpretive Execution (SIE) instruction.[10]
In 2005 and 2006, Intel and AMD developed additional hardware support for virtualization on their platforms. Sun Microsystems (now Oracle Corporation) added similar features in its UltraSPARC T-Series processors in 2005.
In 2006, first-generation 32- and 64-bit x86 hardware support was found to rarely offer performance advantages over software virtualization.[11]
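On Linux, the presence of these x86 extensions is visible in the CPU feature flags: vmx indicates Intel VT-x and svm indicates AMD-V. The sketch below parses a cpuinfo-style flags line; the sample string is illustrative, and on a real system the data would come from /proc/cpuinfo.

```python
# Detect hardware virtualization support from a cpuinfo-style flags
# line: "vmx" = Intel VT-x, "svm" = AMD-V. The sample text below is
# illustrative; on Linux the real data lives in /proc/cpuinfo.

def virtualization_support(cpuinfo_text):
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None   # no flags line, or no virtualization extensions

sample = "flags\t\t: fpu vme de pse tsc msr pae vmx sse2"
print(virtualization_support(sample))   # Intel VT-x
```

Note that the flag only reports that the CPU implements the extension; firmware can still disable it, so hypervisors typically probe further before relying on it.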
Operating-system-level virtualization
See main article: OS-level virtualization.
In operating-system-level virtualization, a physical server is virtualized at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server. The "guest" operating system environments share the same running instance of the operating system as the host system. Thus, the same operating system kernel is also used to implement the "guest" environments, and applications running in a given "guest" environment view it as a stand-alone system.
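The defining property here, one shared kernel serving many isolated user-space views, can be modelled in a few lines. The Kernel and Container classes below are made-up illustrations, not any real container runtime; actual systems achieve this isolation with kernel namespaces and resource controls.

```python
# Toy model of OS-level virtualization: every "container" shares one
# running kernel but gets its own isolated filesystem and process
# table. Purely illustrative; real systems use kernel namespaces.

class Kernel:
    version = "5.15-toy"        # one kernel instance for everyone

class Container:
    def __init__(self, kernel):
        self.kernel = kernel    # shared: the host's running kernel
        self.filesystem = {}    # isolated per container
        self.processes = []     # isolated per container

host_kernel = Kernel()
a = Container(host_kernel)
b = Container(host_kernel)

a.filesystem["/etc/hostname"] = "web01"
b.filesystem["/etc/hostname"] = "db01"

print(a.kernel is b.kernel)         # True: same kernel instance
print(a.filesystem == b.filesystem) # False: isolated user space
```

This shared-kernel design is why OS-level virtualization is lighter than full virtualization, and also why a guest cannot run an operating system with a different kernel than the host's.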
Hardware virtualization disaster recovery
A disaster recovery (DR) plan is often considered good practice for a hardware virtualization platform. DR of a virtualization environment can ensure a high rate of availability during a wide range of situations that disrupt normal business operations. In situations where the continued operation of hardware virtualization platforms is important, a disaster recovery plan can ensure that hardware performance and maintenance requirements are met. A hardware virtualization disaster recovery plan involves both hardware and software protection by various methods, including those described below.[12] [13]
- Tape backup for software data long-term archival needs
This common method can be used to store data offsite, but data recovery can be a difficult and lengthy process. Tape backup data is only as current as the latest copy stored, and the method requires a backup device and ongoing storage media.
- Whole-file and application replication
Implementing this method requires control software and storage capacity for replicated application and data files, typically on the same site. The data is replicated to a different disk partition or a separate disk device; replication can be scheduled on most servers and is used chiefly for database-type applications.
- Hardware and software redundancy
This method ensures the highest level of disaster recovery protection for a hardware virtualization solution, by providing duplicate hardware and software replication in two distinct geographic areas.[14]
Notes and References
- Creasy, R. J. (1981). "The Origin of the VM/370 Time-sharing System". Retrieved 26 February 2013.
- Thomson, Julian (23 May 2018). "Virtual Machines: An Introduction to Platform Virtualization". Performance Software. Retrieved 8 July 2023.
- "What is Server Virtualization?".
- Bugnion, Edouard; Nieh, Jason; Tsafrir, Dan (2017). Hardware and Software Support for Virtualization. San Rafael, CA: Morgan & Claypool Publishers. ISBN 9781627056939.
- "Chip Aging Accelerates". 14 February 2018.
- Chheda, Rajesh; Shookowsky, Dan; Stefanovich, Steve; Toscano, Joe (14 January 2009). "Profiling Energy Usage for Efficient Consumption".
- "VMware server consolidation overview". Archived from the original on 8 January 2022.
- Nieh, Jason; Leonard, Ozgur Can (August 2000). "Examining VMware". Archived from the original on 22 November 2019.
- Smith, L.; Kägi, A.; Martins, F. M.; Neiger, G.; Leung, F. H.; Rodgers, D.; Santoni, A. L.; Bennett, S. M.; Uhlig, R.; Anderson, A. V. (May 2005). "Intel virtualization technology". Computer. 38 (5): 48–56. doi:10.1109/MC.2005.163.
- IBM (January 1984). IBM System/370 Extended Architecture Interpretive Execution (First ed.). SA22-7095-0. Retrieved 27 October 2022.
- Adams, Keith; Agesen, Ole (21–25 October 2006). "A Comparison of Software and Hardware Techniques for x86 Virtualization". ASPLOS'06, San Jose, California, USA. Archived from the original on 8 January 2022. "Surprisingly, we find that the first-generation hardware support rarely offers performance advantages over existing software techniques. We ascribe this situation to high VMM/guest transition costs and a rigid programming model that leaves little room for software flexibility in managing either the frequency or cost of these transitions."
- Vision Solutions, Inc. (2010). "The One Essential Guide to Disaster Recovery: How to Ensure IT and Business Continuity". Archived from the original on 16 May 2011.
- Wold, G. (2008). "Disaster Recovery Planning Process". Archived from the original on 15 August 2012.
- VMware (2010). "Disaster Recovery Virtualization Protecting Production Systems Using VMware Virtual Infrastructure and Double-Take". Archived from the original on 23 September 2010.