Timeline of virtualization development
In computing, virtualization is the use of one computer to simulate another. Through virtualization, a host simulates a guest by exposing virtual hardware devices, which may be implemented in software or by granting the guest access to a physical device attached to the machine.
Timeline
Note: This timeline is missing data for important historical systems, including: Atlas Computer (Manchester), GE 645, Burroughs B5000.
1960s
See main article: IBM CP-40, CP/CMS, History of CP/CMS and IBM System/360 Model 67. In the mid-1960s, IBM's Cambridge Scientific Center develops CP-40, the first version of CP/CMS. Experience on the CP-40 project provides input to the development of the IBM System/360 Model 67, announced in 1965. CP-40 is re-implemented for the S/360-67 as CP-67, and by April 1967, both versions are in daily production use.
- 1964
- IBM Cambridge Scientific Center begins development of CP-40.
- 1965
- IBM announces the System/360 Model 67, whose address-translation hardware supports virtual memory.
- 1966
- IBM ships the S/360-67 computer in June 1966.
- IBM begins work on CP-67, a re-implementation of CP-40 for the S/360-67.
- 1967
- In January, CP-40 goes into production time-sharing use, followed by CP-67 in April.
- 1968
- CP/CMS is installed at eight initial customer sites.
- CP/CMS is submitted to IBM Type-III Library by MIT's Lincoln Laboratory.
- Resale of CP/CMS access begins at time-sharing vendor National CSS (becoming a distinct version, eventually renamed VP/CSS).
1970s
See main article: System/370, VM (operating system), History of CP/CMS and hypervisor. IBM announces the System/370 in 1970. In 1972, IBM announces that virtual memory would be made available on all S/370 models, and also announces several virtual storage operating systems, including VM/370. By the mid-1970s, CP/CMS, VM, and VP/CSS are running on numerous large IBM mainframes.
- 1971
- The first System/370, the S/370-155, is shipped in January.
- 1972
- Announcement of virtual memory being added to System/370 series.
- VM/370 announced – and running on announcement date. VM/370 includes the ability to run VM under VM (previously implemented both at IBM and at user sites under CP/CMS, but not made part of standard releases).
- 1973
- First shipment of announced virtual memory S/370 models (April: -158, May: -168).
- 1977
- Initial commercial release of VAX/VMS, later renamed OpenVMS.
- 1979
- The chroot system call is introduced during development of Version 7 Unix, laying a foundation for container virtualization.[1] [2]
1980s
- 1985
- Intel releases the 80386 processor, whose Virtual 8086 mode allows multiple virtual 8086 machines to run concurrently.
- 1987
- January 1987: A "product evaluation" version of Merge/386 from Locus Computing Corporation is made available to OEMs. Merge/386 made use of the Virtual 8086 mode provided by the Intel 80386 processor, and supported multiple simultaneous virtual 8086 machines. The virtual machines supported unmodified guest operating systems and standalone programs such as Microsoft Flight Simulator; but in typical usage the guest was MS-DOS with a Locus proprietary redirector (also marketed for networked PCs as "PC-Interface") and a "network" driver that provided communication with a regular user-mode file server process running under the host operating system on the same machine.
- October 1987: Retail Version 1.0 of Merge/386 begins shipping, offered with Microport Unix System V Release 3.
- 1988
1990s
- 1991
- 1994
- Kevin Lawton leaves MIT Lincoln Laboratory and starts the Bochs project. Bochs is a portable, open-source emulator that implements an x86 PC entirely in software, including the processor, BIOS, and common peripheral hardware, allowing unmodified guest operating systems to run isolated from the rest of the environment. Because the emulation is pure software, Bochs runs on many host operating systems (BSD, Linux, Windows, Mac, Solaris) and on non-x86 host architectures such as Itanium, x86_64, ARM, MIPS and PowerPC.[3]
- 1997
- The first version of Virtual PC for the Macintosh platform is released in June 1997 by Connectix.
- 1998
- VMware is founded.
- 1999
See main article: x86 virtualization.
- February 8: VMware introduces the first x86 virtualization product for the Intel IA-32 architecture, known as VMware Virtual Platform, based on earlier research by its founders at Stanford University. VMware Virtual Platform is based on software emulation with a guest/host OS design, which requires all guest environments to be stored as files under the host OS filesystem.
2000s
- 2000
- 2001
- 2003
- Fabrice Bellard releases the first version of QEMU, an open-source machine emulator.[5]
- 2005
- 2006
- July 12, 2006: VMware releases VMware Server, a free machine-level virtualization product for the server market.
- Microsoft releases Virtual PC 2004 as a free program, also in July.
- July 17, 2006: Microsoft acquires Softricity.
- August 16, 2006: VMware announces the winners of its virtualization appliance contest.
- September 26, 2006: moka5 delivers LivePC technology.
- HP releases Integrity Virtual Machines Version 2.0, which supports Windows Server 2003, CD and DVD burners, tape drives and VLANs.
- December 11, 2006: Virtual Iron releases Virtual Iron 3.1, a free bare-metal virtualization product for the enterprise server virtualization market.
- 2007
- KVM (Kernel-based Virtual Machine), a virtualization module, is merged into the Linux kernel mainline (version 2.6.20).
- January 15, 2007: InnoTek releases VirtualBox Open Source Edition (OSE), the first professional PC virtualization solution released as open source under the GNU General Public License (GPL). It includes some code from the QEMU project.
- Sun releases Solaris 8 Containers to enable migration of a Solaris 8 computer into a Solaris Container on a Solaris 10 system – for SPARC only.
- 2008
- The first Linux kernel mainline release featuring cgroups (developed by Google since 2006) ships, laying a foundation for later technologies such as LXC, Docker, systemd-nspawn and Podman.
- January 15, 2008: VMware, Inc. announces a definitive agreement to acquire Thinstall, a privately held application virtualization software company.
- February 12, 2008: Sun Microsystems announces a stock purchase agreement to acquire InnoTek, makers of VirtualBox.
- April: VMware releases the VMware Workstation 6.5 beta, the first program for Windows and Linux to enable DirectX 9 accelerated graphics on Windows XP guests (release notes: http://www.vmware.com/products/beta/ws/releasenotes_ws65_beta.html).
- August 6: LXC, an OS-level virtualization method for Linux, is released.
2010s
- 2011
- QEMU 1.0, the first stable version of the emulator, is released in December.[6]
- 2013
- Docker, an OS-level virtualization platform for Linux containers, is first released in March.
- 2014
- The first public build of Kubernetes is released on September 8, 2014.[7] Kubernetes was created to complement Docker, the most popular containerization platform at the time, rather than to replace it: Docker packages and runs individual containers, while Kubernetes makes it simple to deploy and manage containerized applications at scale across a sizable cluster of container hosts.[8] [9]
Overview of Virtualization
As an overview, there are three levels of virtualization:
- At the hardware level, VMs can run multiple guest OSes. This is best used for testing and training that require networking interoperability between several OSes: not only can the guest OSes differ from the host OS, there can be as many guest OSes as VMs, as long as there is enough CPU, RAM and storage space. IBM introduced this around 1990 under the name logical partitioning (LPAR), at first only in the mainframe field.
- At the operating system level, only one OS can be virtualized: the guest OS is the host OS. This is similar to having many terminal server sessions without locking down the desktop. It offers the best of both worlds: the speed of a terminal server session with the benefit of full access to the desktop as a virtual machine, where the user can still control quotas for CPU, RAM and disk. As at the hardware level, this is still considered server virtualization, since each guest OS has its own IP address and can therefore be used for networking applications such as web hosting.
- At the application level, the application runs directly on the host OS, without any guest OS, possibly in a locked-down desktop or even in a terminal server session. This is called application virtualization or desktop virtualization: it virtualizes the front end, whereas server virtualization virtualizes the back end. Application streaming refers to delivering applications directly onto the desktop and running them locally; traditionally in terminal server computing, applications run on the server, not locally, and only screen updates are streamed to the desktop.
Application virtualization
Application virtualization solutions such as VMware ThinApp, Softricity, and Trigence attempt to separate application-specific files and settings from the host operating system, allowing applications to run in more-or-less isolated sandboxes without installation and without the memory and disk overhead of full machine virtualization. Application virtualization is tightly tied to the host OS and thus does not translate to other operating systems or hardware. VMware ThinApp and Softricity are Windows-centric, while Trigence supports Linux and Solaris. Unlike machine virtualization, application virtualization does not use code emulation or translation, so CPU-bound benchmarks run unchanged, though filesystem benchmarks may experience some performance degradation. On Windows, VMware ThinApp and Softricity essentially work by intercepting an application's filesystem and registry requests and redirecting them to a preinstalled isolated sandbox, allowing the application to run without installation or changes to the local PC. Though both products began independent development around 1998, behind the scenes they are implemented using different techniques:
- VMware ThinApp works by packaging an application into a single "packaged" EXE that includes the runtime plus the application's data files and registry. The runtime is loaded by Windows as a normal application; from there it replaces the Windows loader, filesystem, and registry for the target application, presenting a merged image of the host PC as if the application had previously been installed. VMware ThinApp intercepts all related API functions for the host application; for example, a ReadFile call issued by the application must pass through VMware ThinApp before it reaches the operating system. If the application is reading a virtual file, VMware ThinApp handles the request itself; otherwise the request is passed on to the operating system. Because VMware ThinApp is implemented in user mode without device drivers, and does not require a preinstalled client, applications can run directly from USB flash drives or network shares without needing elevated security privileges.
- Softricity (acquired by Microsoft) operates on a similar principle but uses device drivers to intercept file requests in ring 0, at a level closer to the operating system. Softricity installs a client in administrator mode which can then be accessed by restricted users on the machine. An advantage of virtualizing at the kernel level is that the Windows loader (responsible for loading EXE and DLL files) does not need to be re-implemented, so greater application compatibility can be achieved with less work (Softricity claims to support most major applications). A disadvantage of a ring-0 implementation is that it requires elevated security privileges to install, and crashes or security defects can affect the whole system rather than being isolated to a specific application.
Because Application Virtualization runs all application code natively, it can only provide security guarantees as strong as the host OS is able to provide. Unlike full machine virtualization, Application virtualization solutions currently do not work with device drivers and other code that runs at ring0 such as virus scanners. These special applications must be installed normally on the host PC to function.
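The intercept-and-redirect idea described above can be sketched in a few lines of Python. This is a toy illustration only, not how ThinApp or Softricity are actually built (they hook native Win32 APIs such as ReadFile rather than a scripting runtime), and the sandbox path is made up:

```python
import os
import builtins

SANDBOX = "/tmp/appsandbox"    # hypothetical per-application sandbox root
_real_open = builtins.open     # keep a reference to the unhooked call

def sandboxed_open(path, mode="r", *args, **kwargs):
    """Present a merged view of the filesystem: writes go into the
    sandbox; reads prefer the sandbox copy and fall back to the host."""
    shadow = os.path.join(SANDBOX, os.path.abspath(path).lstrip(os.sep))
    if any(c in mode for c in "wa+"):
        os.makedirs(os.path.dirname(shadow), exist_ok=True)
        return _real_open(shadow, mode, *args, **kwargs)
    if os.path.exists(shadow):     # file was previously "installed" virtually
        return _real_open(shadow, mode, *args, **kwargs)
    return _real_open(path, mode, *args, **kwargs)  # pass through to the host

builtins.open = sandboxed_open   # hook the API, as ThinApp hooks Win32 calls

# The application "writes" a config file without touching the real location,
# then reads back a merged view as if the file had been installed normally:
with open("/etc/myapp.conf", "w") as f:
    f.write("sandboxed")
with open("/etc/myapp.conf") as f:
    print(f.read())              # prints "sandboxed"; the real /etc is untouched
```

The same copy-on-write principle applies to registry keys in the Windows products; requests that miss the sandbox fall through to the host, which is what produces the "merged image" described above.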
Managed runtimes
Another technique sometimes referred to as virtualization is portable byte-code execution using a standard portable runtime (a.k.a. managed runtimes). The two most popular solutions today are Java and .NET. Both use just-in-time (JIT) compilation to translate code from a portable virtual machine language into the local processor's native code, allowing applications to be compiled for a single architecture and then run on many different machines. Beyond portability, an additional advantage of this technique is strong security guarantees: because all native application code is generated by the controlling environment, it can be checked for correctness (possible security exploits) prior to execution. Programs must be originally designed for the environment in question, or manually rewritten and recompiled, to work in these environments; for example, one cannot automatically convert or run a native Windows or Linux application on .NET or Java. Because portable runtimes try to present a common API to applications across a wide variety of hardware, applications are less able to take advantage of OS-specific features. Portable application environments also have higher memory and CPU overheads than optimized native applications, but these overheads are much smaller than those of full machine virtualization. Portable byte-code environments such as Java have become very popular on the server, where a wide variety of hardware exists and the set of OS-specific APIs required is standard across most Unix and Windows flavors. Another popular feature of managed runtimes is garbage collection, which automatically detects unused data in memory and reclaims it without the developer having to explicitly invoke "free" operations.
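The "compile once, run anywhere" idea rests on targeting a portable virtual machine rather than a physical CPU. A toy illustration follows: the three-instruction stack machine below is invented for this sketch (real Java or .NET byte code is far richer, and production runtimes JIT-compile hot code to native instructions rather than interpreting it):

```python
# A made-up portable instruction set: the same byte sequence runs on any
# host that implements this tiny virtual machine.
PUSH, ADD, MUL = 0, 1, 2

def run(bytecode):
    """Interpret portable bytecode on a stack machine, one opcode at a time."""
    stack, i = [], 0
    while i < len(bytecode):
        op = bytecode[i]
        if op == PUSH:                       # push the following literal byte
            stack.append(bytecode[i + 1]); i += 2
        elif op == ADD:                      # pop two operands, push their sum
            b, a = stack.pop(), stack.pop(); stack.append(a + b); i += 1
        elif op == MUL:                      # pop two operands, push their product
            b, a = stack.pop(), stack.pop(); stack.append(a * b); i += 1
    return stack.pop()

# (2 + 3) * 4, expressed once as portable bytes, executable on any host
# architecture that carries the interpreter above
program = bytes([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL])
print(run(program))   # prints 20
```

The security point in the paragraph above falls out of this design: the interpreter (or JIT) is the only component that emits native instructions, so it can validate every opcode before anything executes on the real CPU.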
Neutral view of application virtualization
Setting aside past industry bias, there are two more neutral ways to look at the application level:
- The first type is application packagers (VMware ThinApp, Softricity), whereas the other is application compilers (Java and .NET). Because the former is a packager, it can be used to stream applications without modifying their source code, whereas the latter can only be used by compiling the source code for its runtime.
- Another way to look at it is from the hypervisor point of view: the first is a "hypervisor" in user mode, the other a "hypervisor" in runtime mode. Hypervisor is in quotation marks because both behave similarly in that they intercept system calls, each in its own mode. The user-mode type intercepts system calls from the runtime before they reach kernel mode, whereas a real hypervisor only needs to intercept them via hypercalls in kernel mode. Once Windows ships with a hypervisor (a virtual machine monitor), there may be less need for a separate JRE and CLR; likewise, on Linux the JRE could conceivably be modified to run on top of a hypervisor as a loadable kernel module in kernel mode, instead of as a slower legacy runtime in user mode. Run directly on a Linux hypervisor in this way, it would deserve the name Java OS rather than just another runtime-mode JIT.
- Mendel Rosenblum[10] called the runtime mode a high-level language virtual machine in August 2004. At that time the first type, intercepting system calls in user mode, was not yet widely recognized, so his article did not mention it; application streaming was still obscure in 2004.[11] If the JVM ever becomes a Java OS running on a Linux hypervisor rather than a high-level language virtual machine, Java applications will gain the same kind of level playing field that Windows applications already have with Softricity.
- In summary, the first virtualizes binary code so that an application can be installed once and run anywhere, whereas the other virtualizes source code, using byte code or managed code, so that an application can be written once and run anywhere. Both are partial solutions to the twin portability problems of application portability and source-code portability. Perhaps it is time to combine the two into one complete solution at the hypervisor level, in kernel mode.
References
- Web site: Mell . Emily . April 2, 2020 . The evolution of containers: Docker, Kubernetes and the future . January 7, 2023 . TechTarget.
- Web site: Dillenburg . Stefan . May 3, 2020 . A brief history of container virtualization . January 7, 2023 . Medium (website).
- Web site: Hess . Ken . August 25, 2011 . Thinking inside and outside the Bochs with Kevin Lawton . December 3, 2015 . zdnet.
- Web site: Hochstätter . Christoph H. . March 14, 2007 . Virtuozzo Company History Timeline . January 7, 2023 . zdnet.
- Web site: Standard project directories initialized by cvs2svn. (e63c3dc7) · Commits · QEMU / QEMU · GitLab . February 18, 2003 . July 23, 2024.
- Web site: QEMU 1.0 released [LWN.net] . December 2, 2011 . LWN.net . July 23, 2024.
- Web site: Release Kubernetes v0.2 . GitHub.
- Web site: Red Hat and Google collaborate on Kubernetes to manage Docker containers at scale . Red Hat.
- Web site: Buhr . Martin . Everything you wanted to know about Kubernetes but were afraid to ask . 22 December 2022 . Google.
- http://acmqueue.com/modules.php?name=Content&pa=showpage&pid=168 The Reincarnation of Virtual Machines
- http://www.zdnetasia.com/insight/software/0,39044822,39175522,00.htm Application streaming anyone?