In virtualization, input/output virtualization (I/O virtualization) is a methodology to simplify management, lower costs and improve performance of servers in enterprise environments. I/O virtualization environments are created by abstracting the upper layer protocols from the physical connections.[1]
The technology enables one physical adapter card to appear as multiple virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs).[2] Virtual NICs and HBAs function as conventional NICs and HBAs, and are designed to be compatible with existing operating systems, hypervisors, and applications. To networking resources (LANs and SANs), they appear as normal cards.
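The one-adapter-to-many-interfaces abstraction can be illustrated with a minimal sketch. The class and attribute names below are hypothetical, chosen only for illustration: each virtual NIC has its own identity (here, a MAC address), while all traffic ultimately leaves through the single shared physical port.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualNIC:
    """A virtual interface with its own identity, backed by shared hardware."""
    name: str
    mac: str
    sent_frames: list = field(default_factory=list)

@dataclass
class PhysicalAdapter:
    """One physical card presenting itself as multiple virtual NICs."""
    port: str
    vnics: dict = field(default_factory=dict)

    def create_vnic(self, name: str, mac: str) -> VirtualNIC:
        vnic = VirtualNIC(name, mac)
        self.vnics[name] = vnic
        return vnic

    def transmit(self, vnic_name: str, frame: bytes) -> str:
        # Every vNIC's traffic shares the one physical port.
        vnic = self.vnics[vnic_name]
        vnic.sent_frames.append(frame)
        return f"sent via {vnic.mac} on {self.port}"

adapter = PhysicalAdapter(port="pci:0000:03:00.0")  # hypothetical address
vnic_a = adapter.create_vnic("vnic0", "02:00:00:00:00:01")
vnic_b = adapter.create_vnic("vnic1", "02:00:00:00:00:02")
adapter.transmit("vnic0", b"hello")
```

To the operating system each vNIC looks like an independent card, which is why existing drivers and applications need no changes.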
In the physical view, virtual I/O replaces a server’s multiple I/O cables with a single cable that provides a shared transport for all network and storage connections. That cable (or commonly two cables for redundancy) connects to an external device, which then provides connections to the data center networks.
Server I/O is a critical component of successful and effective server deployments, particularly for virtualized servers. To accommodate multiple applications, virtualized servers demand more network bandwidth and connections to more networks and storage. According to one survey, 75% of virtualized servers require seven or more I/O connections per device, and are likely to require more frequent I/O reconfigurations.[3]
In virtualized data centers, I/O performance problems arise from running numerous virtual machines (VMs) on one server. In early server virtualization implementations, the number of virtual machines per server was typically limited to six or fewer. But it was found that servers could safely run seven or more applications, often using 80 percent of total server capacity, an improvement over the average 5 to 15 percent utilization of non-virtualized servers.
However, increased utilization created by virtualization placed a significant strain on the server’s I/O capacity. Network traffic, storage traffic, and inter-server communications combine to impose increased loads that may overwhelm the server's channels, leading to backlogs and idle CPUs as they wait for data.[4]
Virtual I/O addresses performance bottlenecks by consolidating I/O to a single connection whose bandwidth ideally exceeds the I/O capacity of the server, ensuring that the link itself is not a bottleneck. That bandwidth is then dynamically allocated in real time across multiple virtual connections to both storage and network resources. In I/O-intensive applications, this approach can help increase both VM performance and the potential number of VMs per server.
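Dynamic allocation of a consolidated link can be sketched as a simple proportional-share scheme. This is a simplified illustration of the general technique, not any vendor's actual algorithm; the function name and the example demand figures are invented for the sketch.

```python
def allocate_bandwidth(link_gbps: float, demands: dict) -> dict:
    """Split a shared link's bandwidth across virtual connections.

    If the link can satisfy all demands, each connection gets its full
    demand; under contention, capacity is divided in proportion to demand.
    (Simplified illustration only.)
    """
    total = sum(demands.values())
    if total <= link_gbps:
        return dict(demands)  # no contention: grant every demand in full
    scale = link_gbps / total
    return {name: demand * scale for name, demand in demands.items()}

# A 40 Gbps consolidated link shared by network and storage connections
# whose combined demand (80 Gbps) exceeds the link capacity:
shares = allocate_bandwidth(
    40.0, {"vNIC-web": 20.0, "vNIC-db": 20.0, "vHBA-san": 40.0}
)
# Each connection receives half its demand: 10, 10, and 20 Gbps.
```

A real implementation would reallocate continuously as demands change; the point of the sketch is that the split is computed over all virtual connections at once, rather than being fixed per physical cable.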
Virtual I/O systems that include quality of service (QoS) controls can also regulate I/O bandwidth to specific virtual machines, thus ensuring predictable performance for critical applications. QoS thus increases the applicability of server virtualization for both production server and end-user applications.
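One common mechanism behind such QoS controls is a token bucket applied per virtual interface: tokens accrue at the guaranteed rate, and traffic is admitted only while tokens remain. The sketch below shows the generic technique, not a specific product's implementation; the rate and burst figures are illustrative.

```python
class TokenBucket:
    """Per-VM rate limiter: tokens accrue at `rate` bytes/sec up to
    `burst`, and a packet is admitted only if enough tokens remain."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate      # sustained bandwidth, in bytes per second
        self.burst = burst    # maximum burst size, in bytes
        self.tokens = burst   # start with a full bucket
        self.last = 0.0       # timestamp of the last refill

    def allow(self, size: int, now: float) -> bool:
        # Refill for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

# A VM limited to 1 MB/s with a 10 KB burst allowance:
bucket = TokenBucket(rate=1_000_000, burst=10_000)
ok = bucket.allow(8_000, now=0.0)             # fits within the initial burst
blocked = not bucket.allow(8_000, now=0.001)  # only ~1,000 bytes of tokens refilled
```

Because each VM draws from its own bucket, a noisy neighbor cannot consume bandwidth reserved for a critical application, which is what makes the performance predictable.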
Blade server chassis enhance density by packaging many servers (and hence many I/O connections) in a small physical space. Virtual I/O consolidates all storage and network connections to a single physical interconnect, which eliminates any physical restrictions on port counts. Virtual I/O also enables software-based configuration management, which simplifies control of the I/O devices. The combination allows more I/O ports to be deployed in a given space, and facilitates the practical management of the resulting environment.[9]