I/O virtualization

In virtualization, input/output virtualization (I/O virtualization) is a methodology to simplify management, lower costs and improve performance of servers in enterprise environments. I/O virtualization environments are created by abstracting the upper layer protocols from the physical connections.[1]

The technology enables one physical adapter card to appear as multiple virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs).[2] Virtual NICs and HBAs function as conventional NICs and HBAs, and are designed to be compatible with existing operating systems, hypervisors, and applications. To networking resources (LANs and SANs), they appear as normal cards.
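
One common way to implement this one-to-many mapping is PCI SR-IOV, in which the physical adapter exposes "virtual functions" that the operating system or hypervisor enumerates as independent NICs. As a rough illustration only (SR-IOV is one possible mechanism, not necessarily the one a given virtual I/O product uses), the following Python sketch uses the standard Linux sysfs interface to ask an SR-IOV-capable NIC to expose several virtual functions; the interface name eth0 is a placeholder.

    from pathlib import Path

    def create_virtual_nics(iface: str, count: int) -> None:
        """Ask an SR-IOV capable NIC to expose `count` virtual functions.

        Illustrative sketch only: assumes a Linux host, root privileges,
        and an adapter whose driver supports SR-IOV.
        """
        dev = Path(f"/sys/class/net/{iface}/device")
        total = int((dev / "sriov_totalvfs").read_text())  # hardware limit
        if count > total:
            raise ValueError(f"{iface} supports at most {total} virtual functions")
        # Reset to zero first; many drivers refuse to change a non-zero VF count directly.
        (dev / "sriov_numvfs").write_text("0")
        (dev / "sriov_numvfs").write_text(str(count))

    if __name__ == "__main__":
        create_virtual_nics("eth0", 4)  # "eth0" is a placeholder interface name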

In the physical view, virtual I/O replaces a server’s multiple I/O cables with a single cable that provides a shared transport for all network and storage connections. That cable (or commonly two cables for redundancy) connects to an external device, which then provides connections to the data center networks.

Background

Server I/O is a critical component to successful and effective server deployments, particularly with virtualized servers. To accommodate multiple applications, virtualized servers demand more network bandwidth and connections to more networks and storage. According to a survey, 75% of virtualized servers require 7 or more I/O connections per device, and are likely to require more frequent I/O reconfigurations.[3]

In virtualized data centers, I/O performance problems are caused by running numerous virtual machines (VMs) on one server. In early server virtualization implementations, the number of virtual machines per server was typically limited to six or fewer. It was later found that a server could safely run seven or more applications, often using 80 percent of total server capacity, an improvement over the average 5 to 15 percent utilization of non-virtualized servers.
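
The utilization figures above imply a consolidation ratio of roughly eight legacy workloads per virtualized server, as the following back-of-the-envelope arithmetic shows (illustrative numbers only; the 10 percent figure is simply the midpoint of the 5 to 15 percent range):

    # Rough consolidation ratio implied by the utilization figures above.
    legacy_utilization = 0.10    # non-virtualized server: ~5-15% busy on average
    virtualized_target = 0.80    # virtualized server: ~80% of capacity in use

    consolidation_ratio = virtualized_target / legacy_utilization
    print(f"~{consolidation_ratio:.0f} legacy workloads per virtualized server")
    # -> ~8, consistent with running "seven or more applications per server"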

However, increased utilization created by virtualization placed a significant strain on the server’s I/O capacity. Network traffic, storage traffic, and inter-server communications combine to impose increased loads that may overwhelm the server's channels, leading to backlogs and idle CPUs as they wait for data.[4]

Virtual I/O addresses performance bottlenecks by consolidating I/O to a single connection whose bandwidth ideally exceeds the I/O capacity of the server itself, so that the link does not become a bottleneck. That bandwidth is then dynamically allocated in real time across multiple virtual connections to both storage and network resources. In I/O-intensive applications, this approach can help increase both VM performance and the potential number of VMs per server.
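
How the consolidated link's bandwidth is divided in real time is product-specific. The sketch below shows one simple policy, weighted proportional sharing, chosen here purely for illustration rather than as a description of any particular vendor's scheduler: each virtual connection receives a share of the link in proportion to its weight.

    def allocate_bandwidth(link_gbps: float, weights: dict[str, float]) -> dict[str, float]:
        """Split a consolidated I/O link across virtual connections by weight.

        Simplified sketch: real virtual I/O schedulers also react to actual
        demand, burstiness, and link-level flow control.
        """
        total = sum(weights.values())
        return {name: link_gbps * w / total for name, w in weights.items()}

    # Example: a 40 Gbit/s shared link carved into two vNICs and one vHBA.
    shares = allocate_bandwidth(40.0, {"vnic-lan": 2.0, "vnic-live-migration": 1.0, "vhba-san": 1.0})
    print(shares)  # {'vnic-lan': 20.0, 'vnic-live-migration': 10.0, 'vhba-san': 10.0}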

Virtual I/O systems that include quality of service (QoS) controls can also regulate I/O bandwidth to specific virtual machines, thus ensuring predictable performance for critical applications. QoS thus increases the applicability of server virtualization for both production server and end-user applications.
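
Such a QoS policy typically pairs a guaranteed minimum with an optional cap for each virtual machine. The sketch below illustrates that idea in simplified form (the VM names and figures are invented for the example): every VM first receives its guaranteed floor, and any leftover capacity is shared among VMs that still have headroom, up to their caps.

    def qos_allocate(link_gbps: float, policies: dict[str, dict[str, float]]) -> dict[str, float]:
        """Give each VM its guaranteed minimum, then share the remainder up to each cap.

        Simplified single-pass sketch; a production scheduler enforces this
        continuously as demand changes.
        """
        alloc = {vm: p["min"] for vm, p in policies.items()}
        spare = link_gbps - sum(alloc.values())
        headroom = {vm: p["max"] - alloc[vm] for vm, p in policies.items() if p["max"] > alloc[vm]}
        total_headroom = sum(headroom.values())
        if spare > 0 and total_headroom > 0:
            for vm, room in headroom.items():
                alloc[vm] += min(room, spare * room / total_headroom)
        return alloc

    # Example: a critical database VM keeps a 10 Gbit/s floor on a 40 Gbit/s link.
    print(qos_allocate(40.0, {
        "db-prod":  {"min": 10.0, "max": 25.0},
        "web-farm": {"min": 5.0,  "max": 20.0},
        "batch":    {"min": 1.0,  "max": 40.0},
    }))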

Benefits

Blade server chassis enhance density by packaging many servers (and hence many I/O connections) in a small physical space. Virtual I/O consolidates all storage and network connections to a single physical interconnect, which eliminates any physical restrictions on port counts. Virtual I/O also enables software-based configuration management, which simplifies control of the I/O devices. The combination allows more I/O ports to be deployed in a given space, and facilitates the practical management of the resulting environment.[9]
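
In practice, software-based configuration management means describing a server's I/O as a profile, a named set of vNICs and vHBAs with their LAN/SAN assignments and bandwidth limits, that can be applied to or moved between physical servers without recabling. The sketch below shows what such a profile might look like as a data structure; the field names are invented for illustration and do not correspond to any particular vendor's schema.

    from dataclasses import dataclass, field

    @dataclass
    class VirtualNIC:
        name: str
        network: str           # LAN or VLAN the vNIC attaches to
        bandwidth_gbps: float  # QoS cap on the shared link

    @dataclass
    class VirtualHBA:
        name: str
        fabric: str            # SAN fabric the vHBA attaches to
        bandwidth_gbps: float

    @dataclass
    class IOProfile:
        """A server's I/O identity, applied in software rather than by cabling."""
        server: str
        vnics: list[VirtualNIC] = field(default_factory=list)
        vhbas: list[VirtualHBA] = field(default_factory=list)

    # Example: one blade gets two vNICs and one vHBA over its single shared link.
    blade7 = IOProfile(
        server="blade-07",
        vnics=[VirtualNIC("prod-lan", "VLAN 100", 10.0),
               VirtualNIC("mgmt", "VLAN 200", 1.0)],
        vhbas=[VirtualHBA("boot-san", "fabric-A", 8.0)],
    )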

Notes and References

  1. Scott Lowe. "Virtualization strategies > Benefiting from I/O virtualization." TechTarget, April 21, 2008. Retrieved November 4, 2009.
  2. Scott Hanson. "Strategies to Optimize Virtual Machine Connectivity." Dell. Retrieved November 4, 2009.
  3. Keith Ward. "New Things to Virtualize." Virtualization Review, March 31, 2008. Retrieved November 4, 2009.
  4. Charles Babcock. "Virtualization's Promise And Problems." InformationWeek, May 16, 2008. Retrieved November 4, 2009.
  5. Paul Travis. "Tech Road Map: Keep An Eye On Virtual I/O." Network Computing, June 8, 2009. Retrieved November 4, 2009.
  6. David Marshal. "PrimaCloud offers new cloud computing service built on Xsigo's Virtual I/O." InfoWorld, July 20, 2009. Retrieved November 4, 2009.
  7. Damouny; Rolf Neugebauer. "I/O Virtualization (IOV) & its uses in the network infrastructure: Part 1." Embedded.com, June 1, 2009. Retrieved November 4, 2009. Original link dead; archived January 22, 2013 at https://archive.today/20130122020822/http://www.embedded.com/design/networking/217701325?pgno=1
  8. Nick Lippis. "Unified Fabric Options Are Finally Here." Lippis Report 126, May 2009. Retrieved November 4, 2009.
  9. David Chernicoff. "I/O Virtualization for Blade Servers." Windows IT Pro. Retrieved November 4, 2009.