PCIe Sharing: Breaking the “1 to 1” Model Without Breaking Transparency
In an era when data center managers and administrators are relentlessly focused on lowering their total cost of ownership (TCO) while being measured against the most stringent of service levels, the need has arrived for innovative technology that can scale out virtualized applications without bankrupting the business. Most data center administrators will agree that higher performance, scale, and virtualized server density are limited by a lack of sufficient I/O and memory, not CPU.
Many data center administrators spend valuable time and resources provisioning physical I/O resources directly to a single server. Some even over-provision for fear that they will not be able to supply the I/O required to meet the application's service level; for example, giving each server its own dedicated 10 Gigabit Ethernet (10 GbE) network interface card, a dedicated network switch port, and a cable. This common practice creates a "static" 1 to 1 relationship between resource and server. One might argue that this is counterintuitive, since the de facto best practice in today's data center is to pool and share resources (compute, network, and storage) via virtualization.
These I/O resources are PCIe (PCI Express) based adapters (also known as peripherals) that allow servers to connect to high-speed networks and high-performance storage. PCIe is a well-known, well-understood, and universally accepted server technology, used to connect and access resources such as network cards, Fibre Channel host bus adapters (HBAs), RAID (redundant array of independent disks) controllers, and, more recently, PCIe-based SSDs (solid state drives).
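To make this concrete, the following minimal sketch (assuming a Linux host; /sys/bus/pci/devices is the standard sysfs location, but the script itself is illustrative, not production tooling) enumerates the PCIe adapters a server sees:

```python
import os

PCI_SYSFS = "/sys/bus/pci/devices"  # standard Linux sysfs tree for PCIe devices

def read_attr(dev, attr):
    """Read one sysfs attribute (e.g. vendor, device, class) for a PCIe device."""
    with open(os.path.join(PCI_SYSFS, dev, attr)) as f:
        return f.read().strip()

def list_pcie_devices():
    """Yield (address, vendor_id, device_id, class_code) for every PCIe device."""
    for dev in sorted(os.listdir(PCI_SYSFS)):
        yield dev, read_attr(dev, "vendor"), read_attr(dev, "device"), read_attr(dev, "class")

if __name__ == "__main__":
    for addr, vendor, device, cls in list_pcie_devices():
        # Class codes: 0x01xxxx are storage controllers, 0x02xxxx network controllers
        print(f"{addr}  vendor={vendor}  device={device}  class={cls}")
```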
Through the use of disruptive technologies such as hardware-based virtualization, data center administrators can now pool high-value, in-demand I/O resources and share them across multiple servers instead of dedicating a single resource to each server. This can be done without sacrificing the native capabilities that make the I/O resource valuable to the administrator, such as hardware-accelerated checksum offload or the card's buffering behavior when it is shared among virtualized servers.
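One widely deployed building block for this kind of hardware-based I/O virtualization is SR-IOV, in which a single physical adapter exposes multiple virtual functions (VFs), each appearing as its own PCIe device while offloads remain in hardware. As a hedged sketch (assuming a Linux host with an SR-IOV-capable adapter; the PCIe address below is hypothetical), VFs are created by writing a count to the device's sriov_numvfs attribute:

```python
import os

# Hypothetical PCIe address of an SR-IOV-capable adapter; substitute your own.
PF_ADDR = "0000:3b:00.0"
PF_PATH = f"/sys/bus/pci/devices/{PF_ADDR}"

def max_vfs():
    """Read how many virtual functions the physical function supports."""
    with open(os.path.join(PF_PATH, "sriov_totalvfs")) as f:
        return int(f.read())

def create_vfs(count):
    """Carve the physical adapter into `count` virtual functions.

    Each VF shows up as an independent PCIe device that can be handed
    to a different virtual machine, while checksum and other offloads
    stay in the adapter's hardware.
    """
    if count > max_vfs():
        raise ValueError(f"adapter supports at most {max_vfs()} VFs")
    with open(os.path.join(PF_PATH, "sriov_numvfs"), "w") as f:
        f.write(str(count))  # the kernel instantiates the VFs on write

if __name__ == "__main__":
    create_vfs(4)  # requires root privileges
```

Note that SR-IOV shares an adapter among virtual machines on a single host; the appliance model described here extends the same idea of carving one physical resource into virtual instances across multiple physical hosts.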
Each server gains access to a virtualized instance of a traditional physical I/O resource. Sharing the physical resource means that communication among the connected, sharing servers achieves higher performance, resource utilization improves, and TCO drops drastically.
Beyond improving the performance and utilization of the shared resources, there is also the value of centralized provisioning and management. A server can now gain access to an I/O resource that, for economic or logistical reasons, it previously could not.
Imagine provisioning each server in your data center with its own physical PCIe-based SSD adapter; most would agree that this would be cost-prohibitive. Now imagine pooling two or four of these same resources in a centralized appliance and presenting a virtualized PCIe-based SSD to multiple server hosts. Each host receives its own portion of the resource, or, depending on the configuration, all server hosts see the resource as a single shared pool, still connected via PCIe.
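A toy allocator makes the provisioning model concrete (purely illustrative: the class, capacities, and host names below are invented for this sketch; real appliances implement this in hardware and firmware):

```python
from dataclasses import dataclass, field

@dataclass
class SSDPool:
    """Toy model of a centralized appliance that pools PCIe SSD capacity."""
    capacity_gb: int
    allocations: dict = field(default_factory=dict)  # host -> GB provisioned

    def allocated_gb(self):
        return sum(self.allocations.values())

    def provision(self, host, size_gb):
        """Present a private slice of the pool to one server host."""
        if self.allocated_gb() + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.allocations[host] = self.allocations.get(host, 0) + size_gb
        return f"{host}: {size_gb} GB virtual PCIe SSD"

# Pool two 1.6 TB PCIe SSDs in one appliance and share across four hosts.
pool = SSDPool(capacity_gb=2 * 1600)
for host in ("host-a", "host-b", "host-c", "host-d"):
    print(pool.provision(host, 400))
print(f"unallocated: {pool.capacity_gb - pool.allocated_gb()} GB")
```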
The performance benefit alone is substantial: operations such as block copies within a shared datastore, transactions in a clustered database, or virtual machine live migration between server hosts can run at local speeds and feeds, as if the storage resource were directly attached to each server host.
Sharing in the data center has been proven over decades as a better way to achieve scale, resource utilization, and even performance. As we expand beyond what we already know can be shared (server, network, and storage) to other areas of the data center, it becomes clear that it is prime time to leverage sharing for PCIe-based resources.