Tempering Convergence

The pace of enterprise network convergence is tempered by a range of factors, some technological and some not. Despite the long-standing goal of converging LAN and SAN traffic on a single network fabric, most enterprise data centers continue to run multiple interconnects, such as InfiniBand and Fibre Channel, while evaluating emerging protocols such as Fibre Channel over (Enhanced) Ethernet (FCoE). The technological considerations include protocol maturity, latency and, most importantly, lossless reliability and distance support. Users must also weigh the demands that new technologies and services, such as server and storage virtualization and cloud computing, place on these environments. There are political issues at work here, too. Complete convergence, though attractive in theory, implies that an enterprise's SAN group would entrust all of its mission-critical traffic to the responsibility and management system of the networking group, and that handover isn't likely to occur soon.

It's a long-standing goal in enterprise networking to cost-effectively converge all local and storage area network (LAN and SAN) traffic on a single powerful infrastructure via one flexible, reliable, high-performance and low-latency protocol. And the future data center will, indeed, one day rely on some form of converged fabric with server and storage virtualization.

To achieve such convergence, the common underlying technology would have to subsume the requirements of all the competing protocols. However, the various protocols used today are highly specialized, each designed for a specific networking purpose. Consolidating network adapters, and hypothetically the networks in a data center, is possible only by integrating data-center-specific requirements into Ethernet. These modifications, which apply to bridges and switches as well as end devices, add features to Ethernet that are already part of Fibre Channel and other lossless protocols.
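
As a rough illustration of what "lossless" means here, the sketch below models priority-based flow control (PFC)-style behavior in Python. The class name, queue depths and thresholds are all illustrative assumptions, not taken from any standard or product: the point is only that when one traffic class's receive queue nears capacity, just that class is paused, so storage traffic can be kept drop-free while other classes keep flowing.

```python
# Minimal sketch of PFC-style per-priority pause behavior.
# Queue depths, thresholds and class names are illustrative only.

class PfcPort:
    def __init__(self, queue_depth=100, pause_threshold=80):
        self.queue_depth = queue_depth
        self.pause_threshold = pause_threshold
        # One receive queue and pause state per 802.1p priority (0-7).
        self.queues = {prio: [] for prio in range(8)}
        self.paused = {prio: False for prio in range(8)}

    def receive(self, frame, prio):
        queue = self.queues[prio]
        if len(queue) >= self.queue_depth:
            # Plain Ethernet would silently drop here; a lossless
            # class must be paused before this point is ever reached.
            raise RuntimeError(f"frame dropped on priority {prio}")
        queue.append(frame)
        # Pause only the congested priority; other classes keep flowing.
        if len(queue) >= self.pause_threshold and not self.paused[prio]:
            self.paused[prio] = True
            print(f"PFC pause sent for priority {prio}")

    def drain(self, prio):
        if self.queues[prio]:
            self.queues[prio].pop(0)
        if len(self.queues[prio]) < self.pause_threshold and self.paused[prio]:
            self.paused[prio] = False
            print(f"PFC resume sent for priority {prio}")
```

Contrast this with classic Ethernet PAUSE frames, which stop an entire link; per-priority granularity is precisely the feature borrowed from lossless fabrics such as Fibre Channel.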

Data Center Bridging (DCB) is a collection of emerging standards aimed at making Ethernet a lossless, high-reliability layer 1/2 technology within large data centers. The standardization process for Enhanced Ethernet is still not finalized, so in the meantime a variety of related technologies and solutions have been rolled out (DCE and CEE, different vendor brandings of almost the same technology). All of these implementations claim to be a subset of Enhanced Ethernet, but how close to the final standard they will be, and what interoperability issues they will create, remains to be seen. The ultimate success of FCoE and DCB depends on the willingness of the different vendors to support the new standard as a common denominator; otherwise, each vendor-specific implementation will remain a niche technology.
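
For reference, the main IEEE drafts usually bundled under the DCB/CEE/DCE labels are 802.1Qbb (Priority-based Flow Control), 802.1Qaz (Enhanced Transmission Selection and the DCBX exchange protocol) and 802.1Qau (Congestion Notification). The snippet below lists them and shows how a buyer might sanity-check a vendor's claimed subset; the vendor feature set shown is a made-up example of a pre-standard product.

```python
# IEEE drafts commonly grouped under "Data Center Bridging".
DCB_STANDARDS = {
    "802.1Qbb": "Priority-based Flow Control (PFC)",
    "802.1Qaz": "Enhanced Transmission Selection (ETS) and DCBX",
    "802.1Qau": "Congestion Notification (QCN)",
}

# Hypothetical pre-standard product claiming partial DCB support.
vendor_claims = {"802.1Qbb", "802.1Qaz"}

for std in sorted(set(DCB_STANDARDS) - vendor_claims):
    print(f"Not yet supported: {std} - {DCB_STANDARDS[std]}")
```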


In its current form, DCB/FCoE offers valuable I/O consolidation for racks of single-rack-unit or blade servers running new converged network adapters (CNAs). Emerging FCoE switches behave effectively as top-of-rack aggregators, distributing traffic to either legacy LAN or SAN infrastructure. The performance characteristics promised by this technology, low latency and 10 to 40 Gbit/s of bandwidth, are intriguing (Figure 3).
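
To make the consolidation concrete: FCoE wraps a complete Fibre Channel frame inside an Ethernet frame carrying EtherType 0x8906, so a CNA can put SAN traffic on the same wire as LAN traffic. The Python sketch below builds such a frame in simplified form; the header layout is abbreviated and the SOF/EOF delimiter values, MAC addresses and payload are placeholders, so treat it as illustrative rather than wire-accurate.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # FCoE data frames; FIP control frames use 0x8914

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap an FC frame in a simplified FCoE/Ethernet frame.

    Real FCoE adds a versioned header with start-of-frame and
    end-of-frame delimiters; padding and the Ethernet FCS are left
    to the MAC and omitted here.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    reserved = b"\x00" * 13   # version + reserved bits, simplified
    sof = b"\x36"             # illustrative start-of-frame delimiter
    eof = b"\x41"             # illustrative end-of-frame delimiter
    return eth_header + reserved + sof + fc_frame + eof + b"\x00" * 3

frame = fcoe_frame(b"\x0e\xfc\x00\x00\x00\x01",   # example fabric-assigned MAC
                   b"\x00\x11\x22\x33\x44\x55",
                   b"FC-frame-placeholder")
print(len(frame), "bytes before MAC padding/FCS")
```

The practical consequence is the one described above: the FCoE switch strips this encapsulation at the top of the rack and forwards native FC toward the SAN and native Ethernet toward the LAN.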

The enthusiasm for FCoE is tempered by several factors. First, the protocol is far from proven in the kind of demanding, large-scale deployments typical of high-performance computing. Second, FCoE has several distance-related limitations. No products available today support Inter-Switch Links (ISLs) between geographically dispersed DCB/FCoE switches, and because multi-hop support, congestion management and flow control are still forthcoming, the ability to natively connect the technology across 100 kilometers or more has not been demonstrated. Finally, very few storage vendors offer native FCoE storage interfaces. This means that Fibre Channel will play an even more important role as the transport between data centers and FCoE blade servers.
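
The distance problem is easy to quantify. A lossless link must be able to absorb everything in flight after the receiver signals "pause", so the buffering required grows with the bandwidth-delay product. The back-of-the-envelope calculation below, assuming the common rule of thumb of roughly 5 microseconds of propagation delay per kilometer of fiber, shows why a 100 km DCB/FCoE ISL is a far harder problem than a few racks' worth of cable.

```python
# Bandwidth-delay product for a lossless link: the receiver must buffer
# everything still in flight after it sends a pause. Assumes ~5 us/km
# propagation delay in fiber.

def buffer_needed_bytes(link_gbps: float, distance_km: float,
                        us_per_km: float = 5.0) -> float:
    round_trip_s = 2 * distance_km * us_per_km * 1e-6
    bits_in_flight = link_gbps * 1e9 * round_trip_s
    return bits_in_flight / 8

for km in (0.1, 10, 100):
    mb = buffer_needed_bytes(10, km) / 1e6
    print(f"10G over {km:>5} km: ~{mb:.2f} MB of buffering")
# 10G over   0.1 km: ~0.00 MB
# 10G over    10 km: ~0.13 MB
# 10G over   100 km: ~1.25 MB  (per lossless priority, in practice)
```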


For managers of enterprise data centers, FCoE's promise of cost-effectively converging Fibre Channel and Ethernet, the two dominant enterprise networking protocols, is too great to be overlooked. At the same time, its hype as today's single unifying fabric for all enterprise LAN and SAN traffic must be closely scrutinized. There is a compelling value proposition for adopting FCoE on some types of midrange and rack-mounted servers. But for many years to come, the real-world enterprise data center built on high-end servers will have to continue to support a range of existing multiprotocol fabrics. The challenge facing data center managers will be smartly matching technologies with applications to satisfy technical requirements and business objectives most affordably. Protocol-agnostic Wavelength Division Multiplexing (WDM) will continue to serve as the convergence mechanism for an array of uniquely valuable protocols, linking multiple enterprise-class data centers and transporting converged fabrics where required.
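
A simple way to picture WDM's protocol-agnostic role is as a wavelength-per-protocol multiplexer: each client signal rides its own lambda, unmodified, over the same fiber pair. The toy model below sketches that mapping in Python; the channel numbers, client list and line rates are made-up examples, not a real channel plan.

```python
# Toy model of protocol-agnostic WDM transport: each client protocol is
# assigned its own wavelength and carried transparently. All values below
# are illustrative.

from dataclasses import dataclass

@dataclass
class Lambda:
    channel: int
    client_protocol: str
    rate_gbps: float

fiber_pair = [
    Lambda(21, "8G Fibre Channel", 8.5),
    Lambda(22, "10 Gigabit Ethernet", 10.3),
    Lambda(23, "10G FCoE/DCB", 10.3),
    Lambda(24, "DDR InfiniBand", 5.0),
]

total = sum(lam.rate_gbps for lam in fiber_pair)
print(f"{len(fiber_pair)} protocols converged on one fiber pair, ~{total:.0f} Gbit/s")
```

Because the transport never interprets the client protocol, a fabric change at either end (say, migrating a lambda from native FC to FCoE) requires no change to the WDM layer itself.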


IT decision makers face a difficult choice as they weigh the merits of a converged network solution and the possible migration paths. Indeed, this is why WDM remains the data transport of choice.

Todd Bundy, Director, Global Alliance Management Enterprise, ADVA Optical Networking

Todd Bundy has 26 years of experience in the storage networking industry. He is a recognized expert in SAN and optical networking, and specializes in storage applications over various types of networks to meet corporate contingency plans.

Throughout his career, Todd has participated in many successful large-scale disaster recovery and data center consolidation projects with companies such as IBM, EMC, HDS, HP and Sun, using ADVA's FSP (Fiber Service Platform) WDM technology.

In his work with ADVA Optical Networking, Todd is helping support new standards in optical storage networking such as 8G and 16G Fibre Channel, 5G and 10G InfiniBand, and 10G and 40G FCoE/DCB (Fibre Channel over Enhanced Ethernet). In pursuit of new operating standards, he leads ADVA Optical Networking's interoperability programs to support the infrastructure intelligence needed to take "Cloud Computing" to the next level.