Fibre Channel - The Need for Speed with OM3/OM4 Optical Connectivity



By Doug Coleman, Corning Incorporated
Appearing in Cabling Installation and Maintenance, November 2017

Introduction

Fibre Channel transport over laser-optimized 50/125 µm OM3/OM4 multimode fiber connectivity is the primary method of reliably linking servers to external data storage devices in enterprise data centers. The ongoing evolution of high-performance server and storage technologies drives the need for increased Fibre Channel data rates to maximize operating efficiency and keep costs low. This paper discusses the server and storage technologies that warrant higher Fibre Channel data rates, along with the OM3/OM4 optical connectivity that supports them.

 

Fibre Channel – The Need for Speed

Fibre Channel’s deterministic data delivery, low latency, and proven reliability have made it the leading transport technology for linking servers to external data storage. As server and storage technologies have progressed, Fibre Channel data rates have increased in tandem to support them. See Figure 1.

Typical enterprise data centers are deploying servers with integrated multi-core processors ranging from four to 12 cores. Each core normally has 2 GHz of processing capability, which translates into 8-24 GHz of total capability. In addition, servers now use Peripheral Component Interconnect Express 3 (PCIe3, 8G/lane) bus speeds, and PCIe4 (16G/lane) is fast approaching to complement the increased number of processor cores. The increased server processing necessitates higher Ethernet network data rate input/output (I/O) interconnects (10G/25G) into the server network interface card (NIC), as well as increased Fibre Channel data rates (16 GFC/32 GFC) into the server host bus adapters (HBAs), to access and deliver external data for server applications. The future server trend is toward an increased number of processor cores, such that Ethernet 50G/100G (NIC) and Fibre Channel 64 GFC (HBA) interconnects will be required. See Figure 2.
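The aggregate-processing arithmetic above can be checked with a quick sketch (a minimal illustration only; the per-core clock and core counts are the representative figures cited in the text, not a benchmark):

```python
# Representative figures from the text: four to 12 cores at ~2 GHz per core.
def total_processing_ghz(cores: int, ghz_per_core: float = 2.0) -> float:
    """Aggregate processing capability across all cores, in GHz."""
    return cores * ghz_per_core

# A 4-core and a 12-core server span the 8-24 GHz range cited above.
low = total_processing_ghz(4)    # 8.0 GHz
high = total_processing_ghz(12)  # 24.0 GHz
print(f"{low:.0f}-{high:.0f} GHz total capability")
```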

Figure 2: Server Ethernet NIC and Fibre Channel HBA

 

Using 32G Fibre Channel (32 GFC), Brocade has demonstrated a 71 percent reduction in response time to access 8G flash storage, compared to using 8G Fibre Channel. See Figure 3. By adopting flash, data centers achieve resource efficiencies that allow them to host more IT services and store more data well into the future. Flash storage deployment is robust: all-flash arrays (AFAs) are quickly replacing legacy hard disk drive (HDD)-based systems to become the primary enterprise storage solution.
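As a rough illustration of what a 71 percent reduction means, the sketch below applies the stated percentage to a baseline latency (the 1.0 ms baseline is hypothetical, not a figure from the Brocade test):

```python
def reduced_response_time(baseline_ms: float, reduction_pct: float = 71.0) -> float:
    """Response time remaining after the stated percentage reduction."""
    return baseline_ms * (1.0 - reduction_pct / 100.0)

# Hypothetical 1.0 ms baseline at 8 GFC -> roughly 0.29 ms at 32 GFC.
print(f"{reduced_response_time(1.0):.2f} ms")
```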

 

Data Center Multimode Fiber Connectivity Distances

Ethernet and Fibre Channel transmission standards develop guidance based on specific criteria that include technical and commercial feasibility. A primary objective is to deliver economical solutions that meet distance objectives representative of deployed multimode fiber connectivity channel lengths. Corning has tracked and modeled multimode and single-mode fiber data center channel lengths over an extended period. The trends show that as Ethernet data rates have increased from 10 to 40 to 100G, and Fibre Channel data rates from 8 to 16 to 32G, a 100 m channel distance covers approximately 95 percent of deployed OM3 and 90 percent of deployed OM4 channel lengths. See Figure 4. In other words, for the vast majority of data center users, a 100 m channel distance is more than sufficient to meet their needs.
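The coverage figure above is simply the share of deployed channels at or below a length limit. A small sketch makes the calculation concrete (the sample lengths below are hypothetical, not Corning's survey data):

```python
def coverage_at(lengths_m, limit_m=100.0):
    """Fraction of channel lengths at or below the given limit, in metres."""
    return sum(1 for length in lengths_m if length <= limit_m) / len(lengths_m)

# Hypothetical sample of deployed channel lengths (metres).
sample = [12, 25, 40, 55, 60, 75, 80, 90, 95, 140]
print(f"{coverage_at(sample):.0%} of channels are <= 100 m")  # 90%
```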

Figure 4: Data Center OM3 and OM4 channel length distributions

 

Fibre Channel – OM3/OM4 Optical Connectivity

Fibre Channel transport is essentially point-to-point optical connectivity. OM3/OM4 multimode fiber continues to be the leading optical media used in the data center for short-reach distances up to 100-150 m, and 16 GFC and 32 GFC networks using multimode optical fiber trunks are now being deployed. OM3/OM4 multimode fiber enables the use of vertical-cavity surface-emitting lasers (VCSELs), which provide synergistic, low-cost optical connectivity and electronics solutions.

To date, Fibre Channel has used only enhanced small form-factor pluggable (SFP+) transceivers with a duplex LC connector interface in the storage area network (SAN) electronics (server HBA, director switch, and storage). Factory-terminated MTP® connectorized trunks are commonly deployed from a central patching area in the main distribution area (MDA) to each area with servers, storage, and SAN directors. In the central patching area, MTP/LC modules break out the MTP connectors on the trunks into LC duplex ports. LC duplex jumpers then provide the port-to-port connectivity required between any two devices, such as server to SAN director or storage to SAN director.

At the server cabinets and storage devices, MTP/LC modules are used to break out the MTP connector of the trunk into duplex ports for interconnection to the server and storage HBAs using LC duplex jumpers. At the SAN directors, however, it is common to use an MTP/LC harness instead of a module to break out the trunk MTP connector into LC duplex ports. These high-density harness assemblies reduce cable bulk and congestion at the director cabinet(s), and the harness LC legs can be staggered to match the port spacing of the individual line cards. Pre-cabling the SAN director this way optimizes cable management and reduces risk by moving day-to-day move, add, and change work away from the electronic equipment to the passive patching area in the MDA. See Figure 5.

Figure 5: Structured cabling for storage area network with Base-8 cabling

 

The Fibre Channel FC-PI6 standard includes a 128 GFC data rate that uses a QSFP transceiver with an 8- or 12-fiber MTP interface. The 128 GFC data rate uses parallel optics transmission technology. Parallel optics differs from traditional duplex-fiber serial communication in that data is transmitted and received simultaneously over multiple optical fibers. 128 GFC requires eight OM3 or OM4 fibers with 32 GFC transmission on each fiber: four fibers (4 fibers x 32 GFC/fiber) to transmit (Tx) and four fibers (4 fibers x 32 GFC/fiber) to receive (Rx). See Figure 6.
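The lane arithmetic above can be verified with a short sketch (lane counts and per-lane rate as stated in the text; the helper name is illustrative):

```python
def parallel_link(lanes_per_direction: int, gfc_per_lane: int):
    """Fiber count and aggregate rate for a bidirectional parallel-optics link.

    Each direction uses its own set of fibers, so the fiber count is
    twice the lane count; the aggregate rate per direction is the
    lane count times the per-lane rate.
    """
    fibers = 2 * lanes_per_direction
    aggregate_gfc = lanes_per_direction * gfc_per_lane
    return fibers, aggregate_gfc

# The 128 GFC variant: 4 lanes per direction at 32 GFC each.
fibers, rate = parallel_link(4, 32)
print(f"{fibers} fibers, {rate} GFC per direction")  # 8 fibers, 128 GFC
```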

Figure 6: 128 GFC parallel transmission

 

The 128 GFC data rate is the first parallel optics transmission variant defined for Fibre Channel. FC-PI7 activity is ongoing to include a 256 GFC parallel optics variant in the future.

Initial 128 GFC deployments are expected for inter-switch links (ISLs) using MTP connectivity throughout the link. Compared to the traditional Fibre Channel architecture with duplex fiber connections at the electronics, parallel-transmission optical connectivity will use 8-fiber MTP connectors with adapter panels in lieu of MTP/LC modules for interconnections. See Figure 7.

   
Figure 7: 128 GFC – Parallel connectivity with cross-connect structured Base-8 cabling

 

Summary

Fibre Channel transmission has a need for speed. Higher Fibre Channel data rates (32/64/128 GFC) are emerging in response to advances in server and storage technologies. Fibre Channel deployment distances in enterprise data centers remain concentrated below 100 m. OM3/OM4 50/125 μm multimode optical fiber is well-positioned to provide reliable, low-cost connectivity solutions for legacy and future Fibre Channel data rates in storage area networks.
