HERD for the Gigabit Era

By Katia Safonova and Kevin Bourg, Corning Optical Communications
Appearing in Broadband Technology Report, August 29, 2018

As cable operators continue to expand the capacity of their hybrid fiber-coaxial (HFC) networks, space within the headend or hub is at a premium. Each upgrade cycle brings in new transmitters and receivers deployed alongside new cable modem termination systems (CMTS). With the next wave of upgrades toward fiber deep or an all-passive coax plant, operators may face 10 times the number of nodes in the field – and an even higher premium on space within their facilities.

The good news is that the telecommunications industry as a whole has been facing these challenges for years, not just cable operators. Telephone companies, cable TV operators, competitive carriers and municipal operators alike grapple with limited space in their communications facilities and lack the real estate for continued expansion. Why is this considered good news? Because the industry has spent many years working toward a more agile environment: a way to deploy capacity and scale while moving away from proprietary platforms for service delivery in the access network. So today there is an established case for solving these concerns by leveraging network functions virtualization (NFV).

Relieving space concerns through virtualization

The central office network infrastructure deployed today was developed over more than 50 years and has become unfit for further expansion. The concept of re-architecting the central office as a data center (CORD) has proven results: by moving away from purpose-built hardware and into flexible, agile software structures, telcos have been reaping the benefits of scalability and rapid deployment of new and expanded services.

CORD provides three core advantages. The first, the scalable use of software hosted in the cloud, remains the most versatile and fundamental. The other two, software-defined networking (SDN) and NFV, should not be overlooked when CORD is applied to the headend.

The increased need for network capacity comes from a worldwide thirst for higher speeds. Superfast networks are required to deliver on the promise of emerging technologies, smart cities and the “internet of things.” For MSOs, demand for extended services, from video on demand to higher-definition television, along with the accompanying data, creates more pressure on the headend and its surrounding network, particularly as the network scales.

Traffic increases can be addressed by adding nodes and fiber, but this approach is only feasible and scalable up to a point. In the headend, an overabundance of fibers can force cumbersome, time-consuming identification of optimum jumper lengths and routes, as well as congestion in cabinets. Even worse, it can lead to operational risks, performance issues and, ultimately, costly upgrades.

Applying a CORD paradigm is one robust way for forward-looking cable operators to seize scalable advantages, helping them get ahead of demand by increasing network capability while also reducing costs.

How MSOs fit in

Consider the layered optical fiber architecture used in HFC networks, shown in Figure 1, with three fiber levels from the headend to the final node. While this general case may not apply to all MSOs (some will not include a secondary node), it shows how strain on the headend can be reduced by applying multiple layers to manage cabling requirements.
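
To make the aggregation effect concrete, here is a minimal back-of-the-envelope sketch in Python. The node counts and split ratios are hypothetical, chosen only to illustrate how intermediate fiber levels reduce the number of fibers the headend must terminate and manage directly.

```python
# Hypothetical illustration: how intermediate aggregation levels reduce the
# number of fiber terminations the headend must manage directly.
# All node counts and split ratios below are assumed for illustration only.

def headend_terminations(final_nodes: int, splits_per_level: list[int]) -> int:
    """Fibers terminating at the headend when each intermediate level
    aggregates `split` downstream fibers onto one upstream fiber."""
    fibers = final_nodes
    for split in splits_per_level:
        fibers = -(-fibers // split)  # ceiling division
    return fibers

# No intermediate levels: every final node gets a home-run fiber to the headend.
flat = headend_terminations(final_nodes=2000, splits_per_level=[])

# Two intermediate levels (primary and secondary nodes, as in Figure 1).
layered = headend_terminations(final_nodes=2000, splits_per_level=[8, 4])

print(f"Home-run fibers at the headend: {flat}")      # 2000
print(f"With two aggregation levels:    {layered}")   # 63
```

Fewer home-run fibers at the headend translates directly into fewer cabinets, jumpers and routes to manage as the node count grows.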

As remote PHY device (RPD) deployments increase, MSOs are in a position to begin virtualizing the headend. In an RPD deployment, the DOCSIS physical layer is moved from the headend to the field, inside a distributed HFC node. Space for large analog transmitters and digital/analog receivers is no longer necessary at the headend or hub site. Furthermore, the CMTS begins to look like a Layer 2 MAC-based switching machine that can be virtualized onto a non-proprietary services platform. This new RPD-based architecture reduces cooling and space constraints at the headend and hub, with standard data center-inspired gear replacing the proprietary CMTS and transmitters/receivers. So the concept of re-architecting the facility as a data center can be applied effectively in what we will call the headend re-architected as a data center (HERD).
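
The functional split can be pictured as a simple mapping of where each function lives before and after Remote PHY. The sketch below is a conceptual illustration only, not an operator's actual equipment inventory; the function names and locations are assumptions drawn from the description above.

```python
# Conceptual sketch of the MAC/PHY split -- not an actual equipment inventory.
# Function names and locations are assumptions based on the description above.
from dataclasses import dataclass

@dataclass
class Function:
    name: str
    location_integrated: str   # where it sits with an integrated CMTS
    location_remote_phy: str   # where it sits once Remote PHY is deployed

FUNCTIONS = [
    Function("DOCSIS MAC / scheduling",
             "headend (proprietary CMTS)",
             "headend (virtualized on white box servers)"),
    Function("DOCSIS PHY / RF modulation",
             "headend (CMTS line cards)",
             "field (remote PHY device in the node)"),
    Function("Analog optical transmitters/receivers",
             "headend",
             "removed (digital link to the node)"),
]

def headend_footprint(remote_phy: bool) -> list[str]:
    """List the functions that still occupy headend or hub space."""
    attr = "location_remote_phy" if remote_phy else "location_integrated"
    return [f.name for f in FUNCTIONS if getattr(f, attr).startswith("headend")]

print("Integrated CMTS headend:", headend_footprint(remote_phy=False))
print("Remote PHY headend:     ", headend_footprint(remote_phy=True))
```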

Figure 1. Traditional HFC architecture.

Join the HERD

By adopting a more agile framework, CATV operators can create a streamlined network that scales effectively and, in comparative terms, cost-effectively. A two-level spine-and-leaf architecture, shown in Figure 2, can help facilitate HERD and is well suited to moving traffic from cable modems or set-top boxes across the fabric to the router (east to west).

Figure 2. Two-level spine-and-leaf architecture.

This network architecture consists mainly of two parts: a spine switching layer and a leaf switching layer. Each leaf switch connects to every spine switch, greatly improving communication efficiency and reducing delay between servers. In addition, the two-level spine-and-leaf architecture lets MSOs avoid purchasing expensive core-layer switching devices and makes it easier to add switches and network devices as business needs grow, instead of buying them as part of the initial investment.
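
As an illustration of the full-mesh leaf-to-spine connectivity and the capacity planning it allows, here is a short Python sketch. The port counts and link speeds are assumed values for illustration, not recommendations.

```python
# A minimal sketch of the two-level spine-and-leaf fabric in Figure 2.
# Port counts and link speeds are illustrative assumptions, not recommendations.

def leaf_spine_fabric(leaves: int, spines: int,
                      downlinks_per_leaf: int, downlink_gbps: float,
                      uplink_gbps: float):
    """Build the full-mesh leaf-to-spine links and report oversubscription."""
    # Every leaf connects to every spine.
    links = [(f"leaf{i}", f"spine{j}")
             for i in range(1, leaves + 1) for j in range(1, spines + 1)]
    uplink_capacity = spines * uplink_gbps                  # per leaf: one uplink to each spine
    downlink_capacity = downlinks_per_leaf * downlink_gbps  # per leaf: server/RPD-facing ports
    return links, downlink_capacity / uplink_capacity

links, ratio = leaf_spine_fabric(leaves=6, spines=2,
                                 downlinks_per_leaf=48, downlink_gbps=10,
                                 uplink_gbps=100)

print(f"{len(links)} leaf-to-spine links")        # 12: each leaf reaches each spine
print(f"Oversubscription ratio: {ratio:.1f}:1")   # 2.4:1 with these assumed ports
```

Adding a leaf extends capacity at the edge, and adding a spine adds fabric bandwidth, which is what lets operators grow the fabric incrementally instead of sizing a core switch up front.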

With the HERD approach, cable operators can streamline their infrastructure by moving control and data plane functions out of proprietary CMTS platforms in the access network, whether located in the headend or distributed within hubs, and onto standard white box servers and switches at the headend.

Implementing SDN enables a simplified network structure that separates the network’s control and data planes and makes the control plane programmable. The SDN capability is implemented in software running on industry-available servers, controlling standard switches and I/O blades through an open interface. (Challenges will remain, however, in that some I/O blades will still need to be used in remote hub sites or locations.)
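
Conceptually, the separation looks like the sketch below: a central controller computes forwarding state and pushes it to standard switches over an abstract interface. This is a toy model of the idea, not the API of any real SDN controller or southbound protocol; the class and method names are hypothetical.

```python
# Toy model of control/data plane separation -- not the API of any real SDN
# controller or southbound protocol (OpenFlow, NETCONF, etc. are stand-ins).

class WhiteBoxSwitch:
    """Data plane: only stores and applies the rules it is given."""
    def __init__(self, name: str):
        self.name = name
        self.flow_table = {}   # match -> action

    def install_rule(self, match: str, action: str) -> None:
        self.flow_table[match] = action

class Controller:
    """Control plane: holds the network view and programs every switch."""
    def __init__(self, switches):
        self.switches = switches

    def provision_service(self, subscriber_vlan: str, destination: str) -> None:
        # Centralized policy: one call programs the whole fabric.
        for sw in self.switches:
            sw.install_rule(match=f"vlan={subscriber_vlan}",
                            action=f"forward_to={destination}")

fabric = [WhiteBoxSwitch("leaf1"), WhiteBoxSwitch("leaf2"), WhiteBoxSwitch("spine1")]
Controller(fabric).provision_service(subscriber_vlan="201", destination="core_router")
print(fabric[0].flow_table)   # {'vlan=201': 'forward_to=core_router'}
```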

NFV, in turn, promotes the use of virtualization within the data plane; white box servers in the headend again provide the central platform on which to run more agile, software-based orchestration.
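
At its simplest, that orchestration is a placement problem: deciding which white box server hosts each virtualized function and scaling out when capacity runs short. The sketch below illustrates the idea with a first-fit placement; the server sizes, core counts and VNF names (such as a virtualized CMTS core per service group) are hypothetical.

```python
# Minimal orchestration sketch: first-fit placement of virtual network
# function (VNF) instances onto white box servers by free CPU cores.
# Server sizes, core counts and VNF names are hypothetical.

def place_vnfs(servers: dict, vnf_instances: list) -> dict:
    """Place each (vnf_name, cores_needed) onto the first server with room."""
    free = dict(servers)          # server name -> free cores
    placement = {}
    for vnf, cores in vnf_instances:
        host = next((s for s, c in free.items() if c >= cores), None)
        if host is None:
            raise RuntimeError(f"No capacity for {vnf}; add a server to scale out")
        free[host] -= cores
        placement[vnf] = host
    return placement

servers = {"whitebox-1": 32, "whitebox-2": 32}
vnfs = [("vCMTS-core-sg1", 12), ("vCMTS-core-sg2", 12), ("video-edge-cache", 16)]
print(place_vnfs(servers, vnfs))
# {'vCMTS-core-sg1': 'whitebox-1', 'vCMTS-core-sg2': 'whitebox-1',
#  'video-edge-cache': 'whitebox-2'}
```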

Software solutions enabled by hardware

A re-architected approach to coping with the rising demand for both data and agility in MSO networks delivers its benefits primarily through better integration and positioning of the software solution. It also requires fit-for-purpose hardware, however, to achieve a complete HERD result.

For an optimal, scalable implementation of virtualized racks in the headend, a high-density fiber management system should be coupled with optimized fiber cabling. A high-density cabinet with front or rear access that supports cross-connect and interconnect applications makes the SDN and NFV integrations easier to implement. Scalable hardware also means that, even with significant growth in a given MSO network, the headend can continue to cope with increased demands and requirements.

A future-ready answer

Adopting a HERD approach takes advantage of available software and hardware to enable future-ready networking. Capacity demands and evolving MSO technologies will continue to put pressure on existing legacy infrastructure, which must evolve to keep pace.

The transformation to a standards-based distribution network gives MSOs the opportunity to drive scale and efficiency toward a future-ready optical infrastructure. To deliver the performance and capabilities their customers expect, cable operators are migrating their networks now to be ready for the gigabit era. The high-speed capabilities on the horizon will demand the full network, and that transformation begins at the headend with a shift to virtualization.