Future of Hyperscale Data Centers

Hyperscale is at the core of how we navigate the digital world

By Brian Rhoney
Published: November 15, 2021

Google, Facebook, Amazon, and Microsoft – these are just a few of the global brands that have made hyperscale data centers a mainstay in our lives. These mega clusters of computing power are increasingly at the center of our digital ecosystem, and as demand for online services grows, so will the need for more data capacity to keep communication, commerce, and collaboration running.

Hyperscale data centers emerged to provide the scalable infrastructure needed to meet growing demand for the cloud and its services. These services range from productivity tools to online retail, gaming, and social media – all things now ingrained in our daily lives.

Many of these cloud providers started out by leveraging that compute footprint to support their own applications or platforms, but later branched out to provide hosting services within their data centers. This gave smaller businesses and startups a way to leverage cloud infrastructure without incurring the high capital cost of building a data center themselves, freeing them to focus on their product, application, or service. Today, the cloud is leveraged by companies of every size, from the smallest startups to the largest Fortune 100 firms, to offload business support applications.

As demand for cloud services like ecommerce, internet search, and social media quickly expands across the globe, hyperscale data centers need to both grow their capacity and become more efficient. Many of these services generate revenue through advertising or product placement on their platforms. The algorithms that determine which targeted, relevant content to show are becoming more sophisticated and require more powerful compute capacity to support machine learning applications within a high-performance computing (HPC) environment.

The next evolution of hyperscale data centers

Two emerging technology trends are quickly shaping the evolution of hyperscale data centers: edge computing and high-performance computing (HPC).

Traditional cloud infrastructure generally means a huge facility covering a large physical footprint and housing hundreds of thousands of servers. Edge computing will complement that cloud scale and usher in new capabilities for application developers. Rather than constructing large network hubs spread across hundreds of miles, edge computing brings more small-scale data center sites that process and compute data closer to where it’s generated. The resulting lower latency allows some applications to run right at the edge, sending less data over the backbone to the large cloud sites.

Edge computing promises to help reduce the growing amount of power required to meet the demands of consumer applications. Supplying power is both a capacity and a cost issue. Providers must find ways to maintain efficiency and scale while cutting overhead costs, which is why the hybrid approach of cloud plus edge compute will be an interesting balance.

Edge computing also allows for smarter data processing. Imagine the technology that powers your home doorbell system with video sensors. Through edge computing, your system can pre-process data so it knows what to alert you about and what to ignore. This pre-processing at the edge means less video has to be pushed back to the cloud for storage, reducing demand on internet backbone networks. With so many Internet of Things (IoT) applications in our homes creating data today, this ability to filter out the “less than useful” data at the edge is required to scale efficiently while maintaining our connected world.
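The filtering idea described above can be sketched in a few lines of code. This is a minimal, illustrative example only – it assumes a simplified pipeline where each video frame is a flat list of pixel values, and the change threshold is a made-up number, not any specific product’s logic:

```python
# Minimal sketch of edge pre-processing for a video doorbell:
# only frames that differ enough from the previous frame are
# kept for upload to the cloud; the rest are discarded at the
# edge, reducing backbone traffic and cloud storage.
# The frame format (flat list of pixels) and CHANGE_THRESHOLD
# are illustrative assumptions.

CHANGE_THRESHOLD = 0.10  # fraction of pixels that must change to matter


def frame_delta(prev, curr):
    """Return the fraction of pixels that changed between two frames."""
    changed = sum(1 for p, c in zip(prev, curr) if p != c)
    return changed / len(curr)


def filter_frames(frames):
    """Yield only the frames worth sending to the cloud."""
    prev = None
    for frame in frames:
        # Always keep the first frame; afterwards keep a frame only
        # when enough pixels changed (e.g. a person walked into view).
        if prev is None or frame_delta(prev, frame) >= CHANGE_THRESHOLD:
            yield frame
        prev = frame
```

A static scene (identical frames) would be dropped entirely at the edge, while a visitor approaching the door would trigger uploads – the same trade-off the article describes between local processing and backbone demand.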

As previously mentioned, the growth of high-performance computing in hyperscale data centers is another emerging trend. This method of computing isn’t new, but advances in HPC clusters now demand more operating capacity than ever before.

One well-known use of HPC is digital advertising. HPC gives businesses that rely on hyperscale data centers the data-driven storytelling necessary to reach consumers directly. As applications like this become more important to businesses’ bottom lines, so will the role of HPC.

Leaning into the future of hyperscale data centers

The evolution of hyperscale data centers is being driven by more sophisticated services and greater compute and storage demand, requiring data centers to find ways to put more computing capacity into smaller spaces. That space includes the data center itself as well as the many duct systems that interconnect data centers within a campus. Corning is leading the charge by supplying the connectivity necessary for compute clusters both at the edge and in the cloud.

Edge compute and HPC also require a new level of cabling, including new connectivity between servers and machines. We’re working closely with hyperscale data centers on these new platforms, as well as on the cables that link one building to another to expand capabilities across a larger footprint. Within hyperscale there is also a push toward smaller optical fibers, optical cables, and connectivity – something Corning has pioneered for decades.

Connecting cables to increase computing capacity generally requires highly skilled labor, which can be expensive and – in this job market – hard to come by. Corning makes installation more efficient by providing user-friendly, preterminated solutions that reduce the need for highly skilled labor in fiber installation.

Many hyperscale centers also face pressure to grow while working toward green, carbon-free technology. We help our customers align quality service with sustainability goals so they don’t have to sacrifice one for the other.

Corning offers a variety of hyperscale solutions that meet these growing trends and more. From EDGE™ Rapid Connect to Data Center Interconnects (DCI) to EDGE Distribution System, our team is here to work with you in determining the best fit for your company’s needs.


With over 17 years of experience at Corning, Brian Rhoney has held positions in product engineering, systems engineering, and product line management. He is currently the Director of Data Center Market Development, where his team is responsible for new product innovation. In 2005, Brian was recognized as the Dr. Peter Bark Inventor of the Year, and he also received his professional engineer’s license. Brian graduated from North Carolina State University with a Master of Science in Mechanical Engineering. He also holds an MBA from Lenoir-Rhyne University.


Interested in learning more?

Contact us today to learn how our end-to-end fiber optic solutions can meet your needs.
