Climate crisis: How data center operators can keep their cool

2022 was a year in which record temperatures were documented across the world, but you’d be forgiven for missing some of their wide-ranging impacts. One of them was a notable effect on the data center market, with facilities from both Google Cloud and Oracle shutting down due to cooling failures during an extreme UK heatwave.

For data center operators, unfortunately, this threat is only going to become more severe – not only as a result of the deepening impact of climate change, but also because of the growing energy consumption of data centers and the need to meet rising bandwidth demand. As data centers evolve to handle higher speeds such as 400G or 800G, they will need to deploy higher-speed transceivers that generally draw more power. And, as we know from pluggable transceivers, higher speed means more heat dissipation, which in turn means even more cooling.

These data rates may seem a long way off for some, but with the pressures of climate change already here, becoming more resilient to soaring temperatures is simply non-negotiable for data center operators. Let’s take a look at some of the ways in which that can be achieved.

Smart choice of location

If you are looking to relocate, or perhaps to invest in a new or additional data center, then exactly where you decide to build can make a real difference. In particular, it is key to avoid locations where the energy grid is already struggling to cope and the risk of outages already exists. While this won’t be an option for every deployment, it’s interesting to see some of the more extreme examples of how the surrounding environment can be used for natural cooling.

A few years ago, one of our customers, Green Mountain, built a data center inside a mountain on an island in a remote Norwegian fjord. The 21,000-square-meter facility, with six mountain halls and several dedicated customer rooms, utilizes 100 percent renewable hydroelectric power and the efficient cooling of the adjacent fjord to deliver a power usage effectiveness (PUE) of less than 1.2 – well below the industry average at the time.

Facebook even has a data center in Luleå, in northern Sweden near the Arctic Circle, that utilizes the region’s sub-zero air and sea temperatures and has a PUE of just 1.07. At the opposite end of the scale, some operators have built data centers in desert environments, such as eBay in Phoenix. Clever cooling systems and the benefits of the dry desert air make these locations surprisingly effective – and safe from certain natural disasters such as flooding.
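For context, PUE is simply the ratio of total facility power to the power delivered to the IT equipment, so a value of 1.0 would mean every watt goes to compute. The short Python sketch below – which assumes a purely hypothetical 1,000 kW IT load – shows how the PUE figures quoted above translate into cooling and other overhead power:

    # Sketch of the power usage effectiveness (PUE) metric.
    # PUE = total facility power / IT equipment power; 1.0 is the theoretical ideal.
    # The 1,000 kW IT load below is hypothetical and purely illustrative.

    def overhead_kw(pue: float, it_load_kw: float) -> float:
        """Power spent on cooling, distribution losses, lighting, etc."""
        return it_load_kw * (pue - 1.0)

    it_load_kw = 1000.0
    for label, pue in [("Green Mountain (PUE < 1.2)", 1.2),
                       ("Facebook Lulea (PUE 1.07)", 1.07)]:
        print(f"{label}: about {overhead_kw(pue, it_load_kw):.0f} kW of overhead "
              f"for {it_load_kw:.0f} kW of IT load")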

Achieving higher density

There are of course also a number of smaller, more accessible steps that data center operators can take to support cooling: choosing the right products to enable better airflow and reduce the need for cooling, and selecting equipment with higher port density to make the most efficient use of each rack.

Higher density can be achieved with Very Small Form Factor (VSFF) connectivity, for example. MDC and SN connector formats make it possible to connect directly from one high-speed transceiver to another, and simplify the insertion of individual connectors into various switches when breaking out from 400G to 4x100G. In addition, up to three MDC or SN duplex connectors fit into the footprint of a single LC duplex connector, which provides an enormous density advantage.

For operators struggling with limited space in server racks, implementing LC duplex connectivity with LC-to-MDC patch cords and compatible hardware is an effective approach. It can be used for active equipment or to add connectivity for further customers, generating additional revenue streams. Not only does this allow the LC duplex footprint to be retained at the transceiver end, the port density with MDC in modules or cassettes of the same size can also be increased by up to 3x – imagine 432 instead of 144 fibers in one rack unit.
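As a quick sanity check on that figure, the sketch below simply multiplies out the density gain using the counts from the example above (the 144-fiber LC duplex baseline per rack unit is taken from the text; actual counts will depend on the specific modules or cassettes used):

    # Back-of-the-envelope check of the VSFF density gain described above.
    # The 144-fiber LC duplex baseline per rack unit comes from the text;
    # actual counts depend on the specific modules or cassettes deployed.

    lc_duplex_fibers_per_ru = 144   # LC duplex connectivity in one rack unit
    vsff_density_factor = 3         # up to 3 MDC/SN duplex connectors per LC duplex footprint

    mdc_fibers_per_ru = lc_duplex_fibers_per_ru * vsff_density_factor
    print(f"LC duplex: {lc_duplex_fibers_per_ru} fibers per rack unit")
    print(f"MDC/SN:    {mdc_fibers_per_ru} fibers per rack unit")   # 432, as noted above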

When it comes to enabling better airflow, take Altice Portugal, for example. The telco was struggling with increased cable density as part of its growing network, which led to hotspots inside its server racks. Add climate change to the equation and these areas would only get hotter, with a potentially devastating impact on operations if not resolved. Fortunately, when the team upgraded from a duplex system to a 12-fiber based structured cabling system, the hotspot issues were no longer a concern, and the data center also benefited from improved efficiency and flexibility while being futureproofed for further upgrades.

Port breakout applications

In addition to optimizing cabling infrastructure, port breakout applications can also positively influence the power consumption of network components and transceivers. The power consumption of a 100G duplex transceiver for a QSFP-DD port is about 4.5 watts, while a 400G parallel optical transceiver operated in breakout mode as four ports of 100G each consumes only around three watts per port. This equates to savings of up to 30 percent, before counting the additional savings in air conditioning and switch chassis power consumption, and their contribution to space savings.
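Expressed as a quick calculation – a sketch using only the per-port wattages quoted above, with air conditioning and chassis savings left out – the per-port transceiver saving works out as follows:

    # Quick check of the per-port power saving quoted above for port breakout.
    # Transceiver wattages are the figures given in the text; air conditioning
    # and switch chassis savings are not included here.

    duplex_100g_watts = 4.5          # 100G duplex transceiver in a QSFP-DD port
    breakout_per_port_watts = 3.0    # 400G parallel transceiver run as 4 x 100G ports

    saving = 1 - breakout_per_port_watts / duplex_100g_watts
    print(f"Per-port transceiver saving: {saving:.0%}")   # roughly a third, in line with the figure above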

With port breakout applications, data center operators can also triple the port capacity of a switch card operating in a 10G or 25G network, and no conversion modules or harnesses are required – meaning no extra connector pairs that could add insertion loss.

Prevention better than cure

Of course, there are many thermal management tools and solutions out there for data centers, from hot- and cold-aisle layouts to pumped refrigerants, liquid cooling, and edge and cloud technology. The question is – do they go far enough to deal with climate change? Further cooling technologies that utilize artificial intelligence and robotic sensors are on the horizon, so innovation in this space is certainly ongoing.

Unfortunately, though, there is no time to waste when it comes to coping with climate pressures, particularly when they are combined with increasing demand for higher bandwidths. Data center operators simply can’t afford to wait for a heat-related outage before taking action. As the saying goes, prevention is always better than cure – and there are changes, big and small, that data center operators can make now to increase their resilience.

Cindy Ryborz

Cindy Ryborz is the EMEA (Europe, Middle East, Africa) Marketing Manager for Data Centers. In this role, she oversees online and offline marketing activities for the enterprise, colocation/multi-tenant, and hyperscale data center segments across the region.

Cindy has over 14 years of experience at Corning, working in customer care and marketing. Joining the marketing team in June 2012, she took on the execution of Corning’s Local Area Networks strategy across the EMEA region. She then moved into the strategic Marketing Manager role for In-Building Networks and Local Area Networks in 2016, before moving into her current role in 2018.

Cindy holds a Master of Arts degree from the Humboldt University in Berlin, Germany.

Cindy Ryborz
Corning Optical Communications
Last update: April 2023