2024 Data Center Trends and Industry Predictions | AI and Edge Computing | Corning

In 2024, artificial intelligence will continue to drive data center power and processing needs and will enable new business models for the data center market segment.

Michael Crook 
Published: December 14, 2023

Data centers are the unseen engines behind everything from everyday transactions like mobile banking and social media posting to cutting-edge emerging technologies like artificial intelligence (AI) and fully immersive gaming. Operators are in the midst of an exciting time that presents both challenges and opportunities. As more applications move to the cloud, and as the AI boom shifts gears, the power, cooling, and security needs of data center operations change as well.

Looking forward to 2024, here are a few major trends we think data center operators should keep their eyes on:

1. AI will continue to drive activity and innovation

Last year, we noted that data center space must evolve to meet the power and density envelope of AI and ML, which can be three times higher than traditional envelopes. The specific processing needs of large language models, or LLMs, require huge amounts of fiber connectivity and new methods for powering and cooling networking gear that many data centers weren’t set up to facilitate.

These needs haven’t gone away, and AI will still be a driver of many data center trends in 2024. We will also continue to see AI and machine learning (ML) leveraged within data centers themselves, particularly at the hyperscale level, to better understand things like power usage and resource management. However, there will be another major dynamic: as companies continue to build networks to support their LLMs, they will also need to build new inference networks, where predictions are made by analyzing new data sets. These new networks can require higher throughput and lower latency.

The vast majority of the work in AI up to this point has been in building these LLMs: models like GPT-3 and GPT-4 from OpenAI, LLaMA from Meta, and Google’s PaLM 2. Building them requires a specific kind of processing power to run the billions of calculations involved as these models are developed – essentially “taught” what they need to know. Now, most of the major players in the space are starting to build out specific applications using these models. This will again change the power and processing needs.

What is one possible downstream consequence of this shift from development to inference? We could see the long-awaited emergence of edge computing as a major factor. As specific AI applications are developed, companies will be looking for processing power closer to where the application is being utilized. That means smaller data centers, keeping the heavy compute closer to where it’s actually being used (i.e. manufacturing campuses, universities, hospitals, etc.), rather than centralized campuses. And that ties in directly with the next trend.

2. Multi-tenant data center spaces will have their moment

Generally, hyperscale operators design and build the largest data center campuses. But with the increasing power and land requirements necessary to support AI, ML, and other emerging applications, hyperscalers may need to look into alternative methods for building facilities.

Here’s where multi-tenant data centers (MTDCs) have an opportunity. In many ways, these operators are real estate companies with technical competencies: they have the land, and they know how to supply the power and cooling. So, for hyperscalers that need access to facilities in geographic regions where land and power are constrained, an MTDC can be a good option.

Of course, enterprise-level customers also want to take advantage of these new emerging technologies. However, building facilities is a major capital investment. We’ve seen examples where MTDCs and other new “cloud” operators offer “AI as a Service,” where a dedicated server space is leased out to an organization, large or small, to run AI workloads. 

MTDCs will also play a role in the rise of edge computing, as companies will be looking for compute power closer to where the application is being deployed.

3. Advancements in optical transceivers help data center operators maximize space

Data centers are being asked to produce exponentially more processing power, and transmit more data, faster, as new technologies gain adoption. Facility operators know meeting that need by simply adding more fiber optic interconnects is an unsustainable strategy, given land and power constraints.

Especially at the hyperscale level, we've started to see operators deploy 800G optical transceivers to support these applications, and that will continue – we will likely see some 1.6 terabit (1.6T) prototypes in 2024. High-performance compute applications like AI and ML are driving 800G optical deployments. The latest network switches used to interconnect AI servers in a data center are well equipped to support 800G interconnects. In many cases, the transceiver ports on these switches operate in breakout mode, where the 800G circuit is broken into two 400G or multiple 100G circuits. This enables data center operators to increase the connectivity capability of the switch and interconnect more servers. As advancements in optical transceivers allow more data to be placed onto each optical wavelength and fiber, we’ll also see higher-rate optical transceivers that operate with fewer fiber connections, benefiting our data center customers by reducing cable congestion within a rack and improving airflow.
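The breakout arithmetic above can be sketched in a few lines. This is illustrative only: the breakout modes listed and the 64-port switch figure are assumptions for the example, not vendor specifications.

```python
# Illustrative breakout arithmetic; the modes and the 64-port figure
# are assumptions for the example, not vendor specifications.
BREAKOUTS = {
    "1x800G": (1, 800),  # no breakout
    "2x400G": (2, 400),
    "8x100G": (8, 100),
}

def server_links(switch_ports: int, mode: str) -> int:
    """Server-facing links a switch offers under a given breakout mode."""
    circuits_per_port, _rate_gbps = BREAKOUTS[mode]
    return switch_ports * circuits_per_port

# A hypothetical 64-port 800G switch: more, slower circuits per port
# means more servers can be attached to the same switch.
for mode in BREAKOUTS:
    print(f"{mode}: {server_links(64, mode)} links")
# 1x800G: 64 links, 2x400G: 128 links, 8x100G: 512 links
```

The trade-off is simply circuits-per-port times port count: breaking each 800G port into eight 100G circuits lets the same switch front eight times as many servers.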

For example, a typical multimode 400G-SR8 optical transceiver has 16 fiber connections and is a good choice for short-reach applications. However, advancements in optical technology allow more data per fiber and per wavelength: 400G-SR4 optical transceivers are entering the market that cut the number of fibers to eight. These and other new optical transceivers go a long way toward helping data centers meet the rising demand for data.
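The fiber savings follow from simple arithmetic: each parallel-optic lane uses one transmit and one receive fiber, so halving the lane count (by doubling the per-lane data rate) halves the fibers per port. The helper below is just that arithmetic, shown for the SR8/SR4 example above:

```python
def fibers_per_port(lanes: int) -> int:
    """Parallel optics use one transmit + one receive fiber per lane."""
    return lanes * 2

# 400G-SR8: 8 lanes of 50G each -> 16 fibers per port.
# 400G-SR4: 4 lanes of 100G each -> 8 fibers per port.
print(fibers_per_port(8))  # 16
print(fibers_per_port(4))  # 8
```

Across hundreds of ports in a row of racks, that halving of fiber count is what translates into less cable congestion and better airflow.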

Related to this trend are advancements in miniaturization: solutions like very-small-form-factor connectors will help data center operators do more with limited space.

In general, it will behoove CIOs and CTOs to stay on top of these emerging trends to ensure data centers can support emerging business processes and new use cases. There will be a temptation to patch individual component solutions together just to keep up with rapidly advancing technology. But a comprehensive engineering solution that meets the current – and future – data needs of customers will always be a stronger strategy.

Don't let your knowledge 'lag' behind: follow the Signal Blog for the latest fiber optic trends, topics, and thought leadership.

Michael Crook

Michael Crook is a Data Center Market Development Manager. He supports our hyperscale, multi-tenant, and enterprise customers with new fiber optic innovations and commercial solutions. With over 15 years of experience, Michael has deep expertise in designing and building fiber optic network infrastructures for data center and carrier environments.

Interested in learning more?

Contact us today to learn how our end-to-end fiber optic solutions can meet your needs.
