To help tech companies overcome power constraints facing the industry’s massive AI data center buildout, Arm executive Mohamed Awad says his company is introducing a new standard for building chiplet-based silicon products that can maximize performance per watt.
The leader of Arm’s data center business said there is “no doubt about the long-run need” for the massive AI data center buildout occurring among the largest and most influential tech companies in the world, including his.
“If you think about it from a long-range perspective, there’s no doubt that all this is going to be required because we’re just in its infancy, and we’re still learning and developing,” said Mohamed Awad, senior vice president and general manager of Arm’s infrastructure line of business, in an interview with CRN last Thursday.
[Related: Analysis: How Two Big Decisions Helped AMD Win The OpenAI Deal]
The last month has seen several gargantuan AI data center deals, including OpenAI’s agreements to build at least 10 gigawatts worth of infrastructure using Nvidia GPU platforms and 6 gigawatts of server farms using AMD GPU platforms.
This week alone saw OpenAI’s deal with Broadcom to design 10 gigawatts worth of custom AI accelerator chips, Oracle’s 50,000-GPU deal with AMD, Google’s new $9 billion investment in U.S. AI infrastructure and a $40 billion deal to acquire Aligned Data Centers by a consortium that includes Nvidia, Microsoft and Elon Musk’s xAI.
A senior executive at Houston-based systems integrator Mark III Systems said demand for such data centers is increasingly driven by inference, thanks to the growing capabilities of agentic AI applications. That marks a shift from a few years ago, when training was the main source of growth in the AI data center market.
“The scale of what’s being needed right now, especially when you’re talking about these agentic models, these larger reasoning models, they’re large, so as more organizations want to take advantage of them versus train your own foundation models, they’re going to require similar platforms,” said Andy Lin, CTO and vice president of strategy and innovation at Mark III Systems, which has been named a top Nvidia partner for multiple years.
But Awad, like others in the industry, acknowledged the increasingly apparent power constraints standing in the way of the growing number of massive AI data centers being planned. These projects include the $500 billion Stargate joint venture Arm is participating in with OpenAI, Oracle, investment firm MGX and its largest investor, SoftBank Group.
“In some ways, money is free, and obviously I don’t mean it’s free, but people can get money. It’s not about money. It’s about how much power can I get, and can I meet that demand? And the reality is, you just can’t build the data centers fast enough,” he said.
To that end, Arm made an announcement on Tuesday at the 2025 OCP Global Summit that Awad said will address these energy consumption issues by giving tech companies a new standard for building custom, chiplet-based silicon products that can maximize performance per watt.
Arm announced that it is contributing the Foundation Chiplet System Architecture specification to the Open Compute Project (OCP), the Meta-founded group that sets hardware and infrastructure standards for hyperscalers building data centers. The world’s largest hyperscalers—Amazon, Microsoft and Google—use Arm technology for custom data center chips, with Amazon being the most prominent and prolific user. Arm’s technology is also found within Nvidia’s latest rack-scale AI platforms that are being deployed in many new AI data centers.
The new specification is based on the Arm Chiplet System Architecture the company introduced last year as a way to standardize how chiplets are partitioned and communicate with each other on chiplet-based silicon packages that rely on Arm CPU cores.
“What we’ve decided is that the wider industry can benefit from this. This shouldn’t be just tied to Arm,” said Awad. “There’s certainly an Arm-specific component to it, but we think that the principles that we’re employing are actually much broader than that.”
Awad said contributing the Foundation Chiplet System Architecture specification to OCP will create opportunities for companies to build chiplet-based products based on any instruction set architecture. This includes x86, whose main users are AMD and Intel, the latter of which is making a renewed push to design custom chips.
“There are going to be aspects of the system which aren’t necessarily tied to Arm,” he said. “If you’re building an I/O chiplet, which has the latest PCIe version, that’s not necessarily something that we need to control.”
This move will give companies greater freedom in designing chiplet-based silicon products that can pack as much performance as possible within a set power envelope at a time when AI data centers are demanding an unprecedented amount of energy, according to Awad.
“In that environment where performance per watt matters a lot, what we’re seeing more and more is that big technology providers who are building data centers are specifying all aspects of the system to eke out as much performance as possible,” he said.
But while the specification is open to any instruction set architecture, Awad said he expects Arm to benefit by offering competitive compute options such as its Neoverse CPU core designs, which require substantially less investment than building a completely custom product while providing far greater customization than off-the-shelf CPUs.
The executive framed the choice facing companies with compute needs this way: “On one end, I can go off and spend a billion or billions of dollars to go just take an [Arm] architectural license and build my own silicon or use some new architecture and have to stand up a whole ecosystem, which is incredibly expensive.
“On the other end of the spectrum, they could go buy a piece of silicon from Ampere or Nvidia—Nvidia’s Grace CPU, for example. Or they could go buy an x86 CPU,” he added.
In between these two options, according to Awad, are Arm’s off-the-shelf CPU cores and adjacent technologies, which companies can license to design their own products. The company has also stood up its own ecosystem program, called Arm Total Design, which connects those who have compute needs with chip designers, chip manufacturers and other technology providers that can aid with custom silicon development.
“It’s a continuum, and each point along that trajectory has a different level of customization associated with it and a different level of cost. And it’s our job, from an ecosystem perspective, to help enable as much customization as possible but lower that cost,” he said, adding that cost also means how quickly a company can introduce a product to the market.
It’s for these reasons, along with Arm’s existing footprint of more than 300 billion devices—which includes server farms built by the world’s largest hyperscalers—that Awad feels “very comfortable” with Arm’s long-term prospects in the ongoing AI data center buildout.
“Now, obviously, we look to constantly ensure that we’re actively engaged, but this is a very big market, and there is a lot happening. And I think the breadth of our offering and the breadth of our footprint bodes well,” he said.