Sam Altman’s personal endorsement of AMD’s upcoming data center GPU, which CEO Lisa Su says will best Nvidia’s fastest AI chips next year, serves as a major boost for the company. Its rival, Nvidia, owes a good deal of the riches it has made over the past few years to OpenAI.
AMD CEO Lisa Su called OpenAI a customer and “very early design partner” for the chip designer’s Instinct MI450 GPU, which she said will surpass Nvidia’s fastest AI chips next year.
Near the end of her Advancing AI keynote in San Jose, Calif., on Thursday, Su disclosed that the ChatGPT developer has given the company “significant feedback on the requirements for next-generation training and inference” for the MI450.
[Related: The Biggest AMD Advancing AI News: From MI500 GPU to ROCm Enterprise AI]
She then brought OpenAI CEO and co-founder Sam Altman on stage, who said he is “extremely excited for the MI450.”
“The memory architecture is great for inference. I believe it can be an incredible option for training as well,” Altman told Su.
“When you first started telling me what you’re thinking about for the specs, I was like, there’s no way. That just sounds totally crazy. It’s too big. But it’s really been so exciting to see you all get close to delivery on this. I think it’s going to be an amazing thing,” he added.
Altman’s personal endorsement of AMD’s upcoming data center GPU, the first to power a server rack designed by AMD, served as a major boost for the company. Its rival, Nvidia, owes a good deal of the riches it has made over the past few years to OpenAI, which built ChatGPT using Nvidia GPUs and helped kick off insatiable demand for such products.
AMD also received on-stage endorsements for its Instinct GPUs from executives at Microsoft, Meta, Cohere and Oracle Cloud Infrastructure on Thursday.
As AMD revealed on Thursday, the MI400 series will pack 432 GB of HBM4 memory, which it said will give the GPU 50 percent more memory capacity and bandwidth than Nvidia’s Vera Rubin platform while offering roughly the same compute performance.
Seventy-two of AMD’s MI450 GPUs will go into its “Helios” server rack, which Su said the company “designed from the ground up as a rack-scale solution.”
“When Helios launches in 2026, we believe it’ll set a new benchmark for AI at scale,” she said.
Altman said OpenAI is facing a substantial need for more computing power due to its shift to reasoning models, which has “put pressure on model efficiency and long, complex rollouts,” in part because of the lengthy responses generated by such models.
“We need tons of compute, tons of memory, tons of CPUs as well. And our infrastructure ramp over the last year and what we’re looking over the next year has just been a crazy, crazy thing to watch,” Altman said.
Su said AMD has collaborated with OpenAI over the last few years, particularly in conjunction with Microsoft Azure, which has been an important cloud partner to both companies. That relationship eventually evolved into OpenAI becoming a design partner for what is now known as the MI450 GPU series.
“One of the things that really sticks in my mind is when we sat down with your engineers, you were like, ‘Whatever you do, just give us lots and lots of flexibility because things change so much.’ And that framework of working together has been phenomenal,” she said.