AMD Packs 432 GB Of HBM4 Into Instinct MI400 GPUs For Double-Wide AI Racks

By CRN
June 12, 2025


Packing 432 GB of HBM4 memory, the Instinct MI400 series GPUs due next year will provide 50 percent more memory capacity and bandwidth than Nvidia’s upcoming Vera Rubin platform. They will power rack-scale solutions that offer the same GPU density as Vera Rubin’s rack.

AMD revealed that its Instinct MI400-based, double-wide AI rack systems coming next year will provide 50 percent more memory capacity and bandwidth than Nvidia’s upcoming Vera Rubin platform while offering roughly the same compute performance.

The chip designer shared the first details Thursday during AMD CEO Lisa Su’s keynote at its Advancing AI event in San Jose, Calif. The company also shared many more details about its MI350X GPUs and the corresponding products that will hit the market later this year.


Andrew Dieckmann, corporate vice president and general manager of data center GPU at AMD, said in a Wednesday briefing that the MI400 series “will be the highest-performing AI accelerator in 2026 for both large-scale training as well as inferencing.”

“It will bring leadership performance due to the memory capacity, memory bandwidth, scale-up bandwidth and scale-out bandwidth advantages,” he said of the MI400 series and the double-wide “Helios” AI server rack it will power.

AMD said the MI400 GPU series, set to launch in 2026, will be capable of 40 petaflops of 4-bit floating point (FP4) performance and 20 petaflops of 8-bit floating point (FP8) performance, double that of the flagship MI355X landing this year.

Compared to the MI350 series, the MI400 series will increase memory capacity to 432 GB based on the HBM4 standard, which will give it a memory bandwidth of 19.6 TBps, more than double that of the previous generation. The MI400 series will also sport a scale-out bandwidth capacity of 300 GBps per GPU.

AMD plans to pair the MI400 series with its next-generation EPYC “Venice” CPU and Pensando “Vulcano” NIC to power the Helios AI rack.

The Helios rack will consist of 72 MI400 GPUs, giving it 31 TB of HBM4 memory capacity, 1.4 PBps of memory bandwidth and 260 TBps of scale-up bandwidth. This will make it capable of 2.9 exaflops of FP4 and 1.4 exaflops of FP8. The rack will also have a scale-out bandwidth of 43 TBps.
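
Those rack-level figures follow from multiplying the per-GPU MI400 specs quoted above by the 72 GPUs in a Helios rack. A minimal Python sketch of the arithmetic, using only the numbers in this article (the small gaps versus AMD’s quoted totals come from rounding):

# Per-GPU MI400 figures quoted in this article
gpus_per_rack = 72
hbm4_per_gpu_gb = 432          # GB of HBM4 per GPU
mem_bw_per_gpu_tbps = 19.6     # TB/s of memory bandwidth per GPU
fp4_per_gpu_pflops = 40        # petaflops of FP4 per GPU
fp8_per_gpu_pflops = 20        # petaflops of FP8 per GPU

# Helios rack totals (72 x MI400)
print(gpus_per_rack * hbm4_per_gpu_gb / 1_000)       # 31.1 TB HBM4 (AMD quotes 31 TB)
print(gpus_per_rack * mem_bw_per_gpu_tbps / 1_000)   # 1.41 PB/s memory bandwidth (AMD quotes 1.4 PBps)
print(gpus_per_rack * fp4_per_gpu_pflops / 1_000)    # 2.88 exaflops FP4 (AMD quotes 2.9)
print(gpus_per_rack * fp8_per_gpu_pflops / 1_000)    # 1.44 exaflops FP8 (AMD quotes 1.4)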

Compared to Nvidia’s Vera Rubin platform, which is set to launch next year, AMD said the Helios rack will come with the same number of GPUs, the same scale-up bandwidth and roughly the same FP4 and FP8 performance.

At the same time, the company said Helios will offer 50 percent greater HBM4 memory capacity, memory bandwidth and scale-out bandwidth.

In a rendering of Helios, the rack appeared to be wider than Nvidia’s rack-scale solutions such as the GB200 NVL72 platform.

Dieckmann said Helios is a double-wide rack, which AMD and its key partners felt was the “right design point between complexity [and] reliability.”

The executive said the rack’s wider size was a “trade-off” to achieve Helios’ performance advantages, including the 50 percent greater memory bandwidth.

“That trade-off was deemed to be a very acceptable trade-off because these large data centers tend not to be square footage constrained. They tend to be megawatts constrained. And so we think this is the right design point for the market based upon what we’re delivering,” Dieckmann said.


