During the company’s two-hour “Advancing AI” event, held live in Silicon Valley and live-streamed on YouTube, CEO Lisa Su asserted that “AI is absolutely the No. 1 priority at AMD.”
She also said that AI is both “the future of computing” and “the most transformative technology of the last 50 years.”
AMD is leading its AI charge with the Instinct MI300 Series accelerators, designed for cloud and enterprise AI and HPC workloads. The accelerators combine high-density GPU compute, large and fast HBM3 memory, and 3D packaging built on the 4th Gen AMD Infinity Architecture.
AMD is also relying heavily on cloud, OEM and software partners that include Meta, Microsoft and Oracle Cloud. Another partner, Supermicro, announced additions to its H13 generation of accelerated servers powered by 4th Gen AMD EPYC CPUs and AMD Instinct MI300 Series accelerators.
The AMD Instinct MI300X is based on the company’s CDNA 3 architecture. It packs 304 GPU compute units and 192GB of HBM3 memory with a peak memory bandwidth of 5.3TB/sec. It’s available as 8 GPUs on an OAM baseboard.
The accelerator connects to the host over the latest bus, PCIe Gen 5, with 128GB/sec of peak bandwidth.
For AI, AMD rates total theoretical peak FP8 performance at 20.9 PFLOPS. For HPC, peak double-precision matrix (FP64) performance is rated at 1.3 PFLOPS.
Compared with competing products, the AMD Instinct MI300X delivers nearly 40% more compute units, 1.5x more memory capacity, and 1.7x more peak theoretical memory bandwidth, AMD says.
AMD is also offering a full system it calls the AMD Instinct Platform. This packs 8 MI300X accelerators to offer up to 1.5TB of HBM3 memory capacity. And because it’s built on the industry-standard OCP design, the AMD Instinct Platform can easily be dropped into existing servers.
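The platform’s headline memory figure follows directly from the per-GPU specs. As a quick arithmetic sanity check (a sketch assuming eight MI300X accelerators, each with 192GB of HBM3 and 5.3TB/sec of bandwidth, per AMD’s figures):

```python
# Sanity-check the AMD Instinct Platform memory totals from per-GPU specs.
# Assumption: 8 MI300X accelerators, 192 GB HBM3 and 5.3 TB/s each (AMD figures).

GPUS_PER_PLATFORM = 8
HBM3_PER_GPU_GB = 192          # HBM3 capacity per MI300X, in GB
BANDWIDTH_PER_GPU_TBS = 5.3    # peak memory bandwidth per MI300X, in TB/s

total_memory_tb = GPUS_PER_PLATFORM * HBM3_PER_GPU_GB / 1024   # binary TB
total_bandwidth_tbs = GPUS_PER_PLATFORM * BANDWIDTH_PER_GPU_TBS

print(f"Total HBM3 capacity: {total_memory_tb} TB")            # 1.5 TB
print(f"Aggregate peak bandwidth: {total_bandwidth_tbs:.1f} TB/s")
```

The 1.5TB capacity AMD quotes is simply the eight 192GB stacks taken together; the aggregate bandwidth figure is per-GPU bandwidth summed the same way.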
The AMD Instinct MI300X is shipping now. So is a new Supermicro 8-GPU server with this new AMD accelerator.
AMD describes its new Instinct MI300A as the world’s first data-center accelerated processing unit (APU) for HPC and AI. It combines 228 AMD CDNA 3 GPU compute units, 24 ‘Zen 4’ CPU cores, and 128GB of HBM3 memory with a memory bandwidth of up to 5.3TB/sec.
AMD says the Instinct MI300A APU gives customers an easily programmable GPU platform, high-performing compute, fast AI training, and impressive energy efficiency.
The energy savings are attributed to the APU design itself, which places CPU, GPU and memory in a single package. Because HPC and AI workloads are both data- and resource-intensive, a more efficient system means users can do the same or more work with less hardware.
As part of its push into AI, AMD intends to maintain an open software platform. During CEO Su’s presentation, she said that openness is one of AMD’s three main priorities for AI, along with offering a broad portfolio and working with partners.
Victor Peng, AMD’s president, said the company has set a goal of creating a unified AI software stack. As part of that effort, it continues to enhance ROCm, its software stack for GPU programming. The latest version, ROCm 6, will ship later this month, Peng said.
AMD says ROCm 6, running on MI300 Series accelerators, can deliver approximately an 8x increase in Llama 2 text-generation performance compared with previous-generation hardware and software.
ROCm 6 also adds support for several new key features for generative AI. These include FlashAttention, HIPGraph and vLLM.
AMD is also leveraging open-source AI software models, algorithms and frameworks such as Hugging Face, PyTorch and TensorFlow. The goal: simplify the deployment of AMD AI solutions and help customers unlock the true potential of generative AI.
- Get tech specs: AMD Instinct MI300X platform
- Learn more: AMD ROCm software
- Read a product brief: Supermicro and AMD deliver rack-scale AI and HPC solutions with new AMD Instinct MI300 series accelerators (PDF)
- Check out: Supermicro servers with AMD accelerators
- Watch the YouTube video: AMD’s “Advancing AI” event (2:10:08)