Supermicro didn’t waste any time.
The same day that AMD introduced its new AMD Instinct MI300 series accelerators, Supermicro debuted three GPU rackmount servers built around them: one uses the MI300X GPU, and two use the MI300A accelerated processing unit (APU), which combines CPU and GPU on one package. One of the three new systems also offers energy-efficient liquid cooling.
Here’s a quick look, plus links for more technical details:
Supermicro 8-GPU server with AMD Instinct MI300X: AS-8125GS-TNMR2
This big 8U rackmount system is powered by a pair of AMD EPYC 9004 Series CPUs and eight AMD Instinct MI300X accelerators. It’s designed for training and inference on massive AI models, with a total of 1.5TB of HBM3 memory per server node.
The system also supports 8 high-speed 400G networking cards, which provide direct connectivity for each GPU; 128 PCIe 5.0 lanes; and up to 16 hot-swap NVMe drives.
It’s an air-cooled system with 5 fans up front and 5 more in the rear.
Quad-APU systems with AMD Instinct MI300A accelerators: AS-2145GH-TNMR and AS-4145GH-TNMR
These two rackmount systems are aimed at converged HPC-AI and scientific computing workloads.
Both servers are powered by four AMD Instinct MI300A accelerators, each of which combines CPU and GPU in a single APU. That gives each server a total of 96 AMD ‘Zen 4’ cores, 912 compute units, and 512GB of HBM3 memory. In addition, PCIe 5.0 expansion slots allow for high-speed networking, including RDMA directly to APU memory.
Supermicro says the liquid-cooled 2U system can cut data-center energy costs by more than 50%. Another difference: The air-cooled 4U server provides more storage and room for 8 to 16 additional PCIe accelerator cards.
- Visit the AMD microsite: Empowering advancement in AI and HPC
- Read a product brief: Supermicro and AMD deliver rack-scale AI and HPC solutions with new AMD Instinct MI300 series accelerators (PDF)