
AMD Instinct MI300A blends GPU, CPU for super-speedy AI/HPC


CPU or GPU for AI and HPC? You can get the best of both with the AMD Instinct MI300A.


The AMD Instinct MI300A is the world’s first data center accelerated processing unit (APU) for high-performance computing and AI. It earns that distinction by integrating both CPU and GPU cores on a single package.

That makes the AMD Instinct MI300A highly efficient at running both HPC and AI workloads. It also makes the MI300A powerful enough to accelerate training the latest AI models.

Introduced about a year ago, the AMD Instinct MI300A accelerator is shipping soon. So are two Supermicro servers—one a liquid-cooled 2U system, the other an air-cooled 4U—each powered by four MI300A units.

Under the Hood

The technology of the AMD Instinct MI300A is impressive. Each MI300A integrates 24 AMD ‘Zen 4’ x86 CPU cores with 228 AMD CDNA 3 high-throughput GPU compute units.

You also get 128GB of unified HBM3 memory, which presents a single shared address space to the CPU and GPU cores, all interconnected by the coherent 4th Gen AMD Infinity architecture.

Also, the AMD Instinct MI300A is designed to be used in a multi-unit configuration. This means you can connect up to four of them in a single server.

To make this work, each APU has 1 TB/sec. of bidirectional connectivity through eight 128 GB/sec. AMD Infinity Fabric interfaces. Four of the interfaces are dedicated Infinity Fabric links. The other four can be flexibly assigned to deliver either Infinity Fabric or PCIe Gen 5 connectivity.

In a typical four-APU configuration, six interfaces are dedicated to inter-GPU Infinity Fabric connectivity. That supplies a total of 384 GB/sec. of peer-to-peer connectivity per APU. One interface is assigned to support x16 PCIe Gen 5 connectivity to external I/O devices. In addition, each MI300A includes two x4 interfaces to storage, such as M.2 boot drives, plus two USB Gen 2 or 3 interfaces.
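
To see how those numbers hang together, here’s a quick back-of-the-envelope calculation in Python. The figures come straight from the paragraphs above; the peer-to-peer line is one plausible reading, since a 128 GB/sec. bidirectional link carries 64 GB/sec. in each direction:

```python
# Back-of-the-envelope math for the MI300A fabric figures quoted above.
links_per_apu = 8
link_bw_bidir = 128                       # GB/sec, bidirectional, per Infinity Fabric link

total = links_per_apu * link_bw_bidir     # 1024 GB/sec: the ~1 TB/sec figure
print(f"Aggregate per-APU bandwidth: {total} GB/sec")

# Six peer-to-peer links at 64 GB/sec per direction yields the 384 GB/sec figure:
print(f"Peer-to-peer per APU: {6 * (link_bw_bidir // 2)} GB/sec per direction")
```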

Converged Computing

There’s more. The AMD Instinct MI300A was designed to handle today’s convergence of HPC and AI applications at scale.

To meet the increasing demands of AI applications, the APU is optimized for widely used data types. These include FP64, FP32, FP16, BF16, TF32, FP8 and INT8.

The MI300A also supports native hardware sparsity for efficiently gathering data from sparse matrices. This saves power and compute cycles, and it also lowers memory use.
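
Frameworks expose most of these data types directly. Here’s a minimal, CPU-runnable PyTorch sketch (PyTorch supports AMD GPUs via the ROCm backend) showing how reduced precision shrinks each element, plus the idea behind sparse storage. It’s illustrative only, not MI300A-specific code:

```python
import torch

# Smaller data types mean less memory traffic and more math per cycle.
for dtype in (torch.float64, torch.float32, torch.float16,
              torch.bfloat16, torch.int8):
    t = torch.ones(4, dtype=dtype)
    print(f"{dtype}: {t.element_size()} bytes per element")

# Sparse storage keeps only non-zero values: conceptually what hardware
# sparsity support exploits to save memory, power and compute.
dense = torch.tensor([[0.0, 3.0], [0.0, 0.0]])
print(dense.to_sparse())  # prints indices + values only
```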

Another element of the design aims at high efficiency by eliminating time-consuming data copy operations. The MI300A can offload tasks easily between the CPU and GPU. And it’s all supported by AMD’s ROCm 6 open software platform, built for HPC, AI and machine learning workloads.
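
In a framework like PyTorch, that offload is a one-liner; on the ROCm backend, AMD GPUs are addressed through the familiar "cuda" device name. A minimal sketch follows. (Illustrative only: on discrete GPUs the to() call implies a copy, which is exactly the cost the MI300A’s unified memory is designed to remove.)

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"  # ROCm GPUs show up as "cuda"

x = torch.randn(1024, 1024)   # created on the CPU
x = x.to(device)              # offload to the GPU (a copy on discrete cards)
y = (x @ x).relu()            # GPU-side compute
result = y.cpu()              # hand the result back for CPU-side logic
```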

Finally, virtualized environments are supported on the MI300A through SR-IOV to share resources with up to three partitions per APU. SR-IOV—short for single-root, input/output virtualization—is an extension of the PCIe spec. It allows a device to separate access to its resources among various PCIe functions. The goal: improved manageability and performance.
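
On Linux, SR-IOV virtual functions are typically enabled through a generic sysfs attribute. Here’s a hypothetical sketch; the PCI address is made up, and the exact workflow for partitioning an MI300A may differ:

```python
# Hypothetical: enable 3 SR-IOV virtual functions on a PCIe device via sysfs.
# Requires root; the device address below is illustrative only.
dev = "/sys/bus/pci/devices/0000:03:00.0"

with open(f"{dev}/sriov_numvfs", "w") as f:
    f.write("3")   # up to three partitions per APU, per the text above
```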

Fun fact: The AMD Instinct MI300A is a key design component of the El Capitan supercomputer recently dedicated by Lawrence Livermore National Laboratory. This system can process over two quintillion (2 x 10^18) calculations per second.

Supermicro Servers

As mentioned above, Supermicro now offers two server systems based on the AMD Instinct MI300A APU. They’re 2U and 4U systems.

These servers both take advantage of AMD’s integration features by combining four MI300A units in a single system. That gives you a total of 912 GPU compute units, 96 CPU cores and 512GB of HBM3 memory per server.

Supermicro says these systems can push HPC processing to exascale levels, meaning they’re very, very fast. “Flops” is short for floating-point operations per second, and “exa” denotes 10^18, a 1 with 18 zeros after it. That’s fast.

Supermicro’s 2U server (model number AS -2145GH-TNMR-LCC) is liquid-cooled and aimed at HPC workloads. Supermicro says its direct-to-chip liquid-cooling technology lowers TCO, delivering data center energy cost savings of more than 51%. The company also cites a 70% reduction in fan power usage, compared with air-cooled solutions.

If you’re looking for big HPC horsepower, Supermicro’s got your back with this 2U system. The company’s rack-scale integration is optimized with dual AIOM (advanced I/O modules) and 400G networking. This means you can create a high-density supercomputing cluster with as many as 21 of Supermicro’s 2U systems in a 48U rack. With each system combining four MI300A units, that would give you a total of 84 APUs.
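
Here’s how that rack-scale density tallies up, using the per-APU figures from earlier in this article (simple, illustrative arithmetic):

```python
# Rack-density arithmetic from the figures above (illustrative).
systems_per_rack = 21
apus_per_system = 4

apus = systems_per_rack * apus_per_system   # 84 APUs in a 48U rack
print(f"{apus} APUs, {apus * 228} GPU compute units, "
      f"{apus * 24} CPU cores, {apus * 128} GB of HBM3 per rack")
# -> 84 APUs, 19152 compute units, 2016 cores, 10752 GB of HBM3
```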

The other Supermicro server (model number AS -4145GH-TNMR) is an air-cooled 4U system, also equipped with four AMD Instinct MI300A accelerators, and it’s intended for converged HPC-AI workloads. The system’s mechanical airflow design keeps thermal throttling at bay; if that’s not enough, the system also has 10 heavy-duty 80mm fans.


Tech Explainer: CPUs and GPUs for AI training and inferencing


Which is best for AI – a CPU or a GPU? Like much in life, it depends.


While central processing units and graphics processing units serve different roles in AI training and inferencing, both roles are vital to AI workloads.

CPUs and GPUs were both invented long before the AI era. But each has found new purpose as the robots conduct more of our day-to-day business.

Each has its tradeoffs. Most CPUs are less expensive than GPUs, and they typically require less electric power. But that doesn’t mean CPUs are always the best choice for AI workloads. Like lots of things in life, it depends.

Two Steps to AI

A typical AI application involves a two-step process. First training. Then inferencing.

Before an AI model can be deployed, it must first be trained for its task. That task could be suggesting which movie to watch next on Netflix or detecting counterfeit currency in a retail environment.

Once the AI model has been deployed, it can begin the inferencing process. In this stage, the AI application interfaces with users, devices and other models. Then it autonomously makes predictions and decisions based on new input.

For example, Netflix’s recommendation engine is powered by an AI model. The AI was first trained to consider your watching history and stated preferences, as well as to review newly available content. Then the AI employs inferencing—what we might call reasoning—to suggest a new movie or TV show you’re likely to enjoy.
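
The two stages map directly onto code. Below is a toy PyTorch sketch with a stand-in model and random data, purely to show where training ends and inferencing begins:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)   # stand-in for a real recommendation model
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Step 1: Training. Many passes over labeled data; weights get updated.
for _ in range(100):
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))  # fake batch
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Step 2: Inferencing. Weights frozen; new input in, prediction out.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 10)).argmax(dim=1)
```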

AI Training

GPU architectures like the one found in the AMD Instinct MI325X accelerator offer highly parallel processing. In other words, a GPU can perform many calculations simultaneously.

The AMD Instinct MI325X has more than 300 GPU compute units. They make the accelerator faster and more adept at both processing large datasets and handling the repetitious numerical operations common to the training process.

These capabilities also mean GPUs can accelerate the training process. That’s especially true for large models, such as those that underpin the networks used for deep learning.

CPUs, by contrast, excel at general-purpose tasks. Compared with a GPU, a CPU will be better at completing sequential tasks that require logic or decision-making. For this reason, a CPU’s role in AI training is mostly limited to data preprocessing and coordinating GPU tasks.
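
That division of labor is easy to demonstrate. In the PyTorch sketch below, the GPU takes the big parallel matrix multiply, while the CPU would keep handling the surrounding sequential logic. (Illustrative only; timings depend entirely on your hardware.)

```python
import time
import torch

n = 4096
a, b = torch.randn(n, n), torch.randn(n, n)

t0 = time.perf_counter()
_ = a @ b                          # one big, highly parallel matrix multiply on the CPU
cpu_s = time.perf_counter() - t0

if torch.cuda.is_available():      # includes AMD GPUs running under ROCm
    ag, bg = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    _ = ag @ bg                    # same multiply across thousands of GPU threads
    torch.cuda.synchronize()       # GPU kernels are async; wait before timing
    print(f"CPU: {cpu_s:.3f}s  GPU: {time.perf_counter() - t0:.3f}s")
```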

AI Inferencing

However, when it comes to AI inferencing, CPUs play a much more significant role. Often, inferencing can be a relatively lightweight workload, because it’s not highly parallel. A good example is the AI capability present in modern edge devices such as the latest iOS and Android smartphones.

As mentioned above, the average CPU also consumes less power than a GPU. That makes a CPU a better choice in situations where heat and battery life are important.
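
One common trick for CPU-friendly inferencing is quantization, which shrinks model weights to int8 to cut memory use and power draw. Here’s a minimal sketch using PyTorch’s dynamic quantization on a hypothetical toy model:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 8))
model.eval()

# Dynamic int8 quantization: weights stored as int8, activations quantized
# on the fly. Smaller and often faster for CPU-bound inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.no_grad():
    out = quantized(torch.randn(1, 128))
```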

However, not all inferencing applications are lightweight, and such workloads may not be appropriate for CPUs. One example is autonomous vehicles, which require massive parallel processing in real time to ensure safety and optimum efficiency.

In these cases, GPUs will play a bigger role in the AI inferencing process, despite their higher cost and power requirements.

Powerful GPUs are already used for AI inferencing at the core. Examples include large-scale cloud services such as AWS, Google Cloud and Microsoft Azure.

Enterprise Grade

Enterprises often conduct AI training and inferencing on a scale so massive that it eclipses anything found in edge environments. In these cases, IT engineers must rely on hugely powerful systems.

One example is the Supermicro AS -8125GS-TNMR2 server. This 8U behemoth—weighing in at 225 pounds—can host up to eight AMD Instinct MI300X accelerators. And it’s equipped with dual AMD EPYC processors, the customer’s choice of either the 9004 or 9005 series.

To handle some of the world’s most demanding AI workloads, Supermicro’s server is packed with an astonishing amount of tech. In addition to those eight GPUs and two CPUs, the server has room for 6TB of ECC DDR5 memory and 18 hot-swap 2.5-inch NVMe and SATA drives.

That makes the Supermicro system one of the most capable and powerful servers now available. And as AI evolves, tech leaders including AMD and Supermicro will undoubtedly produce more powerful CPUs, GPUs and servers to meet the growing demand.

What will the next generation of AI training and inferencing technology look like? To find out, you won’t have to wait long.


2024: A look back at the year’s best


Let's look back at 2024, a year when AI was everywhere, AMD introduced its 5th Gen EPYC processors, and Supermicro led with liquid cooling.


You couldn't call 2024 boring.

If anything, the year was almost too exciting, too packed with important events, and moving much too fast.

Looking back, a handful of 2024’s technology events stand out. Here are a few of our favorite things.

AI Everywhere

In March AMD’s chief technology officer, Mark Papermaster, made some startling predictions that turned out to be absolutely true.

Speaking at an investors’ event sponsored by Arete Research, Papermaster said, “We’re thrilled to bring AI across our entire product portfolio.” AMD has indeed done that, offering AI capabilities from PCs to servers to high-performance GPU accelerators.

Papermaster also said the buildout of AI is an event as big as the launch of the internet. That certainly sounds right.

He also said AMD expects the total addressable market for AI to reach $400 billion by 2027. If anything, that was too conservative. More recently, consultants Bain & Co. predicted that figure will reach $780 billion to $990 billion.

Back in March, Papermaster said AMD had increased its projection for full-year AI sales from $2 billion to $3.5 billion. That’s probably too low, too.

AMD recently reported revenue of $3.5 billion for its data-center group for just the third quarter alone. The company attributed at least some of the group’s 122% year-on-year increase to the strong ramp of AMD Instinct GPU shipments.

5th Gen AMD EPYC Processors

October saw AMD introduce the fifth generation of its powerful line of EPYC server processors.

The 5th Gen AMD EPYC processors use the company’s new ‘Zen 5’ core architecture. The line includes over 25 SKUs offering anywhere from 8 to 192 cores, among them a model—the AMD EPYC 9575F—designed specifically to work with GPU-powered AI solutions.

The market has taken notice. During the October event, AMD CEO Lisa Su told the audience that roughly one in three servers worldwide (34%) are now powered by AMD EPYC processors. And Supermicro launched its new H14 line of servers that use the new EPYC processors.

Supermicro Liquid Cooling

As servers gain power to add AI and other compute-intensive capabilities, they also run hotter. For data-center operators, that presents multiple challenges. One big one is cost: air conditioning is expensive. What’s more, AC may be unable to cool the new generation of servers.

Supermicro has a solution: liquid cooling. For some time, the company has offered liquid cooling as a data-center option.

In November the company took a new step in this direction. It announced a server that comes with liquid cooling only.

The server in question is the Supermicro 2U 4-node FlexTwin, model number AS -2126FT-HE-LCC. It’s a high-performance, hot-swappable, high-density compute system designed for HPC workloads.

Each 2U system comprises 4 nodes, and each node is powered by dual AMD EPYC 9005 processors. (The previous-gen AMD EPYC 9004s are supported, too.)

To keep cool, the FlexTwin server uses a direct-to-chip (D2C) cold plate liquid cooling setup. Each system also runs 16 counter-rotating fans. Supermicro says this cooling arrangement can remove up to 90% of server-generated heat.

AMD Instinct MI325X Accelerator

A big piece of AMD’s product portfolio for AI is its Instinct line of accelerators. This year the company promised to maintain a yearly cadence of new Instinct models.

Sure enough, in October the company introduced the AMD Instinct MI325X Accelerator. It’s designed for Generative AI performance and working with large language models (LLMs). The system offers 256GB of HBM3E memory and up to 6TB/sec. of memory bandwidth.

Looking ahead, AMD expects to formally introduce the line’s next member, the AMD Instinct MI350, in the second half of next year. AMD has said the new accelerator will be powered by a new AMD CDNA 4 architecture, and will improve AI inferencing performance by up to 35x compared with the older Instinct MI300.

Supermicro Edge Server

A lot of computing now happens at the edge, far beyond either the office or corporate data center.

Even more edge computing is on tap. Market watcher IDC predicts double-digit growth in edge-computing spending through 2028, when it believes worldwide sales will hit $378 billion.

Supermicro is on it. At the 2024 MWC, held in February in Barcelona, the company introduced an edge server designed for the kind of edge data centers run by telcos.

Known officially as the Supermicro A+ Server AS -1115SV-WTNRT, it’s a 1U short-depth server powered by a single AMD EPYC 8004 processor with up to 64 cores. That’s edgy.

Happy Holidays from all of us at Performance Intensive Computing. We look forward to serving you in 2025.


Faster is better. Supermicro with 5th Gen AMD is faster


Supermicro servers powered by the latest AMD processors are up to 9 times faster than a previous generation, according to a recent benchmark.


When it comes to servers, faster is just about always better.

With faster processors, workloads get completed in less time. End users get their questions answered sooner. Demanding high-performance computing (HPC) and AI applications run more smoothly. And multiple servers get all their jobs done more rapidly.

And if you’ve installed, set up or managed one of these faster systems, you’ll look pretty smart.

That’s why the latest benchmark results from Supermicro are so impressive, and also so important.

The tests show that Supermicro servers powered by the latest AMD processors are up to 9 times faster than a previous generation. These systems can make your customer happy—and make you look good.

SPEC Check

The benchmarks in question are those of the Standard Performance Evaluation Corp., better known as SPEC. It’s a nonprofit consortium that sets benchmarks for running complete applications.

Supermicro ran its servers on SPEC’s CPU 2017 benchmark, a suite of 43 benchmarks that measure and compare compute-intensive performance. All of them stress a system’s CPU, memory subsystem and compiler—emphasizing all three of these components working together, not just the processor.

To provide a comparative measure of integer and floating-point compute-intensive performance, the benchmark uses two main metrics. The first is speed: how much time a server needs to complete a single task. The second is throughput: how much work the server gets done while running multiple concurrent copies of a benchmark.

The results are given as comparative scores. In general, higher is better.
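
Under the hood, a SPEC CPU 2017 score is the geometric mean of per-benchmark runtime ratios against a fixed reference machine. Here’s a simplified illustration of the idea in Python. (This is not SPEC’s official tooling, and the runtimes are invented.)

```python
from math import prod

def spec_style_score(ref_seconds, test_seconds):
    """Geometric mean of runtime ratios vs. a reference machine (higher is better)."""
    ratios = [r / t for r, t in zip(ref_seconds, test_seconds)]
    return prod(ratios) ** (1 / len(ratios))

# Hypothetical runtimes (seconds) for three benchmarks:
print(spec_style_score([1000, 2000, 1500], [125, 250, 190]))  # ~8x the reference
```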

Super Server

The server tested was the Supermicro H14 Hyper server, model number AS -2126HS-TN. It’s powered by dual AMD EPYC 9965 processors and loaded with 1.5TB of memory.

This server has been designed for applications that include HPC, cloud computing, AI inferencing and machine learning.

In the floating-point measure, the new server was 8x faster than a Supermicro server powered by an earlier-generation AMD EPYC 7601 processor.

In the integer rate measure, it was almost 9x faster than a circa-2018 Supermicro server.

Impressive results. And remember, when it comes to servers, faster is better.


Tech Explainer: Why does PCIe 5.0 matter? And what’s coming next?


PCIe 5.0 connects high-speed components to servers and PCs. Versions 6 & 7, coming soon, will deliver even higher speeds for tomorrow’s AI workloads.


You’ve no doubt heard of PCIe 5.0. But what is it exactly? And why does it matter?

As the name and number imply, PCIe 5.0 is the fifth generation of the Peripheral Component Interconnect Express interface standard. PCIe essentially sets the rules for connecting high-speed components such as GPUs, networking cards and storage devices to servers, desktop PCs and other devices.

To be sure, these components could be connected via a number of other interface standards, such as USB-C and SATA.

But PCIe 5.0 alone offers extremely high bandwidth and low latency. That makes it a better choice for mission-critical enterprise IT operations and resource-intensive AI workloads.

Left in the Dust

The 5th generation of PCIe was released in May 2019, bringing significant improvements over PCIe 4.0. These include:

  • Increased Bandwidth. PCIe 5.0 has a maximum throughput of 32 giga-transfers per second (GT/s)—effectively double the bandwidth of its predecessor. In terms of data transfer, 32 GT/s translates to around 4 GB of data throughput per lane in each direction. That allows for a total of 64 GB/s across a 16-lane (x16) slot, as used by a PCIe-based GPU. That’s perfect for modern GPU-dependent workflows such as AI inferencing. (The lane arithmetic is sketched just after this list.)
  • Lower Latency. Keeping latency as low as possible is crucial for applications like gaming, high-performance computing (HPC) and AI workloads. High latency can inhibit data retrieval and processing, which in turn hurts both application performance and the user experience. The latency of PCIe 5.0 varies depending on multiple factors, including network connectivity, attached devices and workloads. But it’s safe to assume an average latency of around 100 nanoseconds (ns) — roughly 50% less than PCIe 4.0. And again, with latency, lower is better.
  • Enhanced Data-Center Features. Modern data-center operations are among the most demanding. That’s especially true for IT operations focused on GenAI, machine learning and telecom. So it’s no surprise that PCIe 5.0 includes several features focused on enhanced operations for data centers. Among the most notable is increased bandwidth and faster data access for NVMe storage devices. PCIe 5.0 also includes features that enhance power management and efficiency.
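
As promised in the first bullet above, the per-lane arithmetic is straightforward. PCIe 5.0 signals at 32 GT/s per lane with 128b/130b line encoding, which is where the roughly 4 GB/s-per-lane figure comes from:

```python
# PCIe 5.0 lane arithmetic (illustrative).
transfers_per_s = 32e9                            # 32 GT/s per lane
encoding = 128 / 130                              # 128b/130b line encoding
bytes_per_lane = transfers_per_s * encoding / 8   # ~3.94 GB/s per direction

print(f"x16, per direction: ~{bytes_per_lane * 16 / 1e9:.0f} GB/s")
# -> ~63 GB/s, i.e. the "total of 64 GB/s" figure above, rounded
```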

Leveraging PCIe 5

AMD is a front-runner in the race to help enterprises cope with modern AI workloads. And the company has been quick to take advantage of PCIe 5.0’s performance improvements. Take, for example, the AMD Instinct MI325X Accelerator.

This system is a leading-edge accelerator module for generative AI, inference, training and HPC. Each discrete AMD Instinct MI325X offers a 16-lane PCIe Gen 5 host interface and seven AMD Infinity Fabric links for full connectivity between eight GPUs in a ring.

By leveraging a PCIe 5.0 connection, AMD’s accelerator can offer I/O-to-host-CPU and scale-out network bandwidths of 128 GB/sec.

AMD is also using PCIe on its server processors. The new 5th generation AMD EPYC server processors take advantage of PCIe 5.0’s impressive facility. Specifically, the AMD EPYC 9005 Series processors support 128 PCIe 5 I/O lanes in a single-socket server. For dual-socket servers, support increases to 160 lanes.

Supermicro is another powerful force in enterprise IT operations. The company’s behemoth H14 8-GPU system (model number AS-8126GS-TNMR2) leverages AMD EPYC processors and AMD Instinct accelerators to help enterprises deploy the largest AI and large language models (LLMs).

The H14’s standard configuration includes eight PCIe 5.0 x16 low-profile slots and two full-height slots. Users can also opt for a PCIe expansion kit, which adds two additional PCIe 5.0 slots. That brings the grand total to an impressive 12 PCIe 5.0 16-lane expansion slots.

PCIe 6.0 and Beyond

PCIe 5.0 is now entering its sixth year of service. That’s not a long time in the grand scheme of things. But the current version might feel ancient to IT staff who need to eke out every shred of bandwidth to support modern AI workloads.

Fortunately, a new PCIe generation is in the works. The PCIe 6.0 specification, currently undergoing testing and development, will offer still more performance gains over its predecessor.

PCI-SIG, an organization committed to developing and enhancing the PCI standard, says the 6.0 platform’s upgrades will include:

  • A data rate of up to 64 GT/sec., double the current rate and providing a maximum bidirectional bandwidth of up to 256 GB/sec for x16 lanes
  • Pulse Amplitude Modulation with 4 levels (PAM4)
  • Lightweight Forward Error Correction (FEC) and Cyclic Redundancy Check (CRC) to mitigate the increase in bit error rate associated with PAM4 signaling
  • Backwards compatibility with all previous generations of PCIe technology

There’s even a next generation after that, PCIe 7.0. This version could be released as soon as 2027, according to the PCI-SIG. That kind of speed makes sense considering the feverish rate at which new technology is being developed to enable and expand AI operations.

It’s not yet clear how accurate those release dates are. But one thing’s for sure: You won’t have to wait long to find out.


Supermicro JumpStart remote test site adds latest 5th Gen AMD EPYC processors


Register now to test the Supermicro H14 2U Hyper with dual AMD EPYC 9965 processors from the comfort and convenience of your office.


Supermicro’s JumpStart remote test site will soon let you try out a server powered by the new 5th Gen AMD EPYC processors from any location you choose.

The server is the Supermicro H14 2U Hyper with dual AMD EPYC 9965 processors. It will be available for remote testing on the Supermicro JumpStart site starting on Dec. 2. Registration is open now.

The JumpStart site lets you use a Supermicro server solution online to validate, test and benchmark your own workloads, or those of your customers. And using JumpStart is free.

All test systems on JumpStart are fully configured with SSH (the Secure Shell network protocol); VNC (Virtual Network Computing remote-access software); and Web IPMI (the Intelligent Platform Management Interface). During your test, you can open one session of each.

Using the Supermicro JumpStart remote testing site is simple:

Step 1: Select the system you want to test, and the time slot when you want to test it.

Step 2: At the scheduled time, log in to the JumpStart site using your Supermicro single sign-on (SSO) account. If you don’t have an account yet, create one and then use it to log in to JumpStart. (Creating an account is free.)

Step 3: Use the JumpStart site to validate, test and benchmark your workloads!

Rest assured, Supermicro will protect your privacy. Once you’re done testing a system on JumpStart, Supermicro will manually erase the server, reflash the BIOS and firmware, and re-install the OS with new credentials.

Hyper power

The AMD-powered server recently added to JumpStart is the Supermicro H14 2U Hyper, model number AS -2126HS-TN. It’s powered by dual AMD EPYC 9965 processors. Each of these CPUs offers 192 cores and a maximum boost clock of 3.7 GHz.

This Supermicro server also features 3.8TB of storage and 1.5TB of memory. The system is built in the 2U rackmount form factor.

Are you eager to test this Supermicro server powered by the latest AMD EPYC CPUs? JumpStart is here to help you.


Supermicro FlexTwin now supports 5th gen AMD EPYC CPUs


FlexTwin, part of Supermicro’s H14 server line, now supports the latest AMD EPYC processors — and keeps things chill with liquid cooling.

 


Wondering about the server of the future? It’s available for order now from Supermicro.

The company recently added support for the latest 5th Gen AMD EPYC 9005 Series processors on its 2U 4-node FlexTwin server with liquid cooling.

This server is part of Supermicro’s H14 line and bears the model number AS -2126FT-HE-LCC. It’s a high-performance, hot-swappable and high-density compute system.

Intended users include oil & gas companies, climate and weather modelers, manufacturers, scientific researchers and research labs. In short, anyone who requires high-performance computing (HPC).

Each 2U system comprises four nodes. And each node, in turn, is powered by a pair of 5th Gen AMD EPYC 9005 processors. (The previous-gen AMD EPYC 9004 processors are supported, too.)

Memory on this Supermicro FlexTwin maxes out at 9TB of DDR5, courtesy of up to 24 DIMM slots. Expansion cards connect via PCIe 5.0, with one slot per node standard and more available as an option.

The 5th Gen AMD EPYC processors, introduced last month, are designed for data center, AI and cloud customers. The series launched with over 25 SKUs offering up to 192 cores and all using AMD’s new “Zen 5” or “Zen 5c” architectures.

Keeping Cool

To keep things chill, the Supermicro FlexTwin server is available with liquid cooling only. This allows the server to be used for HPC, electronic design automation (EDA) and other demanding workloads.

More specifically, the FlexTwin server uses a direct-to-chip (D2C) cold plate liquid cooling setup, and each system also runs 16 counter-rotating fans. Supermicro says this cooling arrangement can remove up to 90% of server-generated heat.

The server’s liquid cooling also covers the 5th gen AMD processors’ more demanding cooling requirements; they’re rated at up to 500W of thermal design power (TDP). By comparison, some members of the previous, 4th gen AMD EPYC processors have a default TDP as low as 200W.

Build & Recycle

The Supermicro FlexTwin server also adheres to the company’s “Building Block Solutions” approach. Essentially, this means end users purchase these servers by the rack.

Supermicro says its Building Blocks let users optimize for their exact workload. Users also gain efficient upgrading and scaling.

Looking even further into the future, once these servers are ready for an upgrade, they can be recycled through the Supermicro recycling program.

In Europe, Supermicro follows the EU’s Waste Electrical and Electronic Equipment (WEEE) Directive. In the U.S., recycling is free in California; users in other states may have to pay a shipping charge.

Put it all together, and you’ve got a server of the future, available to order today.


Tech Explainer: What is the AMD “Zen” core architecture?


Originally launched in 2017, this CPU architecture now delivers high performance and efficiency with ever-thinner processes.


The recent release of AMD’s 5th generation processors—formerly codenamed Turin—also heralded the introduction of the company’s “Zen 5” core architecture.

“Zen” is AMD’s name for a design ethos that prioritizes performance, scalability and efficiency. As any CTO will tell you, these 3 aspects are crucial for success in today’s AI era.

AMD originally introduced its “Zen” architecture in 2017 as part of a broader campaign to steal market share and establish dominance in the all-important enterprise IT space.

Subsequent generations of the “Zen” design have markedly increased performance and efficiency while delivering ever-thinner manufacturing processes.

Now and Zen

Since the “Zen” core’s original appearance in AMD Ryzen 1000-series processors, the architecture’s design philosophy has maintained its focus on a handful of vital aspects. They include:

  • A modular design. AMD’s Infinity Fabric interconnect facilitates efficient connectivity among multiple CPU cores and other components. This modular architecture enhances scalability and performance, both of which are vital for modern enterprise IT infrastructure.
  • High core counts and multithreading. Both are common to EPYC and Ryzen CPUs built using the AMD “Zen” core architecture. Simultaneous multithreading enables each core to process 2 threads. In the case of EPYC processors, this makes AMD’s CPUs ideal for multithreaded workloads that include Generative AI, machine learning, HPC and Big Data. (The thread-count math is sketched just after this list.)
  • Advanced manufacturing processes. These allow faster, more efficient communication among individual CPU components, including multithreaded cores and multilevel caches. Back in 2017, the original “Zen” architecture was manufactured using a 14-nanometer (nm) process. Today’s new “Zen 5” and “Zen 5c” architectures (more on these below) reduce the lithography to just 4nm and 3nm, respectively.
  • Enhanced efficiency. This enables IT staff to better manage complex enterprise IT infrastructure. Reducing heat and power consumption is crucial, too, both in data centers and at the edge. The AMD “Zen” architecture makes this possible by offering enterprise-grade EPYC processors that offer up to 192 cores, yet require a maximum thermal design power (TDP) of only 500W.
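
The thread-count math flagged in the second bullet is simple enough to check in a few lines:

```python
import os

# Simultaneous multithreading: 2 hardware threads per "Zen" core.
cores_per_socket = 192                    # top-end 5th Gen AMD EPYC (9965)
threads = cores_per_socket * 2
print(f"{threads} threads per socket, {threads * 2} in a dual-socket server")

# On a live Linux host, os.cpu_count() reports the logical (thread) count:
print(os.cpu_count())
```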

The Two-Fold Path

The latest, fifth generation “Zen” architecture is divided into two segments: “Zen 5” and “Zen 5c.”

“Zen 5” employs a 4-nanometer (nm) manufacturing process to deliver up to 128 cores operating at up to 4.1GHz. It’s optimized for high per-core performance.

“Zen 5c,” by contrast, offers a 3nm lithography that’s reserved for AMD EPYC 96xx, 97xx, 98xx, and 99xx series processors. It’s optimized for high density and power efficiency.

The most powerful of these CPUs—the AMD EPYC 9965—includes an astonishing 192 cores, a maximum boost clock speed of 3.7GHz, and an L3 cache of 384MB.

Both “Zen 5” and “Zen 5c” are key components of the 5th gen AMD EPYC processors introduced earlier this month. Both have also been designed to achieve double-digit increases in instructions per clock cycle (IPC) and equip the core with the kinds of data handling and processing power required by new AI workloads.

Supermicro’s Satori

AMD isn’t the only brand offering bold, new tech to harried enterprise IT managers.

Supermicro recently introduced its new H14 servers, GPU-accelerated systems and storage servers powered by AMD EPYC 9005 Series processors—the new “Turin” CPUs—and AMD Instinct MI325X accelerators.

The new product line features updated versions of Supermicro’s vaunted Hyper system, Twin multinode servers, and AI-inferencing GPU systems. All are now available with the user’s choice of either air or liquid cooling.

Supermicro says its collection of purpose-built powerhouses represents one of the industry’s most extensive server families. That should be welcome news for organizations intent on building a fleet of machines to meet the highly resource-intensive demands of modern AI workloads.

By designing its next-generation infrastructure around AMD 5th Generation components, Supermicro says it can dramatically increase efficiency by reducing customers’ total data-center footprints by at least two-thirds.

Enlightened IT for the AI Era

While AMD and Supermicro’s advances represent today’s cutting-edge technology, tomorrow is another story entirely.

Keeping up with customer demand and the dizzying pace of AI-based innovation means these tech giants will soon return with more announcements, tools and design methodologies. AMD has already promised a new accelerator, the AMD Instinct MI350, will be formally announced in the second half of 2025.

As far as enterprise CTOs are concerned, the sooner, the better. To survive and thrive amid heavy competition, they’ll need an evolving array of next-generation technology. That will help them cut costs even as they expand their product offerings—a kind of technological nirvana.


Do your customers need more room for AI? AMD has an answer


If your customers are looking to add AI to already-crowded, power-strapped data centers, AMD is here to help. 


How can your customers make room for AI in data centers that are already full?

It’s a question that’s far from academic. Nine in 10 tech vendors surveyed recently by the Uptime Institute expect AI to be widely used in data centers in the next 5 years.

Yet data center space is both hard to find and costly to rent. Vacancy rates have hit new lows, according to real-estate services firm CBRE Group.

Worse, this combination of supply shortages and high demand is driving up data center pricing and rents. Across North America, CBRE says, pricing is up by 20% year-on-year.

Getting enough electric power is an issue, too. Some utilities have told prospective data-center customers they won’t get the power they requested until the next decade, reports The Wall Street Journal. In other cases, strapped utilities are simply giving customers less power than they asked for.

So how to help your customers get their data centers ready for AI? AMD has some answers. And a free software tool to help.

The AMD Solution

AMD’s solution is simple, with just 2 points:

  • Make the most of existing data-center real estate and power by consolidating existing workloads.
  • Replace the low-density compute of older, inefficient and out-of-warranty systems with compute that’s newer, denser and more efficient.

AMD is making the case that your customers can do both by moving from older Intel-based systems to newer ones that are AMD-based.

For example, the company says, replacing servers based on Intel Xeon 6143 “Skylake” processors with those based on AMD EPYC 9334 CPUs can result in the need for 73% fewer servers, 70% fewer racks and 69% less power.

That could include Supermicro servers powered by AMD EPYC processors. Supermicro H13 servers using AMD EPYC 9004 Series processors offer capabilities for high-performance data centers.

AMD hasn’t yet done comparisons with either its new 5th gen EPYC processors (introduced last week) or Intel’s 86xx CPUs. But the company says the results should be similar.

Consolidating processor-based servers can also make room in your customers’ racks for AMD Instinct MI300 Series accelerators designed specifically for AI and HPC workloads.

For example, if your customer has older servers based on Intel Xeon Cascade Lake processors, migrating them to servers based on AMD EPYC 9754 processors instead can gain them as much as a 5-to-1 consolidation.

The result? Enough power and room to accommodate a new AI platform.
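
To make the consolidation math concrete, here’s a quick sketch using the percentages cited above. (Illustrative only; real counts depend on the workloads involved.)

```python
# Consolidation arithmetic from the figures cited above (illustrative).
old_xeon_6143 = 100
after_epyc_9334 = round(old_xeon_6143 * (1 - 0.73))   # "73% fewer servers"
print(f"{old_xeon_6143} Xeon 6143 servers -> {after_epyc_9334} EPYC 9334 servers")

# The Cascade Lake -> EPYC 9754 example: as much as 5-to-1 consolidation.
old_cascade_lake = 100
print(f"{old_cascade_lake} servers -> {old_cascade_lake // 5} EPYC 9754 servers")
```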

Questions Answered

Simple doesn’t always mean easy. And you and your customers may have concerns.

For example, isn’t switching from one vendor to another difficult?

No, says AMD. The company cross-licenses the x86 instruction set, so on its processors, most workloads and applications will just work.

What about all those cores on AMD processors? Won’t they raise a customer’s failure domain too high?

No, says AMD. Its CPUs are scalable enough to handle any failure domain from 8 to 256 cores per server.

Wouldn’t moving require a cold migration? And if so, wouldn’t that disrupt the customer’s business?

Again, AMD says no. While moving virtual machines (VMs) to a new architecture does require a cold migration, the job can be done without any application downtime.

That’s especially true if you use AMD’s free open-source tool known as VAMT, short for VMware Architecture Migration Tool. VAMT automates cold migration. In one AMD test, it migrated hundreds of VMs in just an hour.

So if your customers are among those struggling to find room for AI systems in their already-crowded, power-strapped data centers, tell them to consider a move to AMD.


AMD intros CPUs, accelerators, networking for end-to-end AI infrastructure -- and Supermicro supports


AMD expanded its end-to-end AI infrastructure products for data centers with new CPUs, accelerators and network controllers. And Supermicro is already offering supporting servers. 


AMD today held a roughly two-hour conference in San Francisco during which CEO Lisa Su and other executives introduced a new generation of server processors, the next model in the Instinct MI300 Accelerator family, and new data-center networking devices.

As CEO Su told the live and online audience, AMD is committed to offering end-to-end AI infrastructure products and solutions in an open, partner-dependent ecosystem.

Su further explained that AMD’s new AI strategy has 4 main goals:

  • Become the leader in end-to-end AI
  • Create an open AI software platform of libraries and models
  • Co-innovate with partners including cloud providers, OEMs and software creators
  • Offer all the pieces needed for a total AI solution, all the way from chips to racks to clusters and even entire data centers.

And here’s a look at the new data-center hardware AMD announced today.

5th Gen AMD EPYC CPUs

The EPYC line, originally launched in 2017, has become a big success for AMD. As Su told the event audience, the largest cloud providers now offer more than 950 EPYC-powered instances, and AMD hardware partners offer EPYC processors on more than 350 platforms. Market share is up, too: Roughly one in three servers worldwide (34%) now run on EPYC, Su said.

The new EPYC processors, formerly codenamed Turin and now known as the AMD EPYC 9005 Series, are now available for data center, AI and cloud customers.

The new CPUs also have a new core architecture known as “Zen 5.” AMD says “Zen 5” outperforms the previous “Zen 4” generation by 17% on enterprise instructions per clock and by up to 37% on AI and HPC workloads.

The new 5th Gen line has over 25 SKUs, and core counts range widely, from as few as 8 to as many as 192. For example, the new AMD EPYC 9575F is a 64-core, 5GHz CPU designed specifically for GPU-powered AI solutions.

AMD Instinct MI325X Accelerator

About a year ago, AMD introduced the Instinct MI300 Accelerators, and since then the company committed itself to introducing new models on a yearly cadence. Sure enough, today Lisa Su introduced the newest model, the AMD Instinct MI325X Accelerator.

Designed for Generative AI performance and built on the AMD CDNA3 architecture, the new accelerator offers up to 256GB of HBM3E memory, and bandwidth up to 6TB/sec.

Shipments of the MI325X are set to begin in this year’s fourth quarter. Partner systems with the new AMD accelerator are expected to start shipping in next year’s first quarter.

Su also mentioned the next model in the line, the AMD Instinct MI350, which will offer up to 288GB of HBM3E memory. It’s set to be formally announced in the second half of next year.

Networking Devices

Forrest Norrod, AMD’s head of data-center solutions, introduced two networking devices designed for data centers running AI workloads.

The AMD Pensando Salina DPU is designed for front-end connectivity. It supports throughput of up to 400 Gbps.

The AMD Pensando Pollara 400, designed for back-end networks connecting multiple GPUs, is the industry’s first Ultra-Ethernet Consortium-ready AI NIC.

Both parts are sampling with customers now, and AMD expects to start general shipments in next year’s first half.

Both devices are needed, Norrod said, because AI dramatically raises networking demands. He cited studies showing that connectivity currently accounts for 40% to 75% of the time needed to run certain AI training and inference models.

Supermicro Support

Supermicro is among the AMD partners already ready with systems based on the new AMD processors and accelerator.

Wasting no time, Supermicro today announced new H14 series servers, including both Hyper and FlexTwin systems, that support the 5th Gen AMD EPYC 9005 processors and AMD Instinct MI325X accelerators.

The Supermicro H14 family includes three systems for AI training and inference workloads. Supermicro says the systems can also accommodate the higher thermal requirements of the new AMD EPYC processors, which are rated at up to 500W. Liquid cooling is an option, too.
