Check out Supermicro’s new AMD GPU-powered server—it’s air-cooled

Supermicro’s new 10U server is powered by AMD’s EPYC CPUs and Instinct MI355X GPUs. And it’s kept cool by nearly 20 fans.


What do you do if you need GPU power for AI and other compute-intensive workloads, but lack the infrastructure for liquid cooling?

Supermicro has the answer. The company just introduced a 10U server powered by AMD Instinct MI355X GPUs that’s air-cooled.

The new server, showcased at the recent SC25 conference in St. Louis, is Supermicro model AS-A126GS-TNMR.

Each server is powered by the customer’s choice of dual AMD EPYC 9004 or 9005 Series CPUs with up to 384 cores and 768 threads. The system also features a total of eight AMD Instinct MI355X onboard OAM GPU accelerator modules, which are air-cooled. (OAM is short for OCP Accelerator Module, an industry-standard form factor for AI hardware.) In addition, these accelerated GPU servers offer up to 6TB of DDR5 system memory.

While the systems are air-cooled with up to 19 heavy-duty fans, there’s no penalty in terms of cooling capacity. In fact, AMD has boosted the GPU’s thermal design power (TDP), the maximum heat a component generates that its cooling system must dissipate, from 1000W to 1400W.
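How much heat are we talking about? Here’s a rough, back-of-envelope sketch in Python. The GPU TDP comes from this article; the per-socket CPU TDP and the 15 C inlet-to-outlet temperature rise are illustrative assumptions, not Supermicro specifications.

```python
# Back-of-envelope heat load and airflow estimate for an 8-GPU air-cooled node.
GPU_TDP_W = 1400          # AMD Instinct MI355X TDP, per the article
NUM_GPUS = 8
CPU_TDP_W = 400           # assumed per-socket EPYC TDP (illustrative)
NUM_CPUS = 2

heat_w = GPU_TDP_W * NUM_GPUS + CPU_TDP_W * NUM_CPUS   # ~12,000 W

# Airflow needed to carry that heat away, using the sensible-heat relation
# Q = m_dot * c_p * dT for air (c_p ~ 1005 J/kg*K, rho ~ 1.2 kg/m^3).
C_P = 1005.0              # J/(kg*K)
RHO = 1.2                 # kg/m^3
DELTA_T = 15.0            # assumed inlet-to-outlet air temperature rise, K

m_dot = heat_w / (C_P * DELTA_T)     # kg/s of air
cfm = (m_dot / RHO) * 2118.88        # m^3/s -> cubic feet per minute

print(f"Heat load: {heat_w/1000:.1f} kW, airflow: {cfm:,.0f} CFM")
# -> Heat load: 12.0 kW, airflow: ~1,405 CFM
```

That’s a serious volume of air to move, which is why this server needs nearly 20 fans.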

Also, compared with the company’s air-cooled 8U server based on AMD Instinct MI350X GPUs, the 10U server offers a double-digit percentage gain in performance, according to Supermicro. For end users, that means faster data processing.

More Per Rack

The bigger picture: Supermicro’s new 10U option lets customers unlock higher performance per rack. And customers get their choice of 10U air cooling or 4U liquid cooling, both powered by the latest AMD EPYC processors.

Supermicro’s GPU solutions are designed to offer maximum performance for AI and inference at scale. And they’re intended for use by both cloud service providers and enterprises.

Are your customers looking for a GPU-powered server that’s air cooled? Tell them about these new Supermicro 10U servers. And let them know that these systems are ready to ship now.


Tech Explainer: What’s liquid cooling? And why might your data center need it now?


Liquid cooling offers big efficiency gains over traditional air. And while there are upfront costs, for data centers with high-performance AI and HPC servers, the savings can be substantial. Learn how it works.


Increasingly resource-intensive AI workloads are creating more demand for advanced data center cooling systems. Today, the most efficient and cost-effective method is liquid cooling.

A liquid-cooled PC or server relies on a liquid rather than air to remove heat from vital components that include CPUs, GPUs and AI accelerators. The heat produced by these components is transferred to a liquid. Then the liquid carries away the heat to where it can be safely dissipated.

Most computers don’t require liquid cooling. That’s because general-use consumer and business machines don’t generate enough heat to justify liquid cooling’s higher upfront costs and additional maintenance.

However, high-performance systems designed for tasks such as gaming, scientific research and AI can often operate better, longer and more efficiently when equipped with liquid cooling.

How Liquid Cooling Works

For the actual coolant, most liquid systems use either water or dielectric fluids. Before water is added to a liquid cooler, it’s demineralized to prevent corrosion and build-up. And to prevent freezing and bacterial growth, the water may also be mixed with a combination of glycol, corrosion inhibitors and biocides.

Thus treated, the coolant is pushed through the system by an electric pump. A single liquid-cooled PC or server will need to include its own pump. But for enterprise data center racks containing multiple servers, the liquid is pumped by what’s known as an in-rack cooling distribution unit (CDU). Then the liquid is distributed to each server via a coolant distribution manifold (CDM).

As the liquid flows through the system, it’s channeled into cold plates that are mounted atop the system’s CPUs, GPUs, DIMM modules, PCIe switches and other heat-producing components. Each cold plate has microchannels through which the liquid flows, absorbing and carrying away each component’s thermal energy.

The next step is to safely dissipate the collected heat. To accomplish this, the liquid is pumped back through the CDU, which sends the now-hot liquid to a mechanism that removes the heat. This is typically done using chillers, cooling towers or heat exchangers.

Finally, the cooled liquid is sent back to the systems’ heat-producing components to begin the process again.
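If you want to put numbers on that loop, the governing relation is the sensible-heat equation Q = m_dot x c_p x delta-T. Here’s a minimal Python sketch; the heat load and the coolant temperature rise are illustrative assumptions, not vendor figures.

```python
# Minimal sketch: how much water flow does a cold-plate loop need to remove
# a given heat load? Uses Q = m_dot * c_p * dT.
HEAT_LOAD_W = 10_000        # assumed heat load picked up by the loop, W
C_P_WATER = 4186.0          # J/(kg*K), specific heat of water
DELTA_T = 10.0              # assumed coolant temperature rise across cold plates, K

m_dot = HEAT_LOAD_W / (C_P_WATER * DELTA_T)   # kg/s of coolant
lpm = m_dot * 60                              # ~1 kg of water per liter -> L/min

print(f"Required flow: {m_dot:.2f} kg/s (~{lpm:.1f} L/min)")
# -> Required flow: 0.24 kg/s (~14.3 L/min)
```

A modest flow of treated water, in other words, can carry away heat that would take a wall of fans to move as air.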

Liquid Pros & Cons

The most compelling aspect of liquid cooling is its efficiency. Water moves heat up to 25 times better than air while using less energy to do it. In comparison with traditional air, liquid cooling can reduce cooling energy costs by up to 40%.

But there’s more to the efficiency of liquid cooling than just cutting costs. Liquid cooling also enables IT managers to move servers closer together, packing in more power and storage per square foot. Given the high cost of data center real estate, and the fullness of many data centers, that’s an important benefit.

In addition, liquid cooling can better handle the latest high-powered processing components. For instance, Supermicro says its DLC-2 next-generation Direct Liquid-Cooling solutions, introduced in May, can accommodate warmer liquid inflow temperatures while also enhancing AI per watt.

But liquid cooling systems have their downsides, too. For one, higher upfront costs can present a barrier for entry. Sure, data center operators will realize a lower total cost of ownership (TCO) over the long run. But when deploying a liquid-cooled data center, they must still contend with initial capital expense (CapEx) outlays—and justifying those costs to the CFO.

For another, IT managers might think twice about the additional complexity and risks of a liquid cooling solution. More components and variables mean more things that can go wrong. Data center insurance premiums may rise too, since a liquid cooling system can always spring a leak.

Driving Demand: AI

All that said, the market for liquid cooling systems is primed for serious growth.

As AI workloads become increasingly resource-intensive, IT managers are deploying more powerful servers to keep up with demand. These high-performance machines produce more heat than previous generations. And that creates increased demand for efficient, cost-effective cooling solutions.

How much demand? This year, the data center liquid cooling market is projected to drive global sales of $2.84 billion, according to MarketsandMarkets.

Looking ahead, the industry watcher expects the global liquid cooling market to reach $21.14 billion by 2032. If that happens, the rise will represent a compound annual growth rate (CAGR) of 33% over the projected period.
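Those two figures check out. A quick sanity check in Python:

```python
# Growing $2.84B (2025) to $21.14B (2032) implies roughly 33% compound
# annual growth over the 7-year span, matching the projection above.
start, end, years = 2.84, 21.14, 2032 - 2025
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")   # -> CAGR: 33.2%
```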

Coming Soon: Immersion Cooling

In the near future, AI workloads will likely become even more demanding. This means data centers will need to deploy—and cool—ultra-dense AI server clusters that produce tremendous amounts of heat.

To deal with this extra heat, IT managers may need the next step in data center cooling: immersion.

With immersion cooling, an entire rack of servers is submerged horizontally in a tank filled with what’s known as dielectric fluid. This is a non-conductive liquid that ensures the server’s hardware can operate while submerged, and without short-circuiting.

Immersion cooling is being developed along two paths. The most common variety is called single-phase, and it operates similarly to an aquarium’s water filter. As pumps circulate the dielectric fluid around the servers, the fluid is heated by the server’s components. Then it’s cooled by an external heat exchanger.

The other type of immersion cooling is known as two-phase. Here, the system uses a dielectric fluid engineered to have a relatively low boiling point, around 50 C / 122 F. As this fluid is heated by the immersed server, it boils, creating a vapor that rises to condensers installed at the top of the tank. There, the vapor condenses back into cooler liquid, which drips back down into the tank.

This natural convection means there’s no need for electric pumps. It’s a glimpse of a smarter, more efficient liquid future, coming soon to a data center near you.


Retail AI at the edge: Now here from Supermicro, AMD & Wobot.ai


Retailers can now use AI to analyze in-store videos, thanks to a new system from Supermicro, AMD and Wobot.ai.


Artificial intelligence is being adapted for specific industry verticals. That now includes retail.

Supermicro, AMD and Wobot Intelligence Inc., a video intelligence supplier, are partnering to provide retailers with a short-depth server they can use to drive AI-powered analysis of their in-store videos. With these analyses, retailers can improve store operations, elevate the customer experience and boost sales.

The new server system was recently showcased by the three partners at NRF Europe 2025, an international conference for retailers. This year’s NRF Europe was held in Paris, France, in mid-September.

The new retail system is based on a Supermicro 1U server, model AS-1115S-FWTRT. It’s a short-depth, front-I/O system powered by a single AMD EPYC 8004 Series processor.

The server’s other features include dual 10G ports, dual 2.5-inch drive bays, up to 768GB of DDR5 memory, and an 800W redundant platinum power supply. This server is air-cooled by as many as six heavy-duty fans, and it supports a pair of single-width GPUs.

Good to Go

The retail system’s video-analysis software, provided by Wobot.ai, features a single dashboard, performance benchmarking, and easy installation and configuration. It’s designed to work with a user’s existing CCTV setup.

The company’s WoConnect app helps users connect digital video recorders (DVRs) and network video recorders (NVRs) in their private network to their Wobot.ai account. The app routes the user’s camera feeds to the AI.

Target use cases for retailers include store operations, loss prevention and compliance, customer behavior and footfall analysis.

More specifically, retailers can use the system to conduct video analyses that include:

  • Zone-based analytics: Which areas of the store draw the most attention? Which products draw interaction? How do customers move through the store?
  • Heat maps and event tracking: Visualize “crowd magnets” to improve future sales.
  • Customer-path analysis: Observe which sections of the store customers explore the most, and also see where they linger.

Using the system, retailers can enjoy a long list of benefits that include accelerated checkout processes, fewer customer walkaways, fine-tuned staffing levels, and improved product placement.

For example, a chain of juice bars with nearly 145 locations in California turned to Wobot.ai for help speeding customer service and improving employee productivity. Based on its video analyses, the retailer worked with Wobot.ai to design a pilot program for 10 stores. In just three months, the pilot delivered an annualized revenue lift of 2% to 2.5% in the test stores.

Wobot.ai also offers its video intelligence systems to other verticals, including hospitality, food service and security.

Edgy

One important feature of the new server is that it allows retailers to run real-time AI-powered video analysis at the edge. The Supermicro server is housed in a short-depth form factor, meaning it can be run in retail sites that lack a full-fledged data center.

Similarly, the system’s AMD EPYC 8004 processor has been optimized for power efficiency—important for installations at the edge. Featuring up to 64 ‘Zen4c’ dense cores, this AMD processor is specifically designed for intelligent edge and communications workloads.

By processing the AI analysis on-premises, the new system also offers low latency and high levels of privacy. Wobot.ai says its software can scale across thousands of locations.

And the software is designed to be integrated easily with retailers’ existing camera infrastructure. In this way, it offers fast time-to-value and a quick return on investment.
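What might that on-premises pipeline look like? Here’s an illustrative Python sketch of the general pattern: reading frames from an existing CCTV camera over RTSP and running inference locally, so video never leaves the store. To be clear, this is not Wobot.ai’s actual API; the URL and the model stub are hypothetical placeholders.

```python
# Illustrative edge video-analysis loop. NOT Wobot.ai's API; the camera URL
# and analyze_frame() stub are hypothetical placeholders.
import cv2  # OpenCV: pip install opencv-python

RTSP_URL = "rtsp://192.168.1.50:554/stream1"   # hypothetical in-store camera

def analyze_frame(frame):
    """Placeholder for a local vision model (e.g., person detection)."""
    h, w = frame.shape[:2]
    return {"frame_size": (w, h)}  # a real model would return detections

cap = cv2.VideoCapture(RTSP_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = analyze_frame(frame)   # inference stays on the edge server
    # ...aggregate results into zone analytics, heat maps, path analysis...
cap.release()
```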

Do you have retail customers looking for an edge—with AI at the edge? Tell them about this new retail solution today.


4 IT events this fall you won’t want to miss


Important IT industry events are coming in October and November, with lots of participation from AMD and Supermicro.


Summer’s over…somehow it’s already October…and that means it’s time to attend important IT industry conferences, summits and other get-togethers.

Here’s your Performance Intensive Computing preview of four top events coming this month and next.

OCP Global Summit

  • Where & when: San Jose, California; Oct. 13-16, 2025
  • Who it’s for: This event, sponsored by the Open Compute Project (OCP), is for anyone interested in redesigning open source hardware to support the changing demands on compute infrastructure. This year’s theme: “Leading the future of AI.”
  • Who will be there: Speakers this year include Vik Malyala, senior VP of technology and AI at Supermicro; Mark Papermaster, CTO of AMD; Johnson Eung, staff growth product manager in AI at Supermicro; Shane Corban, senior director of technical product management at AMD; and Morris Ruan, director of product management at Supermicro.
     
  • Fun facts: AMD is a Diamond sponsor, and Supermicro is an Emerald sponsor.

~~~~~~~~~~~~~~~~~~~~

AMD AI Developer Day

  • Where & when: San Francisco, Oct. 20, 2025
  • Who it’s for: Developers of artificial intelligence applications and systems. Workshop topics will include developing multi-model, multi-agent systems; generating videos using open source tools; and developing optimized kernels.
  • Who will be there: Speakers will include executives from the University of California, Berkeley; Red Hat AI; Google DeepMind; and OpenAI. Also speaking will be execs from Ollama, an open source platform for AI models; Unsloth AI, an open source AI startup; vLLM, a library for large language model (LLM) inference and serving; and SGLang, an LLM framework.
  • Fun facts:
    • Supermicro is a conference sponsor.
    • During the conference, winners of the AMD Developer Challenge will be announced. The grand prize winner will take home $100,000.
    • AMD, PyTorch and Unsloth AI are co-sponsoring a virtual hackathon, the Synthetic Data AI Agents Challenge, on Oct. 18-20. The first-prize winners will receive $3,000 plus 1,200 hours of GPU credits.

~~~~~~~~~~~~~~~~~~~~

AI Infra Summit

  • Where & when: San Francisco; Nov. 7, 2025
  • Who it’s for: Anyone interested in the convergence of AI innovation and scalable infrastructure. This event is being hosted by Ignite, a go-to-market provider for the technology industry.
  • Who will be there: The speaker lineup is still TBA, but is promised to include enterprise technology leaders, AI and machine learning engineers, cloud and data center architects, venture capital investors, and infrastructure vendors.
  • Fun facts:
    • This is a hybrid event. You can attend either live or online.
    • AMD and Supermicro are Stadium-level sponsors.

~~~~~~~~~~~~~~~~~~~~

SC25

  • Where & when: St. Louis, Missouri; Nov. 16-21, 2025
  • Who it’s for: The global supercomputing community, including those working in high performance computing (HPC), networking, storage and analysis. This year’s theme: “HPC ignites.”
  • Who will be there: Speakers will include nearly a dozen AMD executives, among them Rob Curtis, a Fellow in Data Center Platform Engineering; Shelby Lockhart, a software system engineer; and Nuwan Jayasena, a Fellow in AMD Research. They and other speakers will appear in panels, paper presentations, workshops, tutorials and more.
     
  • Fun facts: SC25 will feature a series of noncommercial “Birds of a Feather” sessions that allow attendees to openly discuss topics of mutual interest.

 


Vultr, Supermicro, AMD team to offer high-performance cloud compute & AI infrastructure


Vultr, a global provider of cloud services, now offers Supermicro servers powered by AMD Instinct GPUs.


Supermicro servers powered by the latest AMD Instinct GPUs and supported by the AMD ROCm open software ecosystem are at the heart of a global cloud infrastructure program offered by Vultr.

Vultr calls itself a modern hyperscaler, meaning it provides cloud solutions for organizations facing complex AI and HPC workloads, high operational costs, vendor lock-in, and the need for rapid insights.

Launched in 2014, Vultr today offers services from 32 data centers worldwide, which it says can reach 90% of the world’s population in under 40 milliseconds. Vultr’s services include cloud instances, dedicated servers, cloud GPUs, and managed services for database, cloud storage and networking.

Vultr’s customers enjoy benefits that include costs 30% to 50% lower than those of the hyperscalers and 20% to 30% lower than those of other independent cloud providers. These customers—there are over 220,000 of them worldwide—also enjoy Vultr’s full native AI stack of compute, storage and networking.

Vultr is the flagship product of The Constant Co., based in West Palm Beach, Fla. The company was founded by David Aninowsky, an entrepreneur who also started GameServers.com and served as its CEO for 18 years.

Now Vultr counts among its partners AMD, which joined the Vultr Cloud Alliance, a partner program, just a year ago. In addition, AMD’s venture group co-led a funding round this past December that brought Vultr $333 million.

Expanded Data Center

Vultr is now expanding its relationship with Supermicro, in part because that company is first to market with the latest AMD Instinct GPUs. Vultr now offers Supermicro systems powered by AMD Instinct MI355X, MI325X and MI300X GPUs. And as part of the partnership, Supermicro engineers work on-site with Vultr technicians.

Vultr is also relying on Supermicro for scaling. That’s a challenge for large AI implementations, as these configurations require deep expertise for both integration and operations.

Among Vultr’s offerings from Supermicro is a 4U liquid-cooled server (model AS-4126GS-NMR-LCC) with dual AMD EPYC 9005/9004 processors and up to eight AMD GPUs, the user’s choice of either MI325X or MI355X.

Another benefit of the new arrangement is access to AMD’s ROCm open source software environment, which will be made available within Vultr’s composable cloud infrastructure. This AMD-Vultr combo gives users access to thousands of open source, pre-trained AI models & frameworks.

Rockin’ with ROCm

AMD’s latest update to the software is ROCm 7, introduced in July and now live and ready to use. Version 7 offers advancements that include big performance gains, advanced features for scaling AI, and enterprise-ready AI tools.

One big benefit of AMD ROCm is that its open software ecosystem eliminates vendor lock-in. And when integrated with Vultr, ROCm supports AI frameworks that include PyTorch and TensorFlow, enabling flexible, rapid innovation. Further, ROCm future-proofs AI solutions by ensuring compatibility across hardware, promoting adaptability and scalability.
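Here’s what that flexibility looks like in practice. ROCm builds of PyTorch reuse the familiar torch.cuda API, so a quick sanity check (and existing CUDA-style code) runs unchanged on AMD Instinct GPUs:

```python
# Quick sanity check on a ROCm-enabled PyTorch install.
import torch

print("HIP/ROCm build:", torch.version.hip is not None)
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.randn(4096, 4096, device="cuda")  # "cuda" maps to the AMD GPU
    y = x @ x                                   # matmul runs on the Instinct GPU
    print("Result shape:", y.shape)
```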

AMD’s roadmap is another attraction for Vultr. AMD products on tap for 2026 include the Instinct MI400 Series GPUs, the heart of the upcoming Helios rack solution; new EPYC CPUs (codenamed Venice); and an 800 Gb/sec NIC (codenamed Vulcano).

In turn, Vultr is big business for AMD. Late last year, a tech blog reported that Vultr’s first shipment of AMD Instinct MI300X GPUs numbered “in the thousands.”


How Supermicro/AMD servers boost AI performance with MangoBoost


Supermicro and MangoBoost are together delivering an optimized end-to-end GenAI stack. It’s based on Supermicro servers powered by AMD Instinct GPUs and running MangoBoost’s LLMBoost software.


Many organizations implementing AI for business are discovering that deploying and operating large language models (LLMs) at scale isn’t easy.

They’re finding that the hardware demands are intense. And so are the performance and cost trade-offs. Also, with AI workloads increasingly demanding multi-node GPU clusters, orchestration and tuning can be complex.

To address these challenges, Supermicro and MangoBoost Inc. are working together to deliver an optimized end-to-end GenAI stack. They’ve combined Supermicro’s robust AMD Instinct GPU server portfolio with MangoBoost’s LLMBoost software.

Meet MangoBoost

If you’re unfamiliar with MangoBoost, the company offers programmable solutions that improve data-center application performance while lowering CPU overhead. MangoBoost was founded three years ago; today it operates in the United States, Canada and South Korea.

MangoBoost’s core product is called the Data Processing Unit. It ensures full compatibility with general-purpose GPUs, accelerators and storage devices, enabling cost-efficient and standardized AI infrastructures.

MangoBoost also offers a ready-to-deploy, full-stack AI inference server. Known as Mango LLMBoost, it’s available from the Big Three cloud providers—AWS, Microsoft Azure and Google Cloud.

LLMBoost helps organizations accelerate both training and deploying LLMs at scale. Why is this so challenging? Because once a model is ready for inference, developers face what’s known as a “productization tax.”

Integrating the machine-learning processing pipeline into the rest of the application often requires additional time and engineering effort. And this can lead to delays.

Mango LLMBoost addresses these challenges by creating an easy-to-use container. This lets LLM experts optimize their models, then select suitable GPUs on demand.

MangoBoost’s inference engine uses three forms of GPU parallelism, allowing GPUs to balance their compute, memory and network-resource usage. In addition, the software’s intelligent job scheduling optimizes cluster-wide GPU resources, ensuring that the load is balanced equally across GPU nodes.

LLMBoost also ensures the effective use of low-latency GPU caches and high-bandwidth memory through quantization. This reduces the data footprint, but without lowering accuracy.
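To see why quantization shrinks the footprint, consider a rough weights-only estimate for a 70-billion-parameter model at different precisions (activations and the KV cache add more on top):

```python
# Approximate weight memory for a 70B-parameter model at various precisions.
PARAMS = 70e9
for name, bytes_per_param in [("FP16", 2), ("FP8", 1), ("INT4", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{name}: ~{gb:,.0f} GB")
# FP16: ~140 GB, FP8: ~70 GB, INT4: ~35 GB
```

Halving or quartering the bytes per parameter means more of the model fits in low-latency GPU caches and high-bandwidth memory, which is exactly the effect described above.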

Complementing Hardware

MangoBoost’s LLMBoost software complements the powerful hardware with a full-stack, production-ready AI MLOps platform. It includes:

  • Plug-and-play deployment: Pre-built Docker images and an intuitive command-line interface (CLI) both help developers to launch LLM workloads quickly.
  • OpenAI-compatible API: Lets developers integrate LLM endpoints with minimal code changes (see the sketch after this list).
  • Kubernetes-native orchestration: Provides automated deployment and management of autoscaling, load balancing and job scheduling for seamless operation across both single- and multi-node clusters.
  • Full-stack performance auto-tuning: Unlike conventional auto-tuners that handle model hyper-parameters only, LLMBoost optimizes every layer from the inference and training back-ends to network configurations and GPU runtime parameters. This ensures maximum hardware utilization, yet without requiring any manual tuning.
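As a quick illustration of that OpenAI-compatible point, here’s a minimal sketch using the standard OpenAI Python client pointed at a local endpoint. The URL, API key and model name are hypothetical placeholders, not MangoBoost’s documented values:

```python
# Point the standard OpenAI client at a local, OpenAI-compatible endpoint
# instead of api.openai.com. URL, key and model name are placeholders.
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local LLM endpoint
    api_key="not-needed-locally",         # placeholder; local servers often ignore it
)

resp = client.chat.completions.create(
    model="llama-2-70b",                  # hypothetical deployed model name
    messages=[{"role": "user", "content": "Summarize our Q3 results."}],
)
print(resp.choices[0].message.content)
```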

Proof of Performance

Supermicro and MangoBoost collaborating to deliver an optimized end-to-end Generative AI stack sounds good. But how does the combined solution actually perform?

To find out, Supermicro, AMD and MangoBoost recently tested their combined solution using real-world GenAI workloads. Here are the results:

  • LLMBoost reduced training time by 40% for two-node training, down to 13.3 minutes on a dual-node AMD Instinct MI325X configuration. The training ran Llama 2 70B, an LLM with 70 billion parameters, with LoRA (low-rank adaptation).
  • LLMBoost achieved 1.96x higher throughput for multi-node inference on Supermicro AMD servers, reaching over 61,000 tokens/sec. on a dual-node AMD Instinct MI325X configuration.
  • In-house LLM inference with Llama 4 Maverick and Scout models achieved near-linear scaling on AMD Instinct MI325X nodes. (Maverick is designed for fast responses at low cost; Scout, for long-document analysis.) This shows that Supermicro systems are ready for real-time GenAI deployment.
  • Load balancing: The researchers ran LLaVA, a vision-language model, on three setups. The heterogeneous dual-node configuration, with eight AMD Instinct MI300X GPUs and eight AMD Instinct MI325X GPUs, achieved 96% of the sum of the individual single-node runs. This demonstrates minimal overhead and high efficiency.

Are your customers looking for a turnkey GenAI cluster solution that’s high-performance, flexible and easy to operate? Then tell them that Supermicro, AMD and MangoBoost have their solution—and the proof that it works.


Deploy GenAI with confidence: Validated Server Designs from Supermicro and AMD


Learn about the new Validated Design for AI clusters from Supermicro and AMD. It can save you time, reduce complexity and improve your ROI.


The task of designing, building and connecting a server system that can run today’s artificial intelligence workloads is daunting.

Mainly, because there are a lot of moving parts. Assembling and connecting them all correctly is not only complicated, but also time-consuming.

Supermicro and AMD are here to help. They’ve recently co-published a Validated Design document that explains how to build an AI cluster. The PDF also tells you how you can acquire an AMD-powered Supermicro AI cluster pre-built, with all elements connected, configured and burned in before shipping.

Full-Stack for GenAI

Supermicro and AMD are offering a fully validated, full-stack solution for today’s Generative AI workloads. The system’s scale can be easily adjusted from as few as 16 nodes to as many as 1,024—and points in between.

This Supermicro solution is based on three AMD elements: the AMD Instinct MI325X GPU, AMD Pensando Pollara 400 AI network interface card (NIC), and AMD EPYC CPU.

These three AMD parts are all integrated with Supermicro’s optimized servers. That includes network cabling and switching.

The new Validated Design document helps potential buyers understand the joint AMD-Supermicro solution’s key elements. To shorten your implementation time, the document also provides an organized plan from start to finish.

Under the Cover

This comprehensive report—22 pages plus a lengthy appendix—goes into a lot of technical detail. That includes the traffic characteristics of AI training, impact of large “elephant” flows on the network fabric, and dynamic load balancing. Here’s a summary:

  • Foundations of AI Fabrics: Remote Direct Memory Access (RDMA), PCIe switching, Ethernet, IP and Border Gateway Protocol (BGP).
  • Validated Design Equipment and Configuration: Server options that optimize RDMA traffic with minimal distance, latency and silicon between the RDMA-capable NIC (RNIC) and accelerator.
  • Scaling Out the Accelerators with an Optimized Ethernet Fabric: Components and configurations including the AMD Pensando Pollara 400 Ethernet NIC and Supermicro’s own SSE-T8196 Ethernet switch.
  • Design of the Scale Unit—Scaling Out the Cluster: Designs are included for both air-cooled and liquid-cooled setups.
  • Resource Management and Adding Locality into Work Placement: Covering the Simple Linux Utility for Resource Management (SLURM) and topology optimization, including the concept of rails (see the sketch after this list).
  • Supermicro Validated AMD Instinct MI325 Design: Shows how you can scale the validated design all the way to 8,000 AMD MI325X GPUs in a cluster.
  • Storage Network Validated Design: Multiple alternatives are offered.
  • Importance of Automation: Human errors are, well, human. Automation can help with tasks including the production of detailed architectural drawings, output of cabling maps, and management of device firmware.
  • How to Minimize Deployment Time: Supermicro’s Rack Scale Solution Stack offers a fully integrated, end-to-end solution. And by offering a system that’s pre-validated, this also eases the complexity of multi-vendor integration.
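To make the rails-and-locality idea concrete, here’s a hypothetical sketch of a SLURM topology.conf, the file that tells the scheduler which nodes share a leaf switch so it can place a job’s nodes topologically close together. The switch names and node ranges are illustrative, not taken from the Validated Design:

```
# Hypothetical SLURM topology.conf (tree topology plugin).
# Group nodes under their leaf switches so SLURM can keep a job's
# GPU nodes close together on the fabric.
SwitchName=leaf01 Nodes=gpu[001-016]
SwitchName=leaf02 Nodes=gpu[017-032]
SwitchName=spine01 Switches=leaf[01-02]
```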

Total Rack Solution

Looking to minimize implementation times? Supermicro offers a total rack scale solution that’s fully integrated and end-to-end.

This frees the user from having to integrate and validate a multi-vendor solution. Basically, Supermicro does it for you.

By leveraging industry-leading energy efficiency, liquid and air-cooled designs, and global logistics capabilities, Supermicro delivers a cost-effective and future-proof solution designed to meet the most demanding IT requirements.

The benefits to the customer include reduced operational overhead, a single point of accountability, streamlined procurement and deployment, and maximum return on investment.

For onsite deployment, Supermicro provides a turnkey, fully optimized rack solution that is ready to run. This helps organizations maximize efficiency, lower costs and ensure long-term reliability. It includes a dedicated on-site project manager.


Tech Explainer: What’s special about an AI server?


What’s in an AI server that a general-purpose system lacks?


The Era of Artificial Intelligence requires its own class of servers, and rightly so. The AI tech that increasingly powers our businesses, finance, entertainment and scientific research is some of the most resource-intensive in history. Without AI servers, all this would grind to a halt.

But why? What’s so special about AI servers? And how are they able to power successive evolutions of large language models, generative AI, machine learning, and all the other AI-based workloads we’ve come to rely on day in and day out?

Put another way: What do AI servers have that standard servers don’t?

The answer can be summed up in a single word: More.

When it comes to AI servers, it’s all about managing a symphony. The musical instruments include multiple processors, GPUs, memory modules, networking hardware and expansion options.

Sure, your average general-purpose server has many similar components. But both the quantity and the performance of each component are considerably lower than those of an AI server. That helps keep the price affordable, heat low, and workload options open. But it also means a general-purpose server lacks the GPU muscle needed to run demanding AI workloads.

Best of the Beasts

Supermicro specializes in the deployment of jaw-dropping power. The company’s newest 8U GPU Server (AS-8126GS-TNMR) is engineered to chew through the world’s toughest AI workloads. It’s powered by dual AMD EPYC processors and eight AMD Instinct MI350X or Instinct MI325X accelerators. This server can tackle AI workloads while staying cool and scaling up to meet increasing demand.

Keeping AI servers from overheating can be a tough job. Even a lowly, multipurpose business server kicks off a lot of heat. Temperatures build up around vital components like the CPU, GPU and storage devices. If that heat hangs around too long, it can lead to performance issues and, eventually, system failure.

Preventing heat-related issues in a single general-purpose server can be accomplished with a few heatsinks and small-diameter fans. But when it comes to high-performance, multi-GPU servers like Supermicro’s new 4U GPU A+ Server (AS-4126GS-NMR-LCC), liquid cooling becomes a must-have.

It’s also vital that AI servers be designed with expansion in mind. When an AI-powered app becomes successful, IT managers must be able to scale up quickly and without interruption.

Supermicro’s H14 8U 8-GPU System sets the standard for scalability. The H14 offers up to 20 storage drives and up to 12 PCI Express 5.0 (PCIe) x16 expansion slots.
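How much bandwidth is that per slot? A rough calculation in Python:

```python
# PCIe 5.0 runs at 32 GT/s per lane with 128b/130b line encoding, so an
# x16 slot moves roughly 63 GB/s in each direction, before protocol overhead.
GT_PER_S = 32           # PCIe 5.0 transfer rate per lane
LANES = 16
ENCODING = 128 / 130    # 128b/130b line encoding

gbytes_per_s = GT_PER_S * LANES * ENCODING / 8
print(f"~{gbytes_per_s:.0f} GB/s per direction per x16 slot")  # ~63 GB/s
```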

Users can fill these high-bandwidth slots with a dizzying array of optional hardware, including:

  • Network Interface Cards (NICs) like the new AI-focused AMD AI NIC for high-speed networking.
  • NVMe storage to provide fast disk access.
  • Field Programmable Gate Array (FPGA) modules, which can be set up for custom computation and reconfigured after deployment.
  • Monitoring and control management cards. These enable IT staff to power servers on and off remotely and to access BIOS settings (see the monitoring sketch after this list).
  • Additional GPUs to aid in AI training and inferencing.
  • AI Accelerators. The AMD Instinct series is designed to tackle computing for AI, both training and inference.
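And a quick illustration of the monitoring angle mentioned above: an admin can poll AMD GPU utilization and temperature with the rocm-smi CLI that ships with the ROCm stack. Here’s a minimal Python sketch; the JSON field names vary between ROCm releases, so the parsing is illustrative:

```python
# Poll AMD GPU utilization and temperature via the rocm-smi CLI.
import json
import subprocess

out = subprocess.run(
    ["rocm-smi", "--showuse", "--showtemp", "--json"],
    capture_output=True, text=True, check=True,
).stdout

for gpu, stats in json.loads(out).items():
    print(gpu, stats)   # e.g. card0 {'GPU use (%)': '87', 'Temperature ...': '64.0'}
```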

A Different Class of Silicon

Hardware like the Supermicro GPU Server epitomizes what it means to be an AI server. That’s due in part to the components it’s designed to house. We’re talking about some of the most advanced processing tech available today.

As mentioned above, that tech comes courtesy of AMD, whose 5th Gen AMD EPYC 9005 series processors and recently announced AMD Instinct MI350 Series GPUs are powerful enough to tackle any AI workload.

AMD’s Instinct MI350 accelerators deliver a 4x generation-on-generation AI compute increase and a 35x generational leap in inferencing.

Say the word, and Supermicro will pack your AI server with dual AMD EPYC processors containing up to 192 cores. They’ll install the latest AMD Instinct MI350X platform with 8 GPUs, fill all 24 DIMM slots with 6TB of DDR5 memory, and add an astonishing 16 NVMe U.2 drives.

Advances Just Around the Corner

It seems like each new day brings stories about bold advances in AI. Apparently, our new robot friends may have the answer to some very human questions like, how can we cure our most insidious diseases? And how do we deal with the looming threat of climate crisis?

The AI models that could answer those questions—not to mention the ones that will help us find even better movies on Netflix—will require more power as they grow.

To meet those demands, AI server engineers are already experimenting with the next generation of advanced cooling for dense GPU clusters, enhanced hardware-based security, and new, more scalable modular infrastructure.

In fact, AI server designers have begun using their own AI models to create bigger and better AI servers. How very meta.


Meet Supermicro’s newest AI servers, powered by AMD Instinct MI350 Series GPUs


Supermicro’s new AI servers are powered by a combination of AMD EPYC CPUs and AMD Instinct GPUs.


Supermicro didn’t waste any time supporting AMD’s new Instinct MI350 Series GPUs. The same day AMD formally introduced the new GPUs, Supermicro announced two rack-mount servers that support them.

The new servers, members of Supermicro’s H14 generation of GPU-optimized solutions, feature dual AMD EPYC 9005 CPUs along with the AMD Instinct MI350 Series GPUs. They’re aimed at organizations looking to achieve a formerly tough combination: maximum performance at scale in their AI-driven data centers, but also a lower total cost of ownership (TCO).

To make the new servers easy to upgrade and scale, Supermicro has designed them around its proven building-block architecture.

Here’s a quick look at the two new Supermicro servers:

4U liquid-cooled system with AMD Instinct MI355X GPU

This system, model number AS-4126GS-NMR-LCC, comes with a choice of dual AMD EPYC 9005 or 9004 Series CPUs, both with liquid cooling.

On the GPU front, users also have a choice of the AMD Instinct MI325X or brand-new AMD Instinct MI355X. Either way, this server can handle up to 8 GPUs.

Liquid cooling is provided by a single direct-to-chip cold plate. Further cooling comes from 5 heavy-duty fans and an air shroud.

8U air-cooled system with AMD Instinct MI350X GPU

This system, model number AS-8126GS-TNMR, comes with a choice of dual AMD EPYC 9005 or 9004 Series CPUs, both with air cooling.

This system supports both the AMD Instinct MI325X and AMD Instinct MI350X GPUs. And like the 4U server, it can handle up to 8 GPUs.

Air cooling is provided by 10 heavy-duty fans and an air shroud.

The two systems also share some features in common. These include PCIe 5.0 connectivity, large memory capacities (up to 2.3TB), and support for both AMD’s ROCm open-source software and AMD Infinity Fabric Link connections for GPUs.

“Supermicro continues to lead the industry with the most experience in delivering high-performance systems designed for AI and HPC applications,” says Charles Liang, president and CEO of Supermicro. “The addition of the new AMD Instinct MI350 series GPUs to our GPU server lineup strengthens and expands our industry-leading AI solutions and gives customers greater choice and better performance as they design and build the next generation of data centers.”


AMD presents its vision for the AI future: open, collaborative, for everyone


Check out the highlights of AMD’s Advancing AI event—including new GPUs, software and developer resources.


AMD advanced its AI vision at the “Advancing AI” event on June 12. The event, held live in the Silicon Valley city of San Jose, Calif., as well as online, featured presentations by top AMD executives and partners.

As many of the speakers made clear, AMD’s vision for AI is that it be open, developer-friendly, collaborative and useful to all.

AMD certainly believes the market opportunity is huge. During the day’s keynote, CEO Lisa Su said AMD now believes the total addressable market (TAM) for data-center AI will exceed $500 billion by as soon as 2028.

And that’s not all. Su also said she expects AI to move beyond the data center, finding new uses in edge computers, PCs, smartphones and other devices.

To deliver on this vision, Su explained, AMD is taking a three-pronged approach to AI:

  • Offer a broad portfolio of compute solutions.
  • Invest in an open development ecosystem.
  • Deliver full-stack solutions via investments and acquisitions.

The event, lasting over two hours, was also filled with announcements. Here are the highlights.

New: AMD Instinct MI350 Series

At the Advancing AI event, CEO Su formally announced the company’s AMD Instinct MI350 Series GPUs.

There are two models, the MI350X and MI355X. Though both are based on the same silicon, the MI355X supports higher thermals.

These GPUs, Su explained, are based on AMD’s 4th gen Instinct architecture, and each GPU comprises 10 chiplets containing a total of 185 billion transistors. The new Instinct solutions can be used for both AI training and AI inference, and they can also be configured in either liquid- or air-cooled systems.

Su said the MI355X delivers a massive 35x general increase in AI performance over the previous-generation Instinct MI300. For AI training, the Instinct MI355X offers up to 3x more throughput than the Instinct MI300. And in comparison with a leading competitive GPU, the new AMD GPU can create up to 40% more tokens per dollar.

AMD’s event also featured several representatives of companies already using AMD Instinct MI300 GPUs. They included Microsoft, Meta and Oracle.

Introducing ROCm 7 and AMD Developer Cloud

Vamsi Boppana, AMD’s senior VP of AI, announced ROCm 7, the latest version of AMD’s open-source AI software stack. ROCm 7 features improved support for industry-standard frameworks; expanded hardware compatibility; and new development tools, drivers, APIs and libraries to accelerate AI development and deployment.

Earlier in the day, CEO Su said AMD’s software efforts “are all about the developer experience.” To that end, Boppana introduced the AMD Developer Cloud, a new service designed for rapid, high-performance AI development.

He also said AMD is giving developers a 25-hour credit on the Developer Cloud with “no strings.” The new AMD Developer Cloud is generally available now.

Road Map: Instinct MI400, Helios rack, Venice CPU, Vulcano NIC

During the last segment of the AMD event, Su gave attendees a sneak peek at several forthcoming products:

  • Instinct MI400 Series: This GPU is being designed for both large-scale AI inference and training. It will be the heart of the Helios rack solution (see below) and provide what Su described as “the engine for the next generation of AI.” Expect performance of up to 40 petaflops, 432GB of HBM4 memory, and bandwidth of 19.6TB/sec.
  • Helios: The code name for a unified AI rack solution coming in 2026. As Su explained it, Helios will be a rack configuration that functions like a single AI engine, incorporating AMD’s EPYC CPU, Instinct GPU, Pensando Pollara network interface card (NIC) and ROCm software. Specs include up to 72 GPUs in a rack and 31TB of HBM3 memory.
  • Venice: This is the code name for the next generation of AMD EPYC server CPUs, Su said. They’ll be built on a 2nm process, feature up to 256 cores, and offer a 1.7x performance boost over the current generation.
  • Vulcano: A future NIC, it will be built on a 3nm process and feature speeds of up to 800Gb/sec.
