
Check out Supermicro’s new AMD GPU-powered server—it’s air-cooled

Supermicro’s new 10U server is powered by AMD’s EPYC CPUs and Instinct MI355X GPUs. And it’s kept cool by nearly 20 fans.


What do you do if you need GPU power for AI and other compute-intensive workloads, but lack the infrastructure for liquid cooling?

Supermicro has the answer. The company just introduced a 10U server powered by AMD Instinct MI355X GPUs that’s air-cooled.

The new server, showcased at the recent SC25 conference in St. Louis, is Supermicro model AS -A126GS-TNMR.

Each server is powered by the customer’s choice of dual AMD EPYC 9004 or 9005 Series CPUs with up to 384 cores and 768 threads. The system also features a total of eight AMD Instinct MI355X onboard OAM GPU accelerator modules, which are air-cooled. (OAM is short for OCP Accelerator Module, an industry-standard form factor for AI hardware.) In addition, these accelerated GPU servers offer up to 6TB of DDR5 system memory.

While the systems are air-cooled with up to 19 heavy-duty fans, there's no penalty in cooling capacity. In fact, AMD has raised the GPU's thermal design power (TDP), the maximum amount of heat the chip is designed to generate and its cooling system must dissipate, from 1000W to 1400W.
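Some rough arithmetic shows what those fans are up against. The GPU count and TDP come from this article; the per-CPU and overhead wattages are illustrative assumptions, not Supermicro specs:

```python
# Rough heat budget for the 8-GPU air-cooled server (illustrative only).
GPU_TDP_W = 1400       # per-GPU thermal design power, per the article
NUM_GPUS = 8
CPU_TDP_W = 400        # assumed per-CPU figure, not a published spec
NUM_CPUS = 2
OVERHEAD_W = 1000      # assumed draw for memory, drives, NICs and fans

server_w = GPU_TDP_W * NUM_GPUS + CPU_TDP_W * NUM_CPUS + OVERHEAD_W
servers_per_rack = 4   # four 10U systems fit a standard 42U rack
print(f"~{server_w/1000:.1f} kW per server, "
      f"~{server_w*servers_per_rack/1000:.0f} kW per rack")
# ~13.0 kW per server, ~52 kW per rack -- all heat the 19 fans must exhaust
```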

Also, compared with the company's air-cooled 8U server based on AMD Instinct MI350X GPUs, the 10U server delivers a double-digit percentage gain in performance, according to Supermicro. For end users, that means faster data processing.

More Per Rack

The bigger picture: Supermicro's new 10U option lets customers unlock higher performance per rack, with a choice of 10U air cooling or 4U liquid cooling, both powered by the latest AMD EPYC processors.

Supermicro’s GPU solutions are designed to offer maximum performance for AI and inference at scale. And they’re intended for use by both cloud service providers and enterprises.

Are your customers looking for a GPU-powered server that’s air cooled? Tell them about these new Supermicro 10U servers. And let them know that these systems are ready to ship now.


Tech Explainer: What’s new in AMD ROCm 7?


Learn how the AMD ROCm software stack has been updated for the era of AI.


While GPUs have become the digital engines of our increasingly AI-powered lives, controlling them accurately and efficiently can be tricky.

That’s why, in 2016, AMD created ROCm. Pronounced rock-em, it’s a software stack designed to translate the code written by programmers into sets of instructions that AMD GPUs can understand and execute perfectly.

If the GPUs in today’s cutting-edge AI servers are the orchestra, then ROCm is the sheet music being played.

AMD introduced the latest version, ROCm 7.0, earlier this fall. Version 7.0 is designed for the new world of AI.

How ROCm works

ROCm is a platform created by AMD to run programs on its GPUs, including the AI-focused Instinct MI350 Series accelerators. AMD calls the latest version, ROCm 7.0, an AI-ready powerhouse designed for performance, efficiency and productivity.

Providing that kind of capability takes far more than a single piece of software. ROCm is an expansive collection of tools, drivers and libraries.

What’s in the collection? The full ROCm stack contains:

  • Drivers that enable a computer’s operating system to communicate with any installed AMD GPUs.
  • The Heterogeneous Interface for Portability (HIP), a coding system for users to create and run custom GPU programs.
  • Math and AI libraries including specialized tools like deep learning operations, fast math routines, matrix multiplication, and tensor ops. These AI building blocks are pre-built to help developers accelerate production.
  • Compilers that turn code into GPU instructions.
  • System-management tools that developers can use to debug applications and optimize GPU performance.

Help Me, GPU

The latest version of ROCm is purpose-built for generative AI and large-scale AI inferencing and training. Developers rely on GPUs for parallel processing, performing many tasks at once. But a GPU can't execute high-level application code on its own. To achieve the best performance for AI workloads, developers need a software bridge that turns that high-level code into GPU-optimized instructions. That bridge is ROCm.

ROCm lets developers run AI frameworks, including PyTorch, effectively on AMD GPUs. It converts application code into instructions tailored to the hardware. In this way, ROCm helps organizations improve performance, scale workloads across multiple GPUs, and meet increasing demand without sacrificing reliability.
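Here's roughly what that looks like from the developer's chair. On ROCm builds of PyTorch, AMD GPUs are exposed through the same device API that CUDA builds use, so ordinary model code runs unchanged. A minimal sketch; treat the backend check as illustrative:

```python
import torch

# On a ROCm build of PyTorch, AMD GPUs appear behind the familiar
# "cuda" device API; ROCm's HIP runtime handles the translation.
if torch.cuda.is_available():
    backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
    device = torch.device("cuda")
else:
    backend, device = "CPU", torch.device("cpu")

x = torch.randn(4096, 4096, device=device)
y = x @ x    # matmul dispatched to GPU kernels via ROCm's math libraries
print(f"Backend: {backend}, result on {y.device}")
```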
 
For demanding AI workloads such as those using Mixture of Experts (MoE) models, ROCm is essential for execution. MoE models activate only a small group of expert networks for each input, resulting in sparse workloads that are efficient, but hard to schedule. ROCm ensures that GPUs can perform these sparse operations at scale, maintaining high throughput and accuracy across clusters.
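A toy top-k router shows where that sparsity comes from: each token touches only 2 of 8 experts, so the GPU work per input is a small, shifting subset of the full model. This is a minimal sketch, not production MoE code:

```python
import torch
import torch.nn as nn

# Toy Mixture-of-Experts router: each token is sent to only TOP_K of
# NUM_EXPERTS networks, so most experts sit idle for any given input.
NUM_EXPERTS, TOP_K, DIM = 8, 2, 64
experts = nn.ModuleList(nn.Linear(DIM, DIM) for _ in range(NUM_EXPERTS))
router = nn.Linear(DIM, NUM_EXPERTS)

def moe_forward(tokens):                              # tokens: (batch, DIM)
    weights = torch.softmax(router(tokens), dim=-1)   # routing probabilities
    top_w, top_idx = weights.topk(TOP_K, dim=-1)      # keep 2 of 8 experts
    out = torch.zeros_like(tokens)
    for slot in range(TOP_K):
        for e in range(NUM_EXPERTS):
            sel = top_idx[:, slot] == e               # tokens routed to expert e
            if sel.any():                             # sparse: most experts skip
                w = top_w[sel, slot].unsqueeze(1)     # gate weights, (n, 1)
                out[sel] += w * experts[e](tokens[sel])
    return out

print(moe_forward(torch.randn(16, DIM)).shape)        # torch.Size([16, 64])
```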
 
In other words, ROCm provides the tools and runtime to make even the most complex GPU workloads run efficiently. It connects AI developers with the hardware that supports their applications.
 
That’s important. While increased demand is what every enterprise wants, it still brings challenges that leave little room for mistakes.
 
Open Source Power

But wait, there's more. AMD ROCm has another clever trick up its sleeve: open-source integration.

By using popular open-source frameworks, ROCm lets enterprises and developers run large-scale inference workloads more efficiently. This open source approach also empowers the same organizations and developers to break free of proprietary software and vendor-locked ecosystems.

Free from those dependencies, users can scale AI clusters by deploying commodity components instead of being locked into a single vendor’s hardware. Ultimately, that can lead to lower hardware and licensing costs.

This approach also empowers users to customize their AI operations. In this way, AI systems can be developed to better suit the unique requirements of an organization’s applications, environments and end users.

Another Layer

While ROCm serves the larger market, the recent release of AMD’s new Enterprise AI Suite shows the company’s commitment to developing tools specifically for enterprise-class organizations.

AMD says the new suite can take enterprises from bare-metal servers to enterprise-ready AI software in mere minutes.

To accomplish this, the suite provides four additional components: solution blueprints, inference microservices, AI Workbench, and a dedicated resource manager.

These tools are designed to help enterprises better scale their AI workloads, predict costs and capacity, and accelerate time-to-production.

Always Be Developing

Along with these product releases, AMD is being perfectly clear about its focus on AI development. At the company’s recent Financial Analyst Day, AMD CEO Lisa Su explained that over the last five years, the cost of AMD’s AI-related investments and acquisitions has topped $100 billion. That includes building up a staff of some 25,000 engineers.

Looking ahead, Su told financial analysts that AMD’s data-center AI business is on track to draw revenue in the “tens of billions of dollars” by 2027. She also said that over the next three to five years, AMD expects its data-center AI revenue to enjoy a compound annual growth rate (CAGR) of over 80%.

AMD’s roadmap points to updates that will focus on further boosts to performance, productivity and scalability. The company may accomplish these gains by offering more streamlined build and packaging systems, more optimized training and inferencing, and broader hardware support. It’s also reasonable to expect improved virtualization and multi-tenant support.

That said, if you want your speculation about future AI-centric ROCm improvements to be as accurate as possible, your best bet may be to ask an AI chatbot…powered by Supermicro and AMD, of course.


Tech Explainer: What’s liquid cooling? And why might your data center need it now?


Liquid cooling offers big efficiency gains over traditional air. And while there are upfront costs, for data centers with high-performance AI and HPC servers, the savings can be substantial. Learn how it works.


Increasingly resource-intensive AI workloads are creating more demand for advanced data center cooling systems. Today, the most efficient and cost-effective method is liquid cooling.

A liquid-cooled PC or server relies on a liquid rather than air to remove heat from vital components that include CPUs, GPUs and AI accelerators. The heat produced by these components is transferred to a liquid. Then the liquid carries away the heat to where it can be safely dissipated.

Most computers don’t require liquid cooling. That’s because general-use consumer and business machines don’t generate enough heat to justify liquid cooling’s higher upfront costs and additional maintenance.

However, high-performance systems designed for tasks such as gaming, scientific research and AI can often operate better, longer and more efficiently when equipped with liquid cooling.

How Liquid Cooling Works

For the actual coolant, most liquid systems use either water or dielectric fluids. Before water is added to a liquid cooler, it’s demineralized to prevent corrosion and build-up. And to prevent freezing and bacterial growth, the water may also be mixed with a combination of glycol, corrosion inhibitors and biocides.

Thus treated, the coolant is pushed through the system by an electric pump. A single liquid-cooled PC or server will need to include its own pump. But for enterprise data center racks containing multiple servers, the liquid is pumped by what’s known as an in-rack cooling distribution unit (CDU). Then the liquid is distributed to each server via a coolant distribution manifold (CDM).

As the liquid flows through the system, it’s channeled into cold plates that are mounted atop the system’s CPUs, GPUs, DIMM modules, PCIe switches and other heat-producing components. Each cold plate has microchannels through which the liquid flows, absorbing and carrying away each component’s thermal energy.

The next step is to safely dissipate the collected heat. To accomplish this, the liquid is pumped back through the CDU, which sends the now-hot liquid to a mechanism that removes the heat. This is typically done using chillers, cooling towers or heat exchangers.

Finally, the cooled liquid is sent back to the system's heat-producing components to begin the process again.

Liquid Pros & Cons

The most compelling aspect of liquid cooling is its efficiency. Water moves heat up to 25 times better than air while using less energy to do it. In comparison with traditional air, liquid cooling can reduce cooling energy costs by up to 40%.
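A quick back-of-the-envelope comparison makes the point. Using textbook fluid properties, and assuming an illustrative 10 kW heat load with a 10-degree coolant rise, the volume of water required is tiny compared with the volume of air:

```python
# Coolant flow needed to carry a fixed heat load: m_dot = Q / (c_p * dT).
# Textbook fluid properties; the load and temperature rise are illustrative.
Q_W = 10_000                          # 10 kW of server heat
DT_K = 10                             # coolant temperature rise

CP_WATER, RHO_WATER = 4186, 998.0     # J/(kg*K), kg/m^3
CP_AIR, RHO_AIR = 1005, 1.2

water_lpm = Q_W / (CP_WATER * DT_K) / RHO_WATER * 60_000   # liters/minute
air_cfm = Q_W / (CP_AIR * DT_K) / RHO_AIR * 2118.88        # cubic feet/minute
print(f"Same 10 kW load: ~{water_lpm:.0f} L/min of water "
      f"vs ~{air_cfm:.0f} CFM of air")
```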

But there’s more to the efficiency of liquid cooling than just cutting costs. Liquid cooling also enables IT managers to move servers closer together, packing in more power and storage per square foot. Given the high cost of data center real estate, and the fullness of many data centers, that’s an important benefit.

In addition, liquid cooling can better handle the latest high-powered processing components. For instance, Supermicro says its DLC-2 next-generation Direct Liquid-Cooling solutions, introduced in May, can accommodate warmer liquid inflow temperatures while also boosting AI performance per watt.

But liquid cooling systems have their downsides, too. For one, higher upfront costs can present a barrier for entry. Sure, data center operators will realize a lower total cost of ownership (TCO) over the long run. But when deploying a liquid-cooled data center, they must still contend with initial capital expense (CapEx) outlays—and justifying those costs to the CFO.

For another, IT managers might think twice about the additional complexity and risks of a liquid cooling solution. More components and variables mean more things that can go wrong. Data center insurance premiums may rise too, since a liquid cooling system can always spring a leak.

Driving Demand: AI

All that said, the market for liquid cooling systems is primed for serious growth.

As AI workloads become increasingly resource-intensive, IT managers are deploying more powerful servers to keep up with demand. These high-performance machines produce more heat than previous generations. And that creates increased demand for efficient, cost-effective cooling solutions.

How much demand? This year, the data center liquid cooling market is projected to generate global sales of $2.84 billion, according to Markets and Markets.

Looking ahead, the industry watcher expects the global liquid cooling market to reach $21.14 billion by 2032. That rise would represent a compound annual growth rate (CAGR) of about 33% over the forecast period.
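For readers who want to check such projections, CAGR falls out of the two endpoints directly. A one-line sanity check, assuming a 2025-to-2032 window:

```python
# CAGR = (end / start) ** (1 / years) - 1
start, end, years = 2.84, 21.14, 7      # $B, 2025 -> 2032 (assumed window)
print(f"Implied CAGR: {(end / start) ** (1 / years) - 1:.0%}")   # ~33%
```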

Coming Soon: Immersion Cooling

In the near future, AI workloads will likely become even more demanding. This means data centers will need to deploy—and cool—ultra-dense AI server clusters that produce tremendous amounts of heat.

To deal with this extra heat, IT managers may need the next step in data center cooling: immersion.

With immersion cooling, an entire rack of servers is submerged horizontally in a tank filled with what’s known as dielectric fluid. This is a non-conductive liquid that ensures the server’s hardware can operate while submerged, and without short-circuiting.

Immersion cooling is being developed along two paths. The most common variety is called single-phase, and it operates similarly to an aquarium’s water filter. As pumps circulate the dielectric fluid around the servers, the fluid is heated by the server’s components. Then it’s cooled by an external heat exchanger.

The other type of immersion cooling is known as two-phase. Here, the system uses a dielectric fluid engineered to have a relatively low boiling point, around 50 C / 122 F. As this fluid is heated by the immersed server, it boils, creating a vapor that rises to condensers installed at the top of the tank. There the vapor condenses back into cooler liquid, which drips back down into the tank.

This natural convection means there’s no need for electric pumps. It’s a glimpse of a smarter, more efficient liquid future, coming soon to a data center near you.


4 IT events this fall you won’t want to miss


Important IT industry events are coming in October and November--with lots of participation from AMD and Supermicro. 


Summer’s over…somehow it’s already October…and that means it’s time to attend important IT industry conferences, summits and other get-togethers.

Here’s your Performance Intensive Computing preview of four top events coming this month and next.

OCP Global Summit

  • Where & when: San Jose, California; Oct. 13-16, 2025
  • Who it’s for: This event, sponsored by the Open Compute Project (OCP), is for anyone interested in redesigning open source hardware to support the changing demands on compute infrastructure. This year’s theme: “Leading the future of AI.”
  • Who will be there: Speakers this year include Vik Malyala, senior VP of technology and AI at Supermicro; Mark Papermaster, CTO of AMD; Johnson Eung, staff growth product manager in AI at Supermicro; Shane Corban, senior director of technical product management at AMD; and Morris Ruan, director of product management at Supermicro.
     
  • Fun facts: AMD is a Diamond sponsor, and Supermicro is an Emerald sponsor.

~~~~~~~~~~~~~~~~~~~~

AMD AI Developer Day

  • Where & when: San Francisco, Oct. 20, 2025
  • Who it’s for: Developers of artificial intelligence applications and systems. Workshop topics will include developing multi-model, multi-agent systems; generating videos using open source tools; and developing optimized kernels.
  • Who will be there: Speakers will include executives from the University of California, Berkeley; Red Hat AI; Google DeepMind; and OpenAI. Also speaking will be execs from Ollama, an open source platform for AI models; Unsloth AI, an open source AI startup; vLLM, a library for large language model (LLM) inference and serving; and SGLang, an LLM framework.
  • Fun facts:
    • Supermicro is a conference sponsor.
    • During the conference, winners of the AMD Developer Challenge will be announced. The grand prize winner will take home $100,000.
    • AMD, PyTorch and Unsloth AI are co-sponsoring a virtual hackathon, the Synthetic Data AI Agents Challenge, on Oct. 18-20. The first-prize winners will receive $3,000 plus 1,200 hours of GPU credits.

~~~~~~~~~~~~~~~~~~~~

AI Infra Summit

  • Where & when: San Francisco; Nov. 7, 2025
  • Who it’s for: Anyone interested in the convergence of AI innovation and scalable infrastructure. This event is being hosted by Ignite, a go-to-market provider for the technology industry.
  • Who will be there: The speaker lineup is still TBA, but is promised to include enterprise technology leaders, AI and machine learning engineers, cloud and data center architects, venture capital investors, and infrastructure vendors.
  • Fun facts:
    • This is a hybrid event. You can attend either live or online.
    • AMD and Supermicro are Stadium-level sponsors.

~~~~~~~~~~~~~~~~~~~~

SC25

  • Where & when: St. Louis, Missouri; Nov. 16-21, 2025
  • Who it’s for: The global supercomputing community, including those working in high performance computing (HPC), networking, storage and analysis. This year’s theme: “HPC ignites.”
  • Who will be there: The speaker lineup features nearly a dozen AMD executives, including Rob Curtis, a Fellow in Data Center Platform Engineering; Shelby Lockhart, a software system engineer; and Nuwan Jayasena, a Fellow in AMD Research. They and other speakers will appear in panels, paper presentations, workshops, tutorials and more.
     
  • Fun facts: SC25 will feature a series of noncommercial “Birds of a Feather” sessions that allow attendees to openly discuss topics of mutual interest.

 


Vultr, Supermicro, AMD team to offer high-performance cloud compute & AI infrastructure


Vultr, a global provider of cloud services, now offers Supermicro servers powered by AMD Instinct GPUs.


Supermicro servers powered by the latest AMD Instinct GPUs and supported by the AMD ROCm open software ecosystem are at the heart of a global cloud infrastructure program offered by Vultr.

Vultr calls itself a modern hyperscaler, meaning it provides cloud solutions for organizations facing complex AI and HPC workloads, high operational costs, vendor lock-in, and the need for rapid insights.

Launched in 2014, Vultr today offers services from 32 data centers worldwide, which it says can reach 90% of the world’s population in under 40 milliseconds. Vultr’s services include cloud instances, dedicated servers, cloud GPUs, and managed services for database, cloud storage and networking.

Vultr’s customers enjoy benefits that include costs 30% to 50% lower than those of the hyperscalers and 20% to 30% lower than those of other independent cloud providers. These customers—there are over 220,000 of them worldwide—also enjoy Vultr’s full native AI stack of compute, storage and networking.

Vultr is the flagship product of The Constant Co., based in West Palm Beach, Fla. The company was founded by David Aninowsky, an entrepreneur who also started GameServers.com and served as its CEO for 18 years.

Now Vultr counts among its partners AMD, which joined the Vultr Cloud Alliance, a partner program, just a year ago. In addition, AMD’s venture group co-led a funding round this past December that brought Vultr $333 million.

Expanded Data Center

Vultr is now expanding its relationship with Supermicro, in part because Supermicro is first to market with the latest AMD Instinct GPUs. Vultr now offers Supermicro systems powered by AMD Instinct MI355X, MI325X and MI300X GPUs. And as part of the partnership, Supermicro engineers work on-site with Vultr technicians.

Vultr is also relying on Supermicro for scaling. That’s a challenge for large AI implementations, as these configurations require deep expertise for both integration and operations.

Among Vultr’s offerings from Supermicro is a 4U liquid-cooled server (model AS -4126GS-NMR-LCC) with dual AMD EPYC 9005/9004 processors and up to eight AMD GPUs—the user’s choice of either MI325X or MI355X.

Another benefit of the new arrangement is access to AMD’s ROCm open source software environment, which will be made available within Vultr’s composable cloud infrastructure. This AMD-Vultr combo gives users access to thousands of open source, pre-trained AI models & frameworks.

Rockin’ with ROCm

AMD’s latest update to the software is ROCm 7, introduced in July and now live and ready to use. Version 7 offers advancements that include big performance gains, advanced features for scaling AI, and enterprise-ready AI tools.

One big benefit of AMD ROCm is that its open software ecosystem eliminates vendor lock-in. And when integrated with Vultr, ROCm supports AI frameworks that include PyTorch and TensorFlow, enabling flexible, rapid innovation. Further, ROCm future-proofs AI solutions by ensuring compatibility across hardware, promoting adaptability and scalability.

AMD’s roadmap is another attraction for Vultr. AMD products on tap for 2026 include the Instinct MI400 family (the basis of AMD’s rack-scale design codenamed Helios), new EPYC CPUs (Venice) and an 800-Gbit NIC (Vulcano).

In turn, Vultr is big business for AMD. Late last year, a tech blog reported that Vultr’s first shipment of AMD Instinct MI300X GPUs numbered “in the thousands.”


Retail in the Spotlight: Making Shelf Space for AI


Learn how retailers including Amazon, Sephora and Walmart are applying artificial intelligence to deliver real business benefits—and help their shoppers find just the right product.


Retailers are relying more and more on artificial intelligence. And the reason is simple: AI technology can help retailers engage customers, lower operational costs and increase revenue.

Indeed, over 70% of retailers anticipate a significant ROI from AI in the next year, according to accounting firm KPMG.

Their customers approve of AI, too. In a poll conducted earlier this year by vision AI provider Everseen, two out of three consumers said AI makes shopping more convenient.

That’s a true win-win scenario.

Customer-facing

On the retail customer side, AI provides helpful features such as support chatbots and personal shopping assistants. AI can also offer visual search, letting customers upload photos of products they like and find similar items in real time.

AI is also capable of creating personalized recommendations that go far beyond the typical “people who bought X also bought Y” message.
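For context, that “typical” baseline is simple co-occurrence counting, as in the toy sketch below; the engines described next layer purchase history and demographics on top of it:

```python
from collections import Counter
from itertools import permutations

# Toy "people who bought X also bought Y": count item co-occurrence in orders.
orders = [
    {"espresso machine", "grinder", "beans"},
    {"grinder", "beans"},
    {"espresso machine", "beans"},
]
co_counts: dict[str, Counter] = {}
for order in orders:
    for x, y in permutations(order, 2):       # every ordered pair in the basket
        co_counts.setdefault(x, Counter())[y] += 1

print(co_counts["espresso machine"].most_common(1))   # [('beans', 2)]
```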

For example, the AI behind Amazon’s industry-leading recommendation engine takes into account a customer’s shopping habits all the way back to the time they first created an account. Then the engine combines that data with whatever demographic information it can dig up or infer. The result: Customers receive genuinely useful suggestions.

Amazon also has a retail-focused chatbot called Rufus that can answer online shoppers’ questions about products they’re considering but haven’t yet bought. To do this work, the GenAI-powered shopping assistant has been trained on a potent mix of data that includes the entire Amazon catalog, customer reviews, community Q&As and information from the public web.

This lets consumers ask Rufus just about anything. For example, “Are these shoes good for narrow feet?” will get an answer. And so will “Can this sharpener create the 16-degree angle recommended by the maker of my fancy Japanese chef’s knife?”

If you’re looking for a bit more wow factor, consider the Sephora Virtual Artist. This AI-powered virtual try-on feature uses your smartphone’s augmented reality (AR) to show how you’d look with a particular shade of lipstick, eye shadow or other makeup.

Don’t care for one shade? Sephora’s AI will suggest a better one based on your skin tone. Then it will find your color in stock at a store near you—along with complementary foundation, blush and eye liner.

Behind the Scenes

Deploying AI helps retailers save time and money. That’s especially true for those with big warehouses and complex supply chains.

Both Walmart and Amazon employ small armies of AI-enabled robots to zip around their warehouses. These tireless heavy-lifters find what they’re looking for by scanning bar- and QR-codes. Once they locate a product, their robotic arms grab it off even the highest shelf. Then the robots efficiently transport the products to their shipping departments.

These AI-powered robots can also report to other parts of the system, many of which use AI as well. One example is an inventory-control AI module that forecasts demand and makes sure the warehouse stays well-stocked. Another is a bot designed to manage complex supply chains by calculating trends, market prices, availability and shipping times.
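At its core, the demand-forecasting piece often reduces to familiar inventory math. A sketch of a classic reorder-point calculation, with made-up sales figures and an assumed lead time (production systems use far richer models):

```python
import statistics

# Classic reorder-point logic: forecast daily demand, then reorder when
# stock can no longer cover demand over the supplier's lead time.
daily_sales = [42, 38, 51, 47, 40, 55, 44]          # made-up last-week unit sales
avg_demand = statistics.mean(daily_sales)
safety_stock = 2 * statistics.stdev(daily_sales)    # crude buffer for variability
LEAD_TIME_DAYS = 5                                  # assumed supplier lead time

reorder_point = avg_demand * LEAD_TIME_DAYS + safety_stock
print(f"Reorder when stock falls below ~{reorder_point:.0f} units")
```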

Increasingly, retailers rely on AI for marketing too. They use retail bots to keep an eye on customer sentiment and emerging trends by scraping online reviews and social media posts. This information can also help retailers deal with customer-service issues before they get out of hand. And AI systems provide vital market data that retailers can use as they plan and launch new product lines.

Retail Power

Retail AI software is hugely powerful, but the hardware matters too. Deprived of enough power to collect, analyze and act on terabytes of daily data, AI is just reams of pointless code.

So retailers rely on purpose-built retail AI hardware solutions. That includes the Supermicro AS -2115HE-FTNR server.

This retail AI server is powered by 5th gen AMD EPYC processors and has room for up to 6TB of ECC DDR5 memory and four GPUs. Retailers can also configure the system with up to six hot-swappable drives and their choice of air or liquid cooling.

The improved density in Supermicro’s multi-node racks helps retail organizations achieve a lower total cost of ownership by reducing server counts and energy demands.

Retail’s Future

AI is becoming more sophisticated every day. Soon, powerful new features will catalyze a paradigm shift in retail operations.

As agentic AI evolves from a fascinating novelty into a daily mainstay, hyper-personalized, frictionless and predictive online shopping will eventually become the norm. Retail stores will standardize on AI-enabled smart shelves that track inventory, display dynamic pricing and direct shoppers to related items.

Behind the scenes, AI will help retail organizations further cut waste and lower their carbon footprints by better managing inventory and supply chains.

How long will we have to wait for our new AI-powered shopping experience? At the rate things are moving these days, not long at all.


Looking for business benefits from GenAI? Supermicro, AMD & PioVation have your solution


Struggling to deliver business benefits from Generative AI? Supermicro, AMD and PioVation have a new solution that not only works out-of-the-box, but is also highly scalable.


Experimenting with Generative AI can be fun, but CEOs and corporate boards aren’t interested in fun. They want to see real business results—things like an enhanced customer experience, more innovative products, streamlined operations and lower TCO. And they want to see them now.

Getting GenAI to deliver these kinds of business results isn’t easy. A recent report from MIT finds that despite nearly $40 billion worth of enterprise investment in GenAI, 95% of organizations are getting “zero return.”

That estimate is based on solid numbers. The MIT researchers reviewed over 300 AI projects, conducted interviews with more than 50 organizations, and surveyed some 150 senior leaders.

The latest forecasts aren’t much cheerier. Research firm Gartner this summer predicted that by the end of this year, nearly a third of all GenAI projects (30%) will be abandoned after the proof-of-concept stage. Gartner says the projects will be cut due to poor data quality, inadequate risk controls, escalating costs and unclear business value.

“After last year’s hype, executives are impatient to see returns on GenAI investments,” says Gartner analyst Rita Sallam. “Yet organizations are struggling to prove and realize value.”

That’s About to Change

Supermicro, AMD and startup PioVation have partnered to jointly develop a GenAI solution that offers a pre-validated, turnkey infrastructure for deploying large language models (LLMs). The benefits include lower deployment overhead, enhanced observability, and ensured control of sovereign data.

Partner PioVation is a developer of AI platforms for enterprises, government agencies, and small and midsize businesses. Its products can be run either on-premises or in PioVation’s cloud in Munich, Germany. The company, founded in 2024 by former AMD executive Mazda Sabony, has formed partnerships with several companies, including AMD and Supermicro.

The GenAI solution being offered by the three companies has been designed to scale all the way from compact on-prem clusters up to large-scale multi-tenant cloud environments. And its architecture integrates Supermicro rack-level systems, AMD Instinct GPUs, and PioVation’s agentic AI platform, PioSphere. The result, the companies say, is out-of-the box agentic AI at any scale.

Full Stack

The Supermicro-AMD-PioVation offering is a full-stack solution. At its core, an autonomous microservice chains LLM prompts, invokes domain-specific tools, and integrates with your existing systems via REST (an architectural style for distributed hypermedia systems), gRPC (a remote procedure call framework) or event streams. It all runs on the pre-validated Supermicro server powered by AMD Instinct GPUs.

Another feature is the solution’s Model Context Protocol (MCP). It lets agents interact with external tools in a way that’s both modular and composable. The MCP also governs how tools are registered, discovered, invoked and composed dynamically at runtime. This includes input/output serialization, maintaining execution context, and enforcing consistency across tool chains. MCP also enables context-aware tool usage, making every agent interoperable, auditable and enterprise-ready from the start.
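The article doesn’t publish PioSphere’s internals, but the register/discover/invoke pattern it describes can be sketched generically. Every name and schema below is hypothetical illustration, not PioVation’s actual API:

```python
import json

# Hypothetical sketch of MCP-style tool registration and invocation.
# A registry maps tool names to callables plus a declared input schema,
# so agents can discover tools at runtime and invoke them uniformly.
REGISTRY: dict[str, dict] = {}

def register_tool(name: str, schema: dict):
    def wrap(fn):
        REGISTRY[name] = {"fn": fn, "schema": schema}
        return fn
    return wrap

@register_tool("get_invoice", {"invoice_id": "string"})
def get_invoice(invoice_id: str) -> dict:
    return {"invoice_id": invoice_id, "status": "paid"}   # stub backend call

def invoke(name: str, payload: str) -> str:
    tool = REGISTRY[name]                   # discovery + lookup
    args = json.loads(payload)              # input deserialization
    return json.dumps(tool["fn"](**args))   # output serialization

print(sorted(REGISTRY))                     # tools an agent can discover
print(invoke("get_invoice", '{"invoice_id": "INV-17"}'))
```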

The solution is available in three topologies, each designed for different operational scales and use cases:

  • MiniStack: For SMBs, pilots, research and the edge.
  • EdgeCluster: For regulated sites, branches and other locations where high availability is required.
  • Cloud Deployment: For cloud service providers (CSPs), enterprises and AI providers.

All three versions include a unified agent dashboard, role-based access control, and policy enforcement.

Business Benefits

The three partners haven’t forgotten about the need for GenAI to deliver real business results that can keep CEOs and corporate boards happy. To that end, the solution offers benefits that include:

  • Turnkey deployment: PioSphere’s Cloud OS has been prevalidated on the Supermicro platform powered by AMD GPUs.
  • Unified operations stack: A tightly integrated environment eliminates fragmented AI tooling.
  • No-code agent development: A PioVation feature known as AgentStudio lets nontechnical users design, deploy and iterate AI agents using a no-code interface.
  • Sovereign data control: Built-in controls support national and regional compliance frameworks, including Europe’s GDPR and the United States’ HIPAA.
  • Multi-tenant scalability: An organization can create separate, secure environments for different business units or clients, yet they’ll all share a common infrastructure footprint.
  • Integrated LLM operations and agent life-cycle management: Users can integrate any LLM published on the Hugging Face or Kaggle communities with one-click connectors. Other built-in features include RAG (retrieval augmented generation) pipelines and full agent life-cycle tools.
  • Intelligent autoscaling: During workload spikes, the solution’s dynamic autoscaling maintains efficient resource utilization, cost control and seamless performance.

Put it all together, and you have a solution that goes far beyond mere experimentation. The three partners—Supermicro, AMD and PioVation—are serious about helping your GenAI projects deliver serious benefits for the business.


Tech Explainer: What’s a short-depth server?


Do your customers have locations that need server compute power, but lack data centers? Short-depth servers to the rescue!


There are times when a standard-sized server just won’t do. Maybe your customer’s branch office or retail store has space constraints. Maybe they have concerns over portability. Or maybe their sustainability goals demand a solution that requires low power and efficient cooling.

For these and other related situations, short-depth servers can fit the bill. These relatively diminutive boxes are designed for use in less-than-ideal physical spaces that nevertheless demand high-performance IT infrastructure.

What kinds of organizations could benefit from short-depth servers? Consider your local retail store. It’s likely been laid out using a calculus that prioritizes profit per square inch. This means the store’s best spots are dedicated to attracting buyers and generating revenue.

While that’s smart in terms of retail finance, it may not leave much room for vital infrastructure. That includes the servers that power the store’s point of sale (POS), security, advertising and data-collection systems.

This is a case where short-depth servers can help. These systems provide high levels of compute, storage and networking—without needing tall data center racks, elaborate cooling systems or other supporting infrastructure.

Other good candidates for using short-depth servers include remote branch offices, telco edge installations and industrial environments. In other words, any location that needs enterprise-level servers, but is short on space.

Small but Mighty

What’s more, today’s short-depth servers can handle some serious workloads.

Consider, for instance, the Supermicro WIO A+ Server (AS -1115SV-WTNRT), powered by AMD EPYC 8004 series processors. This short-depth server is engineered to tackle a variety of workloads, including virtualization, firewall applications, database, storage, edge and cloud computing.

The WIO A+ ships as a 1U form factor with a depth of just 23.5 inches. Compared with one of Supermicro’s big 8U multi-GPU servers, which has a depth of more than 33 inches, the short-depth server is short indeed.

Yet despite its diminutive size, this Supermicro server is packed with a ton of power—and room to grow. A single AMD EPYC processor sits at the center of the action, aided by either one double-width or two single-width GPUs.

This server also has room for up to 768GB of ECC DDR5 memory. And it can accommodate up to 10 hot-swap drives for NVMe, SAS or SATA storage.

As if that weren’t enough, Supermicro also includes room in this server’s chassis for two PCIe 5.0 x16 full-height, full-length (FHFL) expansion cards. There’s also space for a single PCIe 5.0 x16 low-profile (LP) card.

More Power for Smaller Space

Fitting enough tech into a short-depth server can be a challenge. To do this, Supermicro’s designers had a few tricks up their sleeves.

For one, they used a custom motherboard instead of the more common ATX or EEB types. This creates more space in the smaller chassis. It also lets the designers employ a high-density component layout. The processors, GPUs, drives and other elements are placed closer to each other than they could be in a standard server.

Supermicro’s designers also deployed low-profile heat sinks. These use heat pipes that carry heat toward the fans. To save space, the fans are smaller than usual, but they make up the difference by spinning faster. Sure, faster fans can create more noise. But that’s a worthy trade-off to avoid system failure due to overheating.

Are there downsides to the smaller form factor? There can be. For one, constrained airflow could force a system to throttle both processor and GPU performance in an effort to prevent heat-related issues. This could be an issue when running highly resource-intensive VM workloads.

For another, the smaller power supply units (PSUs) used in many short-depth servers may necessitate a less-powerful configuration than a user might prefer. For example, Supermicro’s short-depth server includes two 860-watt power supplies. That’s far less available power than the company’s multi-GPU powerhouse, which comes with six 5,250-watt PSUs. Of course, from another perspective, the need for less power can be seen as a benefit, especially at remote edge locations.

Short-depth servers represent a useful trade-off. While they give up some power and expandability, their reduced sizes can help IT pros make the most of tight spaces.


How Supermicro/AMD servers boost AI performance with MangoBoost


Supermicro and MangoBoost are together delivering an optimized end-to-end GenAI stack. It’s based on Supermicro servers powered by AMD Instinct GPUs and running MangoBoost’s LLMBoost software.


Many organizations implementing AI for business are discovering that deploying and operating large language models (LLMs) at scale isn’t easy.

They’re finding that the hardware demands are intense. And so are the performance and cost trade-offs. Also, with AI workloads increasingly demanding multi-node GPU clusters, orchestration and tuning can be complex.

To address these challenges, Supermicro and MangoBoost Inc. are working together to deliver an optimized end-to-end GenAI stack. They’ve combined Supermicro’s robust AMD Instinct GPU server portfolio with MangoBoost’s LLMBoost software.

Meet MangoBoost

If you’re unfamiliar with MangoBoost, the company offers programmable solutions that improve data-center application performance while lowering CPU overhead. MangoBoost was founded three years ago; today it operates in the United States, Canada and South Korea.

MangoBoost’s core product is called the Data Processing Unit. It ensures full compatibility with general-purpose GPUs, accelerators and storage devices, enabling cost-efficient and standardized AI infrastructures.

MangoBoost also offers a ready-to-deploy, full-stack AI inference server. Known as Mango LLMBoost, it’s available from the Big Three cloud providers—AWS, Microsoft Azure and Google Cloud.

LLMBoost helps organizations accelerate both training and deploying LLMs at scale. Why is this so challenging? Because once a model is ready for inference, developers face what’s known as a “productization tax.”

Integrating the machine-learning processing pipeline into the rest of the application often requires additional time and engineering effort. And this can lead to delays.

Mango LLMBoost addresses these challenges by creating an easy-to-use container. This lets LLM experts optimize their models, then select suitable GPUs on demand.

MangoBoost’s inference engine uses three forms of GPU parallelism, allowing GPUs to balance their compute, memory and network-resource usage. In addition, the software’s intelligent job scheduling optimizes cluster-wide GPU resources, ensuring that the load is balanced equally across GPU nodes.

LLMBoost also ensures the effective use of low-latency GPU caches and high-bandwidth memory through quantization. This reduces the data footprint, but without lowering accuracy.
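Quantization here means storing tensor values at lower precision and rescaling on the fly. A minimal affine int8 sketch with toy numbers (the article doesn’t specify LLMBoost’s actual scheme):

```python
import numpy as np

# Toy affine int8 quantization: store 1 byte per value instead of 2-4,
# shrinking the memory footprint while approximately preserving values.
x = np.array([0.02, -1.37, 0.88, 2.41], dtype=np.float32)

scale = (x.max() - x.min()) / 255.0
zero_point = np.round(-x.min() / scale)
q = np.clip(np.round(x / scale + zero_point), 0, 255).astype(np.uint8)

x_hat = (q.astype(np.float32) - zero_point) * scale   # dequantize for compute
print(q)        # [ 93   0 151 255]
print(x_hat)    # close to x; per-element error bounded by about scale/2
```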

Complementing Hardware

MangoBoost’s LLMBoost software complements the powerful hardware with a full-stack, production-ready AI MLOps platform. It includes:

  • Plug-and-play deployment: Pre-built Docker images and an intuitive command-line interface (CLI) both help developers to launch LLM workloads quickly.
  • OpenAI-compatible API: Lets developers integrate LLM endpoints with minimal code changes, as shown in the sketch after this list.
  • Kubernetes-native orchestration: Provides automated deployment and management of autoscaling, load balancing and job scheduling for seamless operation across both single- and multi-node clusters.
  • Full-stack performance auto-tuning: Unlike conventional auto-tuners that handle model hyper-parameters only, LLMBoost optimizes every layer from the inference and training back-ends to network configurations and GPU runtime parameters. This ensures maximum hardware utilization, yet without requiring any manual tuning.
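Because the endpoint speaks the OpenAI wire format, existing client code only needs its base URL redirected. A sketch using the standard openai Python client; the host, port, model name and key are placeholders, not documented LLMBoost values:

```python
from openai import OpenAI

# Point the standard OpenAI client at a local OpenAI-compatible endpoint.
# Host, port, model name and API key below are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="llama-3-8b-instruct",   # whatever model the server is serving
    messages=[{"role": "user", "content": "Summarize our Q3 sales call."}],
)
print(resp.choices[0].message.content)
```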

Proof of Performance

A Supermicro and MangoBoost collaboration delivering an optimized end-to-end GenAI stack sounds good on paper. But how does the combined solution actually perform?

To find out, Supermicro, AMD and MangoBoost recently tested their combined solution using real-world GenAI workloads. Here are the results:

  • LLMBoost reduced training time by 40% for two-node training, down to 13.3 minutes on a dual-node AMD Instinct MI325X configuration. The training ran Llama 2 70B, an LLM with 70 billion parameters, with LoRA (low-rank adaptation) fine-tuning.
  • LLMBoost achieved 1.96X higher throughput for multi-node inference on Supermicro AMD servers: over 61,000 tokens/sec. on a dual-node AMD Instinct MI325X configuration.
  • In-house LLM inference with Llama 4 Maverick and Scout models achieved near-linear scaling on AMD Instinct MI325X nodes. (Maverick is designed for fast responses at low cost; Scout, for long-document analysis.) This shows that Supermicro systems are ready for real-time GenAI deployment.
  • Load balancing: The researchers ran LLaVA, a vision-language model, on three setups. The heterogeneous dual-node configuration—eight AMD Instinct MI300X GPUs plus eight AMD Instinct MI325X GPUs—achieved 96% of the combined throughput of the individual single-node runs. This demonstrates minimal overhead and high efficiency.

Are your customers looking for a turnkey GenAI cluster solution that’s high-performance, flexible and easy to operate? Then tell them that Supermicro, AMD and MangoBoost have their solution—and the proof that it works.


Need AI for financial services? Supermicro and AMD have your solution


Financial services companies are making big investments in AI. To speed their time to leadership, Supermicro and AMD are partnering to deliver advanced computing systems.


Financial services companies earn their keep by investing in stocks, bonds and other financial instruments. Now these companies are also making big investments in artificial intelligence technology.

To help these financial services industry (FSI) players adopt AI, Supermicro and AMD are working together. The two are partnering to offer advanced computing solutions designed to empower and speed the finance industry’s move to technology and business leadership.

FSI companies can use these systems to:

  • Detect risks faster, uncovering patterns and anomalies by ingesting ever-larger data sets
  • Supercharge trading with AI in both the front- and back-office
  • Modernize core processes to lower costs while boosting resilience
  • Engage and delight customers by meeting—even exceeding—their expectations

Big Spenders

Already, FSI spending on AI technology is substantial. Last year, when management consulting firm Bain & Co. surveyed nearly 110 U.S. FSI firms, it found that those respondents with annual revenue of at least $5 billion were spending an average of $221 million on AI.

The companies were getting a good return on AI, too. Bain found that 75% of financial services companies said their generative AI initiatives were either achieving or exceeding their expected value. In addition, the GenAI users reported an average productivity gain across all uses of an impressive 20%.

Based on those findings, Bain estimates that by embracing AI, FSI firms can reduce their customer-service costs by 20% to 30% while increasing their revenue by about 5%. 

Electric Companies

One big issue facing all users of AI is meeting the technology’s energy needs. Power consumption is a big-ticket item, accounting for about 40% of all data center costs, according to professional services firm Deloitte.

Greater AI adoption could push that even higher. Deloitte believes global data center electricity consumption could double by as soon as 2030, driven by big increases in GenAI training and inference.

As Deloitte points out, some of that will be the result of new hardware requirements. While general-purpose data center CPUs typically run at 150 to 200 watts per chip, the GPUs used for AI run at up to 1,200 watts per chip.

This can also increase the power demand per rack. As of early 2024, data centers typically supported rack power requirements of at least 20 kilowatts, Deloitte says. But with growth of GenAI, that’s expected to reach 50 kilowatts per rack by 2027.

That growth is almost sure to come. Market watcher Grand View Research expects the global market for data-center GPUs across all industries to rise over the next eight years at a compound annual growth rate (CAGR) of nearly 36%. That translates into data-center GPU sales leaping from $14.48 billion worldwide last year to $190.1 billion in 2033, Grand View predicts.

Partner Power

FSI companies don’t have to meet these challenges alone. Supermicro and AMD have partnered to deliver advanced computing systems that deliver high levels of compute performance and flexibility, yet with a comparatively low total cost of ownership (TCO).

They’re boosting performance with high-performing, dense 4U servers using the latest AMD EPYC CPUs and AMD Instinct GPUs. Some of these servers offer up to 60 storage drive bays, 9TB of DDR5 RAM and 192 CPU cores.

For AI workloads, AMD offers the AMD EPYC 9575F AI host node. It has 64 cores and a maximum boost frequency of up to 5 GHz.

Flexibility is another benefit. Supermicro offers modular Datacenter Building Block Solutions. These include system-level units that have been pre-validated to ease the task of data-center design, among other offerings.

AMD and Supermicro are also offering efficiencies that lower the cost of transforming with AI. Supermicro’s liquid cooling slashes the total cost of ownership (TCO). AMD processors are designed for power efficiency. And Supermicro’s multi-node design gives you more processing capability per rack.

Are you working with FSI customers looking to lead the way with AI investments? The latest Supermicro servers powered by AMD CPUs and GPUs have your back.

