
Performance Intensive Computing

Capture the full potential of IT

Tech Explainer: What’s new in AMD ROCm 7?


Learn how the AMD ROCm software stack has been updated for the era of AI.


While GPUs have become the digital engines of our increasingly AI-powered lives, controlling them accurately and efficiently can be tricky.

That’s why, in 2016, AMD created ROCm. Pronounced rock-em, it’s a software stack designed to translate the code written by programmers into sets of instructions that AMD GPUs can understand and execute perfectly.

If the GPUs in today’s cutting-edge AI servers are the orchestra, then ROCm is the sheet music being played.

AMD introduced the latest version, ROCm 7.0, earlier this fall. Version 7.0 is designed for the new world of AI.

How ROCm works

ROCm is the platform AMD created to run programs on its GPUs, including the AI-focused Instinct MI350 Series accelerators. AMD calls the latest version, ROCm 7.0, an AI-ready powerhouse designed for performance, efficiency and productivity.

Providing that kind of capability takes far more than a single piece of software. ROCm is an expansive collection of tools, drivers and libraries.

What’s in the collection? The full ROCm stack contains:

  • Drivers that enable a computer’s operating system to communicate with any installed AMD GPUs.
  • The Heterogeneous Interface for Portability (HIP), a coding system for users to create and run custom GPU programs.
  • Math and AI libraries, including specialized tools for deep learning operations, fast math routines, matrix multiplication and tensor ops (rocBLAS for linear algebra and MIOpen for deep learning are two examples). These AI building blocks come pre-built to help developers get to production faster.
  • Compilers that turn code into GPU instructions.
  • System-management tools that developers can use to debug applications and optimize GPU performance.

Help Me, GPU

The latest version of ROCm is purpose-built for generative AI and large-scale AI inferencing and training. Developers rely on GPUs for parallel processing, performing many tasks at once, but a GPU can't execute high-level application code on its own. To achieve the best performance for AI workloads, developers need a software bridge that turns their high-level code into GPU-optimized instructions. That bridge is ROCm.

ROCm lets developers run AI frameworks such as PyTorch effectively on AMD GPUs, converting application code into instructions designed for the hardware. In this way, ROCm helps organizations improve performance, scale workloads across multiple GPUs, and meet increasing demand without sacrificing reliability.
 
For demanding AI workloads such as those using Mixture of Experts (MoE) models, ROCm is essential for execution. MoE models activate only a small group of expert networks for each input, resulting in sparse workloads that are efficient, but hard to schedule. ROCm ensures that GPUs can perform these sparse operations at scale, maintaining high throughput and accuracy across clusters.
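The sparse routing that makes MoE models tricky to schedule can be illustrated with a toy example. This pure-Python sketch (the expert functions and gate scores are made up for illustration; they are not part of ROCm or any real framework) picks the top-k experts per input and runs only those, leaving the rest idle:

```python
import heapq

def route_top_k(gate_scores, k=2):
    """Indices of the k highest-scoring experts for one input."""
    return heapq.nlargest(k, range(len(gate_scores)), key=gate_scores.__getitem__)

def moe_forward(x, gate_scores, experts, k=2):
    """Run only the top-k experts, blending outputs by normalized gate weight."""
    chosen = route_top_k(gate_scores, k)
    total = sum(gate_scores[i] for i in chosen)
    # Only k experts execute; the rest stay idle -- the "sparse" part
    # that makes these workloads efficient but hard to schedule.
    return sum(gate_scores[i] / total * experts[i](x) for i in chosen)

# Eight toy "experts", each just a scaling function (illustrative only).
experts = [lambda x, s=s: s * x for s in range(1, 9)]
gates = [0.05, 0.1, 0.02, 0.4, 0.03, 0.3, 0.06, 0.04]
y = moe_forward(10.0, gates, experts, k=2)  # experts 3 and 5 fire; others idle
```

In production, the gating network is learned and each expert is a large neural sub-network dispatched across GPUs; ROCm's job is keeping those sparse, uneven dispatches fast at cluster scale.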
 
In other words, ROCm provides the tools and runtime to make even the most complex GPU workloads run efficiently. It connects AI developers with the hardware that supports their applications.
 
That’s important. While increased demand is what every enterprise wants, it still brings challenges that leave little room for mistakes.
 
Open Source Power

But wait, there's more. AMD ROCm has another clever trick up its sleeve: open-source integration.

By using popular open-source frameworks, ROCm lets enterprises and developers run large-scale inference workloads more efficiently. This open source approach also empowers the same organizations and developers to break free of proprietary software and vendor-locked ecosystems.

Free from those dependencies, users can scale AI clusters by deploying commodity components instead of being locked into a single vendor’s hardware. Ultimately, that can lead to lower hardware and licensing costs.

This approach also empowers users to customize their AI operations. In this way, AI systems can be developed to better suit the unique requirements of an organization’s applications, environments and end users.

Another Layer

While ROCm serves the larger market, the recent release of AMD’s new Enterprise AI Suite shows the company’s commitment to developing tools specifically for enterprise-class organizations.

AMD says the new suite can take enterprises from bare metal server to enterprise-ready AI software in mere minutes.

To accomplish this, the suite provides four additional components: solution blueprints, inference microservices, AI Workbench, and a dedicated resource manager.

These tools are designed to help enterprises better scale their AI workloads, predict costs and capacity, and accelerate time-to-production.

Always Be Developing

Along with these product releases, AMD is being perfectly clear about its focus on AI development. At the company’s recent Financial Analyst Day, AMD CEO Lisa Su explained that over the last five years, the cost of AMD’s AI-related investments and acquisitions has topped $100 billion. That includes building up a staff of some 25,000 engineers.

Looking ahead, Su told financial analysts that AMD’s data-center AI business is on track to draw revenue in the “tens of billions of dollars” by 2027. She also said that over the next three to five years, AMD expects its data-center AI revenue to enjoy a compound annual growth rate (CAGR) of over 80%.

AMD’s roadmap points to updates that will focus on further boosts to performance, productivity and scalability. The company may accomplish these gains by offering more streamlined build and packaging systems, more optimized training and inferencing, and broader hardware support. It’s also reasonable to expect improved virtualization and multi-tenant support.

That said, if you want your speculation about future AI-centric ROCm improvements to be as accurate as possible, your best bet may be to ask an AI chatbot…powered by Supermicro and AMD, of course.



Research Roundup: IT budgets, server sales, IoT analytics, AI at work


Catch up on the latest intelligence from leading IT market watchers and pollsters.


Global IT spending is on track to top $6 trillion next year. Server sales in this year’s second quarter nearly doubled. IoT is gaining real-time analytics. And more people say they use AI at work.

That’s the latest from leading IT market watchers and pollsters. And here’s your Research Roundup.

IT Spending Forecast

IT spending worldwide will rise by nearly 10% next year, topping $6 trillion for the first time, predicts Gartner.

The industry watcher says a big driver of spending is the rise of Generative AI. It’s bringing new features and functionality, and these cost more money, Gartner says. It predicts global spending on software will total $1.43 trillion in 2026 — that's 15% higher than it will be this year.

Other fast-growing sectors, Gartner predicts, will be data center systems (with 2026 spending growth projected at 19%), IT services (8.7%) and devices (6.8%).

But IT buyers aren’t waiting for 2026. “A significant budget flush is anticipated before the end of the year,” says Gartner analyst John-David Lovelock.

Q2 Server Sales Nearly Doubled

In the second quarter, global spending on servers nearly doubled, rising by 97.3% year-on-year, according to IDC. The market watcher attributes this rise to what it calls a “mass deployment” of GPUs.

Server sales by units were also strong. In Q2, they rose by nearly 16% from the year-ago quarter, IDC says.

Buyers of servers generally fall into one of two camps: either large cloud service providers (CSPs) or end-user organizations. Looking ahead, IDC expects CSPs to continue expanding their infrastructure through at least 2029. End users, by contrast, will work to balance spending between their on-premises deployments and cloud services they buy from others.

Looking ahead to the next five years, IDC forecasts global server sales rising by a compound annual growth rate (CAGR) of nearly 29%. Shorter term, IDC predicts global server sales will rise from $455.4 billion this year to $565.9 billion next year, a one-year increase of just over 24%.

IoT Going Real-Time

For Internet of Things (IoT) deployments, the dominant technology priority is real-time analytics.

So finds a survey conducted by market watcher Omdia that reached over 600 enterprises in 10 countries. Fully 82% of organizations surveyed said they either use real-time data processing capabilities now or plan to soon.

“Strong adoption of 5G and edge computing are laying the groundwork for real-time analytics,” says John Canali, an Omdia market analyst. “We’re seeing IoT evolve from simple data collection to process automation.”

All this IoT creates a lot of data. To process it all, over 75% of enterprises are supplementing their IoT systems with additional services such as AI and machine learning, Omdia finds. Their larger goal: Transforming business operations from reactive to predictive.

Will there be a payoff, and if so, how quick? Nearly all respondents to the Omdia survey (95%) said they expect to see measurable benefits from IoT within two years.

More People Using AI at Work

Artificial intelligence is slowly but surely working its way into real life, finds a survey by the Pew Research Center. The survey finds that roughly one in five U.S. workers (21%) now use AI in their jobs. That’s up from 16% a year ago.

Pew conducted the survey in September, connecting with respondents both online and by phone. Responses were collected from 8,750 randomly selected U.S. adults.

As the survey shows, some things about AI haven’t changed. A year ago, Pew found that a tiny 2% of U.S. adults were doing all or most of their work with AI. This year it was also 2%.

Similarly, last year about two-thirds of Pew’s respondents (65%) said they don’t use AI much or at all while on the job. That response rate was also unchanged this time.

One factor has changed: Fewer people admit to a lack of AI knowledge. A year ago, 17% of U.S. adults surveyed by Pew said they had not heard or read much about AI. This year, that group shrank to 12%.

Another change: More people think at least some parts of their job could be done by AI. A year ago, 31% of respondents agreed with that statement; this year, 36% did.

 


Tech Explainer: What’s liquid cooling? And why might your data center need it now?


Liquid cooling offers big efficiency gains over traditional air. And while there are upfront costs, for data centers with high-performance AI and HPC servers, the savings can be substantial. Learn how it works.


Increasingly resource-intensive AI workloads are creating more demand for advanced data center cooling systems. Today, the most efficient and cost-effective method is liquid cooling.

A liquid-cooled PC or server relies on a liquid rather than air to remove heat from vital components that include CPUs, GPUs and AI accelerators. The heat produced by these components is transferred to a liquid. Then the liquid carries away the heat to where it can be safely dissipated.

Most computers don’t require liquid cooling. That’s because general-use consumer and business machines don’t generate enough heat to justify liquid cooling’s higher upfront costs and additional maintenance.

However, high-performance systems designed for tasks such as gaming, scientific research and AI can often operate better, longer and more efficiently when equipped with liquid cooling.

How Liquid Cooling Works

For the actual coolant, most liquid systems use either water or dielectric fluids. Before water is added to a liquid cooler, it’s demineralized to prevent corrosion and build-up. And to prevent freezing and bacterial growth, the water may also be mixed with a combination of glycol, corrosion inhibitors and biocides.

Thus treated, the coolant is pushed through the system by an electric pump. A single liquid-cooled PC or server will need to include its own pump. But for enterprise data center racks containing multiple servers, the liquid is pumped by what’s known as an in-rack cooling distribution unit (CDU). Then the liquid is distributed to each server via a coolant distribution manifold (CDM).

As the liquid flows through the system, it’s channeled into cold plates mounted atop the system’s CPUs, GPUs, DIMMs, PCIe switches and other heat-producing components. Each cold plate has microchannels through which the liquid flows, absorbing and carrying away each component’s thermal energy.

The next step is to safely dissipate the collected heat. To accomplish this, the liquid is pumped back through the CDU, which sends the now-hot liquid to a mechanism that removes the heat. This is typically done using chillers, cooling towers or heat exchangers.

Finally, the cooled liquid is sent back to the systems’ heat-producing components to begin the process again.
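Underneath, the loop described above is a simple energy balance: the coolant must carry heat away as fast as the components produce it. A back-of-the-envelope Python sketch (the 100 kW rack load and 10 C temperature rise are assumed figures for illustration, not from the article):

```python
def required_flow_kg_per_s(heat_w, delta_t_c, specific_heat=4186.0):
    """Coolant mass flow needed to absorb heat_w watts with a
    delta_t_c rise between cold-plate inlet and outlet.
    Default specific heat is water's, about 4186 J/(kg*K)."""
    return heat_w / (specific_heat * delta_t_c)

# Assumed example: a 100 kW rack and a 10 C coolant temperature rise.
water_flow = required_flow_kg_per_s(100_000, 10)  # roughly 2.4 kg/s of water

# Air's specific heat is only ~1005 J/(kg*K), so matching the same load
# takes about 4x the mass flow -- and an enormously larger volume, since
# air is roughly 800x less dense than water.
air_flow = required_flow_kg_per_s(100_000, 10, specific_heat=1005.0)
```

The takeaway: a couple of kilograms per second of treated water handles a load that would demand several times the mass flow, and a vastly larger volume, of air.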

 

Liquid Pros & Cons

The most compelling aspect of liquid cooling is its efficiency. Water moves heat up to 25 times better than air while using less energy to do it. Compared with traditional air cooling, liquid cooling can reduce cooling energy costs by up to 40%.

But there’s more to the efficiency of liquid cooling than just cutting costs. Liquid cooling also enables IT managers to move servers closer together, packing in more power and storage per square foot. Given the high cost of data center real estate, and the fullness of many data centers, that’s an important benefit.

In addition, liquid cooling can better handle the latest high-powered processing components. For instance, Supermicro says its DLC-2 next-generation Direct Liquid-Cooling solutions, introduced in May, can accommodate warmer liquid inflow temperatures while also improving AI performance per watt.

But liquid cooling systems have their downsides, too. For one, higher upfront costs can present a barrier to entry. Sure, data center operators will realize a lower total cost of ownership (TCO) over the long run. But when deploying a liquid-cooled data center, they must still contend with initial capital expense (CapEx) outlays, and with justifying those costs to the CFO.

For another, IT managers might think twice about the additional complexity and risks of a liquid cooling solution. More components and variables mean more things that can go wrong. Data center insurance premiums may rise too, since a liquid cooling system can always spring a leak.

Driving Demand: AI

All that said, the market for liquid cooling systems is primed for serious growth.

As AI workloads become increasingly resource-intensive, IT managers are deploying more powerful servers to keep up with demand. These high-performance machines produce more heat than previous generations. And that creates increased demand for efficient, cost-effective cooling solutions.

How much demand? This year, the data center liquid cooling market is projected to drive global sales of $2.84 billion, according to Markets and Markets.

Looking ahead, the industry watcher expects the global liquid cooling market to reach $21.14 billion by 2032. If that happens, the rise will represent a compound annual growth rate (CAGR) over the projected period of 33%.
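That figure is easy to sanity-check from the two dollar amounts, since CAGR is just the geometric annual growth rate between the endpoints (here, seven annual compounding steps from 2025 to 2032):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values `years` apart."""
    return (end_value / start_value) ** (1 / years) - 1

# Markets and Markets figures cited above: $2.84B (2025) to $21.14B (2032).
growth = cagr(2.84, 21.14, 2032 - 2025)  # about 0.33, i.e. roughly 33% per year
```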

Coming Soon: Immersive Cooling

In the near future, AI workloads will likely become even more demanding. This means data centers will need to deploy—and cool—ultra-dense AI server clusters that produce tremendous amounts of heat.

To deal with this extra heat, IT managers may need the next step in data center cooling: immersion.

With immersion cooling, an entire rack of servers is submerged horizontally in a tank filled with what’s known as dielectric fluid. This is a non-conductive liquid that ensures the server’s hardware can operate while submerged, and without short-circuiting.

Immersion cooling is being developed along two paths. The most common variety is called single-phase, and it operates similarly to an aquarium’s water filter. As pumps circulate the dielectric fluid around the servers, the fluid is heated by the server’s components. Then it’s cooled by an external heat exchanger.

The other type of immersion cooling is known as two-phase. Here, the tank is filled with a dielectric fluid engineered to have a relatively low boiling point, around 50 C / 122 F. As this fluid is heated by the immersed servers, it boils, creating a vapor that rises to condensers installed at the top of the tank. There the vapor condenses back into cooler liquid, which drips back down into the tank.

This natural convection means there’s no need for electric pumps. It’s a glimpse of a smarter, more efficient liquid future, coming soon to a data center near you.
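The physics behind that efficiency is the phase change itself: boiling a liquid absorbs far more heat per kilogram than merely warming it. A rough Python comparison (the 100 kJ/kg latent-heat figure is an assumed, order-of-magnitude value for engineered dielectric fluids; a real deployment would use the fluid's datasheet number):

```python
def sensible_heat_j(mass_kg, delta_t_c, specific_heat=4186.0):
    """Heat absorbed by warming a liquid without boiling it (default: water)."""
    return mass_kg * specific_heat * delta_t_c

def latent_heat_j(mass_kg, heat_of_vaporization=100_000.0):
    """Heat absorbed when the liquid boils off. The 100 kJ/kg default is an
    assumed ballpark for engineered dielectric fluids, not a datasheet value."""
    return mass_kg * heat_of_vaporization

single_phase = sensible_heat_j(1.0, 10)  # 1 kg warmed by 10 C: ~42 kJ
two_phase = latent_heat_j(1.0)           # 1 kg boiled off: ~100 kJ
```

Under these assumptions, each kilogram of boiling fluid soaks up more than twice the heat of the same kilogram merely warmed by 10 degrees, with no pump energy spent moving it.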



Retail AI at the edge: Now here from Supermicro, AMD & Wobot.ai


Retailers can now use AI to analyze in-store videos, thanks to a new system from Supermicro, AMD and Wobot.ai.


Artificial intelligence is being adapted for specific industry verticals. That now includes retail.

Supermicro, AMD and Wobot Intelligence Inc., a video intelligence supplier, are partnering to provide retailers with a short-depth server they can use to drive AI-powered analysis of their in-store videos. With these analyses, retailers can improve store operations, elevate the customer experience and boost sales.

The new server system was recently showcased by the three partners at NRF Europe 2025, an international conference for retailers. This year’s NRF Europe was held in Paris, France, in mid-September.

The new retail system is based on a Supermicro 1U server, model AS-1115S-FWTRT. It’s a short-depth front I/O system powered by a single AMD EPYC 8004 processor.

The server’s other features include dual 10G ports, dual 2.5-inch drive bays, up to 768GB of DDR5 memory, and an 800W redundant platinum power supply. This server is air-cooled by as many as six heavy-duty fans, and it supports a pair of single-width GPUs.

Good to Go

The retail system’s video-analysis software, provided by Wobot.ai, features a single dashboard, performance benchmarking, and easy installation and configuration. It’s designed to work with a user’s existing CCTV setup.

The company’s WoConnect app helps users connect digital video recorders (DVRs) and network video recorders (NVRs) in their private network to their Wobot.ai account. The app routes the user’s camera feeds to the AI.

Target use cases for retailers include store operations, loss prevention and compliance, customer behavior and footfall analysis.

More specifically, retailers can use the system to conduct video analyses that include:

  • Zone-based analytics: Which areas of the store draw the most attention? Which products draw interaction? How do customers move through the store?
  • Heat maps and event tracking: Visualize “crowd magnets” to improve future sales.
  • Customer-path analysis: Observe which sections of the store customers explore the most, and also see where they linger.
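At its core, zone-based analytics is an aggregation over per-frame detections. This toy Python sketch (the zone names, coordinates and detection feed are invented for illustration; Wobot.ai's actual pipeline is proprietary and far more sophisticated) maps detected positions to zones and tallies dwell:

```python
from collections import Counter

# Hypothetical store zones as axis-aligned rectangles: (x0, y0, x1, y1).
ZONES = {
    "entrance": (0, 0, 10, 5),
    "produce":  (0, 5, 10, 15),
    "checkout": (10, 0, 20, 5),
}

def zone_for(x, y):
    """Map a detected (x, y) position to a zone name, or None."""
    for name, (x0, y0, x1, y1) in ZONES.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def dwell_counts(detections):
    """Tally detections per zone; at one detection per second of video,
    counts approximate dwell time in seconds."""
    counts = Counter()
    for x, y in detections:
        zone = zone_for(x, y)
        if zone is not None:
            counts[zone] += 1
    return counts

# Example feed: a shopper enters, lingers in produce, then checks out.
feed = [(2, 1), (3, 2), (4, 8), (5, 9), (5, 10), (6, 9), (14, 2)]
heat = dwell_counts(feed)  # "produce" is the crowd magnet here
```

Heat maps and customer-path analysis build on the same primitive: once every detection is assigned a zone and a timestamp, lingering, movement and crowding all fall out as simple aggregations.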

Using the system, retailers can enjoy a long list of benefits that include accelerated checkout processes, fewer customer walkaways, fine-tuned staffing levels, and improved product placement.

For example, a chain of juice bars with nearly 145 locations in California turned to Wobot.ai for help speeding customer service and improving employee productivity. Based on its video analyses, the retailer worked with Wobot.ai to design a pilot program for 10 stores. In just three months, the pilot delivered an annualized revenue lift of 2% to 2.5% in the test stores.

Wobot.ai also offers its video intelligence systems to other verticals, including hospitality, food service and security.

Edgy

One important feature of the new server is that it allows retailers to run real-time AI-powered video analysis at the edge. The Supermicro server is housed in a short-depth form factor, meaning it can be run in retail sites that lack a full-fledged data center.

Similarly, the system’s AMD EPYC 8004 processor has been optimized for power efficiency—important for installations at the edge. Featuring up to 64 ‘Zen4c’ dense cores, this AMD processor is specifically designed for intelligent edge and communications workloads.

By processing the AI analysis on-premises, the new system also offers low latency and a high level of privacy. Wobot.ai says its software can scale across thousands of locations.

And the software is designed to be integrated easily with retailers’ existing camera infrastructure. In this way, it offers fast time-to-value and a quick return on investment.

Do you have retail customers looking for an edge—with AI at the edge? Tell them about this new retail solution today.



4 IT events this fall you won’t want to miss


Important IT industry events are coming in October and November, with lots of participation from AMD and Supermicro.


Summer’s over…somehow it’s already October…and that means it’s time to attend important IT industry conferences, summits and other get-togethers.

Here’s your Performance Intensive Computing preview of four top events coming this month and next.

OCP Global Summit

  • Where & when: San Jose, California; Oct. 13-16, 2025
  • Who it’s for: This event, sponsored by the Open Compute Project (OCP), is for anyone interested in redesigning open source hardware to support the changing demands on compute infrastructure. This year’s theme: “Leading the future of AI.”
  • Who will be there: Speakers this year include Vik Malyala, senior VP of technology and AI at Supermicro; Mark Papermaster, CTO of AMD; Johnson Eung, staff growth product manager in AI at Supermicro; Shane Corban, senior director of technical product management at AMD; and Morris Ruan, director of product management at Supermicro.
     
  • Fun facts: AMD is a Diamond sponsor, and Supermicro is an Emerald sponsor.

~~~~~~~~~~~~~~~~~~~~

AMD AI Developer Day

  • Where & when: San Francisco, Oct. 20, 2025
  • Who it’s for: Developers of artificial intelligence applications and systems. Workshop topics will include developing multi-model, multi-agent systems; generating videos using open source tools; and developing optimized kernels.
  • Who will be there: Speakers will include executives from the University of California, Berkeley; Red Hat AI; Google DeepMind; and OpenAI. Also speaking will be execs from Ollama, an open source platform for AI models; Unsloth AI, an open source AI startup; vLLM, a library for large language model (LLM) inference and serving; and SGLang, an LLM framework.
  • Fun facts:
    • Supermicro is a conference sponsor.
    • During the conference, winners of the AMD Developer Challenge will be announced. The grand prize winner will take home $100,000.
    • AMD, PyTorch and Unsloth AI are co-sponsoring a virtual hackathon, the Synthetic Data AI Agents Challenge, on Oct. 18-20. The first-prize winners will receive $3,000 plus 1,200 hours of GPU credits.

~~~~~~~~~~~~~~~~~~~~

AI Infra Summit

  • Where & when: San Francisco; Nov. 7, 2025
  • Who it’s for: Anyone interested in the convergence of AI innovation and scalable infrastructure. This event is being hosted by Ignite, a go-to-market provider for the technology industry.
  • Who will be there: The speaker lineup is still TBA, but is promised to include enterprise technology leaders, AI and machine learning engineers, cloud and data center architects, venture capital investors, and infrastructure vendors.
  • Fun facts:
    • This is a hybrid event. You can attend either live or online.
    • AMD and Supermicro are Stadium-level sponsors.

~~~~~~~~~~~~~~~~~~~~

SC25

  • Where & when: St. Louis, Missouri; Nov. 16-21, 2025
  • Who it’s for: The global supercomputing community, including those working in high performance computing (HPC), networking, storage and analysis. This year’s theme: “HPC ignites.”
  • Who will be there: Speakers will feature nearly a dozen AMD executives, including Rob Curtis, a Fellow in Data Center Platform Engineering; Shelby Lockhart, a software system engineer; and Nuwan Jayasena, a Fellow in AMD Research. They and other speakers will appear in panels, presentations of papers, workshops, tutorials and more.
     
  • Fun facts: SC25 will feature a series of noncommercial “Birds of a Feather” sessions that allow attendees to openly discuss topics of mutual interest.

 


Research Roundup: Cloud infrastructure spending, AI PoCs, preemptive security, AI worries


Get the latest insights from leading IT researchers, industry analysts and market watchers.


Global spending on cloud infrastructure services rose in the latest quarter by over 20%. Only about one in four AI tests in the Asia/Pacific are moving on to full production. Cybersecurity is about to become preemptive. And the rise of AI has many U.S. adults concerned.

That’s the latest from leading IT researchers, market watchers and pollsters. And here’s your research roundup.

Cloud Infrastructure Spending: Up, Up and Away

Global spending on cloud infrastructure services rose 22% year-on-year in this year’s second quarter (April, May and June), reaching a total of $95.3 billion, according to market watcher Canalys. This marks the sector’s fourth consecutive quarter of year-on-year growth topping 20%.

All that demand was driven by three main forces, Canalys says: AI consumption, revived legacy migrations, and cloud-native scale-ups.

Also during Q2, the Big Three cloud providers—Amazon Web Services, Google Cloud and Microsoft Azure—held their collective 65% share of the market. What’s more, customer spending with the Big Three increased in the quarter by 27% year-on-year, Canalys says.

Customer demand for AI is shifting, too. “An increasing number of enterprises are seeking the capability to switch between different AI models based on specific business requirements,” says Canalys senior analyst Yi Zhang. Their goal: an optimal balance of performance, cost and application fit.

AI PoC to Production? Not Many Yet

Organizations in the Asia/Pacific region are experimenting with AI, but fewer than one in four of their AI applications (23%) have moved from proof-of-concept (PoC) to production, finds industry analyst IDC.

One result, says IDC researcher Abhishek Kumar: “Many Asian businesses are reassessing how to launch and scale AI.”

Part of this reassessment involves a shift to new AI approaches based on end-to-end platforms. However, moving to these approaches won’t be easy, Kumar says. Organizations need to understand not only each vendor’s approach, but also how the proposed systems align with their own organization’s requirements.

IDC recommends that organizations start thinking of their AI suppliers as partners, not just providers. Though we’ve heard that before, this time it’s different: AI is likely to dramatically reshape entire workflows.

Cybersecurity’s Future: Preemptive

Detection and response are currently the main cybersecurity techniques, but that’s about to change, predicts Gartner. The research firm believes that by 2030, over half of all cybersecurity spending worldwide will instead go to technologies that are preemptive.

Preemptive cybersecurity will soon be “the new gold standard,” asserts Gartner VP Carl Manion.

Why the shift? Because detection/response-based cybersecurity will no longer be enough to keep assets safe from AI-enabled attackers, Manion says.

As part of this shift, organizations will move away from one-size-fits-all security solutions, instead adopting approaches that are more targeted. These could include security systems for specific verticals, such as healthcare and finance; specific application types, such as industrial control systems; and specific threat actor methods, such as supply-chain attacks.

Preemptive cybersec could also include what are known as autonomous cyber immune systems (ACIS). Like a biological immune system, an ACIS will be able to both detect attacks and fight them off.

Resistance to this shift will be futile, Manion says. Organizations that stick with older detection and response security systems will be exposing their products, services and customers to what he calls “a new, rapidly escalating level of danger.”

AI has U.S. Adults Fretting

The rise of artificial intelligence has U.S. adults concerned, finds a new poll by Pew Research. A majority of respondents say they believe the rising use of AI will worsen people’s ability to think creatively, form meaningful relationships, make difficult decisions and solve problems.

The poll, conducted by Pew in June, reached over 5,000 adults who live in the United States. Pew released the poll results earlier this month.

Overall, more than half the survey respondents (57%) rated the societal risks of AI as high. Only one in four (25%) said the benefits of AI are high.

Other findings include:

  • Creative thinking: In the poll, more than half the respondents (53%) said increased use of AI will worsen people’s ability to think creatively. Only 16% thought increased use of AI would improve this ability. Another 16% said it would be neither better nor worse, and a final 16% weren’t sure.
  • Relationships: Exactly half the respondents (50%) believe increased use of AI will worsen people’s ability to form meaningful relationships. Only 5% believe wider AI use would improve this ability. A quarter (25%) thought there would be no change, while one in five (20%) weren’t sure.
  • Decisions: Two in five respondents (40%) believe increased use of AI will worsen our ability to make difficult decisions. Fewer than one in five (19%) expect AI to improve this ability. About the same number (20%) foresee no change, and the same percentage said they weren’t sure.
  • Problem-solving: This was a closer contest. Over a third of respondents (38%) said wider use of AI will worsen our ability to solve problems, while more than a quarter (29%) said it would improve this ability. Fifteen percent expect no change, and 17% weren’t sure.
  • Deepfakes: Over three-quarters of respondents (76%) said it’s important to be able to detect whether a picture, video or text was created by AI. But over half of all (53%) also said they’re not confident they can make these detections.

These concerns aside, the AI market still has plenty of room for growth. A recent forecast from Grand View Research has global AI sales rising from about $280 billion last year to nearly $3.5 trillion in 2033. That would represent an impressive 8-year compound annual growth rate (CAGR) of just over 30%.

 


Vultr, Supermicro, AMD team to offer high-performance cloud compute & AI infrastructure


Vultr, a global provider of cloud services, now offers Supermicro servers powered by AMD Instinct GPUs.

Learn More about this topic
  • Applications:
  • Featured Technologies:

Supermicro servers powered by the latest AMD Instinct GPUs and supported by the AMD ROCm open software ecosystem are at the heart of a global cloud infrastructure program offered by Vultr.

Vultr calls itself a modern hyperscaler, meaning it provides cloud solutions for organizations facing complex AI and HPC workloads, high operational costs, vendor lock-in, and the need for rapid insights.

Launched in 2014, Vultr today offers services from 32 data centers worldwide, which it says can reach 90% of the world’s population in under 40 milliseconds. Vultr’s services include cloud instances, dedicated servers, cloud GPUs, and managed services for database, cloud storage and networking.

Vultr’s customers enjoy benefits that include costs 30% to 50% lower than those of the hyperscalers and 20% to 30% lower than those of other independent cloud providers. These customers—there are over 220,000 of them worldwide—also enjoy Vultr’s full native AI stack of compute, storage and networking.

Vultr is the flagship product of The Constant Co., based in West Palm Beach, Fla. The company was founded by David Aninowsky, an entrepreneur who also started GameServers.com and served as its CEO for 18 years.

Now Vultr counts among its partners AMD, which joined the Vultr Cloud Alliance, a partner program, just a year ago. In addition, AMD’s venture group co-led a funding round this past December that brought Vultr $333 million.

Expanded Data Center

Vultr is now expanding its relationship with Supermicro, in part because that company is first to market with the latest AMD Instinct GPUs. Vultr now offers Supermicro systems powered by AMD Instinct MI355X, MI325X and MI300X GPUs. And as part of the partnership, Supermicro engineers work on-site with Vultr technicians.

Vultr is also relying on Supermicro for scaling. That’s a challenge for large AI implementations, as these configurations require deep expertise for both integration and operations.

Among Vultr’s offerings from Supermicro is a 4U liquid-cooled server (model AS -4126GS-NMR-LCC) with dual AMD EPYC 9005/9004 processors and up to eight AMD GPUs—the user’s choice of either MI325X or MI355X.

Another benefit of the new arrangement is access to AMD’s ROCm open source software environment, which will be made available within Vultr’s composable cloud infrastructure. This AMD-Vultr combo gives users access to thousands of open source, pre-trained AI models and frameworks.

Rockin’ with ROCm

AMD’s latest update to the software is ROCm 7, introduced in July and now live and ready to use. Version 7 offers advancements that include big performance gains, advanced features for scaling AI, and enterprise-ready AI tools.

One big benefit of AMD ROCm is that its open software ecosystem eliminates vendor lock-in. And when integrated with Vultr, ROCm supports AI frameworks that include PyTorch and TensorFlow, enabling flexible, rapid innovation. Further, ROCm future-proofs AI solutions by ensuring compatibility across hardware, promoting adaptability and scalability.

AMD’s roadmap is another attraction for Vultr. AMD products on tap for 2026 include the Instinct MI400 family (codename Helios), new EPYC CPUs (Venice) and an 800-Gbit NIC (Vulcano).

In turn, Vultr is big business for AMD. Late last year, a tech blog reported that Vultr’s first shipment of AMD Instinct MI300X GPUs numbered “in the thousands.”

Do More:

 


Retail in the Spotlight: Making Shelf Space for AI


Learn how retailers including Amazon, Sephora and Walmart are applying artificial intelligence to deliver real business benefits—and help their shoppers find just the right product.


Retailers are relying more and more on artificial intelligence. And the reason is simple: AI technology can help retailers engage customers, lower operational costs and increase revenue.

Indeed, over 70% of retailers anticipate a significant ROI from AI in the next year, according to accounting firm KPMG.

Their customers approve of AI, too. In a poll conducted earlier this year by vision AI provider Everseen, two out of three consumers said AI makes shopping more convenient.

That’s a true win-win scenario.

Customer-facing

On the retail customer side, AI provides helpful features such as support chatbots and personal shopping assistants. AI can also offer visual search, letting customers upload photos of products they like and find similar items in real time.

AI is also capable of creating personalized recommendations that go far beyond the typical “people who bought X also bought Y” message.

For example, the AI behind Amazon’s industry-leading recommendation engine takes into account a customer’s shopping habits all the way back to the time they first created an account. Then the engine combines that data with whatever demographic information it can dig up or infer. The result: Customers receive genuinely useful suggestions.

Amazon also has a retail-focused chatbot called Rufus that can answer online shoppers’ questions about products they haven’t bought yet, but are only considering. To do this work, the GenAI-powered shopping assistant has been trained on a potent mix of data that includes the entire Amazon catalog, customer reviews, community Q&As and information from the public web.

This lets consumers ask Rufus just about anything. For example, “Are these shoes good for narrow feet?” will get an answer. And so will “Can this sharpener create the 16-degree angle recommended by the maker of my fancy Japanese chef’s knife?”

If you’re looking for a bit more wow factor, consider the Sephora Virtual Artist. This AI-powered virtual try-on feature uses your smartphone’s augmented reality (AR) to show how you’d look with a particular shade of lipstick, eye shadow or other makeup.

Don’t care for one shade? Sephora’s AI will suggest a better one based on your skin tone. Then it will find your color in stock at a store near you, along with complementary foundation, blush and eye liner.

Behind the Scenes

Deploying AI helps retailers save time and money. That’s especially true for those with big warehouses and complex supply chains.

Both Walmart and Amazon employ small armies of AI-enabled robots to zip around their warehouses. These tireless heavy-lifters find what they’re looking for by scanning barcodes and QR codes. Once they locate a product, their robotic arms can grab it off even the highest shelf. Then the robots efficiently transport the products to shipping.

These AI-powered robots can also report to other parts of the system, many of which use AI as well. One example is an inventory-control AI module that forecasts demand and makes sure the warehouse stays well-stocked. Another is a bot designed to manage complex supply chains by calculating trends, market prices, availability and shipping times.

Increasingly, retailers rely on AI for marketing too. They use retail bots to keep an eye on customer sentiment and emerging trends by scraping online reviews and social media posts. This information can also help retailers deal with customer-service issues before they get out of hand. And AI systems provide vital market data that retailers can use as they plan and launch new product lines.

Retail Power

Retail AI software is hugely powerful, but the hardware matters too. Deprived of enough power to collect, analyze and act on terabytes of daily data, AI is just reams of pointless code.

So retailers rely on purpose-built retail AI hardware solutions. That includes the Supermicro AS -2115HE-FTNR server.

This retail AI-server is powered by 5th gen AMD EPYC processors and has room for up to 6TB of ECC DDR5 memory and four GPUs. Retailers can also configure the system with up to 6 hot-swappable drives and their choice of air or liquid cooling.

The improved density in Supermicro’s multi-node racks helps retail organizations achieve a lower total cost of ownership by reducing server counts and energy demands.

Retail’s Future

AI is becoming more sophisticated every day. Soon, powerful new features will catalyze a paradigm shift in retail operations.

As agentic AI evolves from a fascinating novelty into a daily mainstay, hyper-personalized, frictionless and predictive online shopping will become the norm. Retail stores will standardize on AI-enabled smart shelves that control inventory, display dynamic pricing and direct shoppers to related items.

Behind the scenes, AI will help retail organizations further cut waste and lower their carbon footprints by better managing inventory and supply chains.

How long will we have to wait for our new AI-powered shopping experience? At the rate things are moving these days, not long at all.


 


Looking for business benefits from GenAI? Supermicro, AMD & PioVation have your solution


Struggling to deliver business benefits from Generative AI? Supermicro, AMD and PioVation have a new solution that not only works out-of-the-box, but is also highly scalable.


Experimenting with Generative AI can be fun, but CEOs and corporate boards aren’t interested in fun. They want to see real business results—things like an enhanced customer experience, more innovative products, streamlined operations and lower TCO. And they want to see them now.

Getting GenAI to deliver these kinds of business results isn’t easy. A recent report from MIT finds that despite nearly $40 billion worth of enterprise investment in GenAI, 95% of organizations are getting “zero return.”

That estimate is based on solid numbers. The MIT researchers reviewed over 300 AI projects, interviewed representatives of more than 50 organizations, and surveyed some 150 senior leaders.

The latest forecasts aren’t much cheerier. Research firm Gartner this summer predicted that by the end of this year, nearly a third of all GenAI projects (30%) will be abandoned after the proof-of-concept stage. Gartner says the projects will be cut due to poor data quality, inadequate risk controls, escalating costs and unclear business value.

“After last year’s hype, executives are impatient to see returns on GenAI investments,” says Gartner analyst Rita Sallam. “Yet organizations are struggling to prove and realize value.”

That’s About to Change

Supermicro, AMD and startup PioVation have partnered to jointly develop a GenAI solution that offers a pre-validated, turnkey infrastructure for deploying large language models (LLMs). The benefits include lower deployment overhead, enhanced observability, and ensured control of sovereign data.

Partner PioVation is a developer of AI platforms for enterprises, government agencies, and small and midsize businesses. Its products can be run either on-premises or in PioVation’s cloud in Munich, Germany. The company, founded in 2024 by former AMD executive Mazda Sabony, has formed partnerships with several companies, including AMD and Supermicro.

The GenAI solution being offered by the three companies has been designed to scale all the way from compact on-prem clusters up to large-scale multi-tenant cloud environments. And its architecture integrates Supermicro rack-level systems, AMD Instinct GPUs, and PioVation’s agentic AI platform, PioSphere. The result, the companies say, is out-of-the box agentic AI at any scale.

Full Stack

The Supermicro-AMD-PioVation offering is a full-stack solution. An autonomous microservice chains LLM prompts, invokes domain-specific tools, and integrates seamlessly with your existing systems via REST (an architectural style for distributed hypermedia systems), gRPC (a remote procedure call framework), or event streams running on the pre-validated Supermicro server powered by AMD Instinct GPUs.

Another feature is the solution’s Model Context Protocol (MCP). It lets agents interact with external tools in a way that’s both modular and composable. The MCP also governs how tools are registered, discovered, invoked and composed dynamically at runtime. This includes input/output serialization, maintaining execution context, and enforcing consistency across tool chains. MCP also enables context-aware tool usage, making every agent interoperable, auditable and enterprise-ready from the start.
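The register/discover/invoke pattern described above can be sketched generically. PioVation has not published PioSphere’s interfaces, so every name below (`ToolRegistry`, `register`, `discover`, `invoke`) is a hypothetical illustration of the pattern, not PioVation’s API.

```python
import json
from typing import Callable, Dict

class ToolRegistry:
    """Toy registry illustrating tool registration, discovery and
    invocation with serialized I/O. All names here are hypothetical,
    not PioSphere's actual API."""

    def __init__(self):
        self._tools: Dict[str, Callable] = {}

    def register(self, name: str, fn: Callable) -> None:
        self._tools[name] = fn

    def discover(self) -> list:
        return sorted(self._tools)  # agents can list the available tools

    def invoke(self, name: str, payload: str) -> str:
        # Inputs and outputs are serialized as JSON strings, so every
        # tool call is loggable and auditable in a uniform format.
        args = json.loads(payload)
        result = self._tools[name](**args)
        return json.dumps({"tool": name, "result": result})

registry = ToolRegistry()
registry.register("add", lambda a, b: a + b)
print(registry.discover())                          # ['add']
print(registry.invoke("add", '{"a": 2, "b": 3}'))   # {"tool": "add", "result": 5}
```

Because every call flows through one serialized entry point, adding audit logging or policy checks means instrumenting a single method rather than every tool.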

The solution is available in three topologies, each designed for different operational scales and use cases:

  • MiniStack: For SMBs, pilots, research and the edge.
  • EdgeCluster: For regulated sites, branches and other locations where high availability is required.
  • Cloud Deployment: For cloud service providers (CSPs), enterprises and AI providers.

All three versions include a unified agent dashboard, role-based access control, and policy enforcement.

Business Benefits

The three partners haven’t forgotten about the need for GenAI to deliver real business results that can keep CEOs and corporate boards happy. To that end, the solution offers benefits that include:

  • Turnkey deployment: PioSphere’s Cloud OS has been prevalidated on the Supermicro platform powered by AMD GPUs.
  • Unified operations stack: A tightly integrated environment eliminates fragmented AI tooling.
  • No-code agent development: A PioVation feature known as AgentStudio lets nontechnical users design, deploy and iterate AI agents using a no-code interface.
  • Sovereign data control: Built-in controls support national and regional compliance frameworks, including Europe’s GDPR and the United States’ HIPAA.
  • Multi-tenant scalability: An organization can create separate, secure environments for different business units or clients, yet they’ll all share a common infrastructure footprint.
  • Integrated LLM operations and agent life-cycle management: Users can integrate any LLM published on the Hugging Face or Kaggle communities with one-click connectors. Other built-in features include RAG (retrieval augmented generation) pipelines and full agent life-cycle tools.
  • Intelligent autoscaling: During workload spikes, the solution’s dynamic autoscaling ensures resource utilization, cost efficiency and seamless performance.
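The RAG (retrieval augmented generation) pipelines mentioned above follow a simple recipe: retrieve the documents most relevant to a query, then prepend them to the prompt sent to the LLM. Here is a deliberately tiny sketch of that recipe; real pipelines use vector embeddings and an actual LLM call, both omitted here.

```python
def _toks(s: str) -> set:
    # Crude tokenizer: lowercase words with trailing punctuation stripped.
    return {w.strip("?.!,") for w in s.lower().split()}

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Rank documents by naive word overlap with the query (a stand-in
    for the embedding similarity a real RAG pipeline would use)."""
    q = _toks(query)
    return sorted(docs, key=lambda d: len(q & _toks(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list) -> str:
    # Retrieved passages are prepended so the LLM answers from them.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = ["ROCm is AMD's open GPU software stack.",
        "Supermicro builds rack-scale servers."]
print(build_prompt("What is ROCm?", docs))
```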

Put it all together, and you have a solution that goes far beyond mere experimentation. The three partners—Supermicro, AMD and PioVation—are serious about helping your GenAI projects deliver serious benefits for the business.


 


How Supermicro/AMD servers boost AI performance with MangoBoost


Supermicro and MangoBoost are together delivering an optimized end-to-end GenAI stack. It’s based on Supermicro servers powered by AMD Instinct GPUs and running MangoBoost’s LLMBoost software.


Many organizations are implementing AI for business. But many are also discovering that deploying and operating large language models (LLMs) at scale isn’t easy.

They’re finding that the hardware demands are intense. And so are the performance and cost trade-offs. Also, with AI workloads increasingly demanding multi-node GPU clusters, orchestration and tuning can be complex.

To address these challenges, Supermicro and MangoBoost Inc. are working together to deliver an optimized end-to-end GenAI stack. They’ve combined Supermicro’s robust AMD Instinct GPU server portfolio with MangoBoost’s LLMBoost software.

Meet MangoBoost

If you’re unfamiliar with MangoBoost, the company offers programmable solutions that improve data-center application performance while lowering CPU overhead. MangoBoost was founded three years ago; today it operates in the United States, Canada and South Korea.

MangoBoost’s core product is called the Data Processing Unit. It ensures full compatibility with general-purpose GPUs, accelerators and storage devices, enabling cost-efficient and standardized AI infrastructures.

MangoBoost also offers a ready-to-deploy, full-stack AI inference server. Known as Mango LLMBoost, it’s available from the Big Three cloud providers—AWS, Microsoft Azure and Google Cloud.

LLMBoost helps organizations accelerate both training and deploying LLMs at scale. Why is this so challenging? Because once a model is ready for inference, developers face what’s known as a “productization tax.”

Integrating the machine-learning processing pipeline into the rest of the application often requires additional time and engineering effort. And this can lead to delays.

Mango LLMBoost addresses these challenges by creating an easy-to-use container. This lets LLM experts optimize their models, then select suitable GPUs on demand.

MangoBoost’s inference engine uses three forms of GPU parallelism, allowing GPUs to balance their compute, memory and network-resource usage. In addition, the software’s intelligent job scheduling optimizes cluster-wide GPU resources, ensuring that the load is balanced equally across GPU nodes.
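The balancing idea described here, placing each incoming job on the node with the most free capacity, can be sketched in a few lines. This is a generic least-loaded heuristic for illustration only, not MangoBoost’s actual scheduler.

```python
import heapq

def schedule(jobs: list, nodes: list) -> dict:
    """Greedy least-loaded placement: each job goes to the node that
    currently holds the fewest jobs (a generic balancing heuristic)."""
    heap = [(0, n) for n in nodes]   # (job_count, node_name)
    heapq.heapify(heap)
    placement = {}
    for job in jobs:
        count, node = heapq.heappop(heap)
        placement[job] = node
        heapq.heappush(heap, (count + 1, node))
    return placement

jobs = [f"job{i}" for i in range(4)]
print(schedule(jobs, ["gpu-node-a", "gpu-node-b"]))  # 2 jobs land on each node
```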

LLMBoost also ensures the effective use of low-latency GPU caches and high-bandwidth memory through quantization. This reduces the data footprint without lowering accuracy.
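Quantization here means storing values at lower numeric precision so they occupy less memory and bandwidth. A minimal sketch of symmetric int8 quantization in plain Python, not any MangoBoost API:

```python
def quantize_int8(values: list) -> tuple:
    """Symmetric int8 quantization: map floats into [-127, 127] with a
    single scale factor, shrinking storage 4x versus 32-bit floats."""
    scale = max(abs(v) for v in values) / 127
    return [round(v / scale) for v in values], scale

def dequantize(q: list, scale: float) -> list:
    return [x * scale for x in q]

weights = [0.02, -1.27, 0.64, 0.001]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# restored is close to the original weights; the small rounding error
# is the accuracy cost that careful quantization schemes keep negligible.
```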

Complementing Hardware

MangoBoost’s LLMBoost software complements the powerful hardware with a full-stack, production-ready AI MLOps platform. It includes:

  • Plug-and-play deployment: Pre-built Docker images and an intuitive command-line interface (CLI) both help developers to launch LLM workloads quickly.
  • OpenAI-compatible API: Lets developers integrate LLM endpoints with minimal code changes.
  • Kubernetes-native orchestration: Provides automated deployment and management of autoscaling, load balancing and job scheduling for seamless operation across both single- and multi-node clusters.
  • Full-stack performance auto-tuning: Unlike conventional auto-tuners that handle model hyper-parameters only, LLMBoost optimizes every layer from the inference and training back-ends to network configurations and GPU runtime parameters. This ensures maximum hardware utilization, yet without requiring any manual tuning.
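“OpenAI-compatible” means the endpoint accepts the same request shape as OpenAI’s chat-completions API, so existing client code only needs a new base URL. The sketch below shows the request body an application would POST; the host and model name are illustrative placeholders, not documented LLMBoost values.

```python
import json

# Request body in the OpenAI chat-completions format. An OpenAI-compatible
# server accepts this same shape; only the base URL changes.
base_url = "http://llmboost.example.com/v1"   # placeholder endpoint
payload = {
    "model": "llama-3-70b",                   # placeholder model id
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize our Q3 sales data."},
    ],
    "temperature": 0.2,
}
body = json.dumps(payload)
print(f"POST {base_url}/chat/completions")
print(body)
```

Because the shape is unchanged, standard OpenAI client libraries can be pointed at such a server simply by overriding their base URL setting.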

Proof of Performance

A Supermicro-MangoBoost collaboration on an optimized end-to-end Generative AI stack sounds good on paper. But how does the combined solution actually perform?

To find out, Supermicro, AMD and MangoBoost recently tested their combined solution using real-world GenAI workloads. Here are the results:

  • LLMBoost reduced training time by 40% for two-node training, down to 13.3 minutes on a dual-node AMD Instinct MI325X configuration. The training run used Llama 2 70B, an LLM with 70 billion parameters, fine-tuned with LoRA (low-rank adaptation).
  • LLMBoost achieved 1.96X higher throughput for multi-node inference on Supermicro AMD servers, reaching over 61,000 tokens/sec. on a dual-node AMD Instinct MI325X configuration.
  • In-house LLM inference with Llama 4 Maverick and Scout models achieved near-linear scaling on AMD Instinct MI325X nodes. (Maverick is designed for fast responses at low cost; Scout, for long-document analysis.) This shows that Supermicro systems are ready for real-time GenAI deployment.
  • Load balancing: The researchers ran LLaVA, an image-captioning model, on three setups. The heterogeneous dual-node configuration of eight AMD Instinct MI300X GPUs and eight AMD Instinct MI325X GPUs achieved 96% of the summed throughput of the individual single-node runs, demonstrating minimal overhead and high efficiency.
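The load-balancing figure is a simple ratio: measured combined throughput divided by the sum of each node’s standalone throughput. The numbers below are illustrative only (the article does not publish the raw runs); they just show how a 96%-style efficiency falls out of the arithmetic.

```python
def scaling_efficiency(combined: float, singles: list) -> float:
    """Cluster efficiency: measured combined throughput as a fraction
    of the ideal, i.e. the sum of standalone single-node throughputs."""
    return combined / sum(singles)

# Hypothetical example: two nodes that alone do 30k and 20k tokens/sec,
# while the combined dual-node run reaches 48k tokens/sec.
eff = scaling_efficiency(48_000, [30_000, 20_000])
print(f"{eff:.0%}")  # prints 96%
```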

Are your customers looking for a turnkey GenAI cluster solution that’s high-performance, flexible and easy to operate? Then tell them that Supermicro, AMD and MangoBoost have their solution—and the proof that it works.


 

