Tech Explainer: What’s liquid cooling? And why might your data center need it now?

Liquid cooling offers big efficiency gains over traditional air. And while there are upfront costs, for data centers with high-performance AI and HPC servers, the savings can be substantial. Learn how it works.


Increasingly resource-intensive AI workloads are creating more demand for advanced data center cooling systems. Today, the most efficient and cost-effective method is liquid cooling.

A liquid-cooled PC or server relies on a liquid rather than air to remove heat from vital components that include CPUs, GPUs and AI accelerators. The heat produced by these components is transferred to a liquid. Then the liquid carries away the heat to where it can be safely dissipated.

Most computers don’t require liquid cooling. That’s because general-use consumer and business machines don’t generate enough heat to justify liquid cooling’s higher upfront costs and additional maintenance.

However, high-performance systems designed for tasks such as gaming, scientific research and AI can often operate better, longer and more efficiently when equipped with liquid cooling.

How Liquid Cooling Works

For the actual coolant, most liquid systems use either water or dielectric fluids. Before water is added to a liquid cooler, it’s demineralized to prevent corrosion and build-up. And to prevent freezing and bacterial growth, the water may also be mixed with a combination of glycol, corrosion inhibitors and biocides.

Thus treated, the coolant is pushed through the system by an electric pump. A single liquid-cooled PC or server will need to include its own pump. But for enterprise data center racks containing multiple servers, the liquid is pumped by what’s known as an in-rack cooling distribution unit (CDU). Then the liquid is distributed to each server via a coolant distribution manifold (CDM).

As the liquid flows through the system, it’s channeled into cold plates mounted atop the system’s CPUs, GPUs, memory DIMMs, PCIe switches and other heat-producing components. Each cold plate has microchannels through which the liquid flows, absorbing and carrying away each component’s thermal energy.

The next step is to safely dissipate the collected heat. To accomplish this, the liquid is pumped back through the CDU, which sends the now-hot liquid to a mechanism that removes the heat. This is typically done using chillers, cooling towers or heat exchangers.

Finally, the cooled liquid is sent back to the system’s heat-producing components to begin the process again.
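To put rough numbers on what the CDU is doing, here’s a minimal back-of-the-envelope sketch in Python. It applies the standard energy balance Q = m-dot x cp x delta-T; the rack power, the temperature rise and the use of plain water are illustrative assumptions, not the specs of any particular system.

# Rough sizing sketch: how much coolant flow does a rack need?
# Assumptions (illustrative only): 100 kW rack heat load, water coolant,
# and a 10 C temperature rise from cold-plate inlet to outlet.

heat_load_w = 100_000   # rack heat load, watts (assumed)
cp_water = 4186         # specific heat of water, J/(kg*K)
delta_t = 10            # coolant temperature rise, kelvin (assumed)

# Energy balance: Q = m_dot * cp * delta_T  =>  m_dot = Q / (cp * delta_T)
mass_flow_kg_s = heat_load_w / (cp_water * delta_t)
volume_flow_lpm = mass_flow_kg_s * 60   # ~1 kg of water per liter

print(f"Required flow: {mass_flow_kg_s:.2f} kg/s (~{volume_flow_lpm:.0f} L/min)")
# -> Required flow: 2.39 kg/s (~143 L/min)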

 

Liquid Pros & Cons

The most compelling aspect of liquid cooling is its efficiency. Water moves heat up to 25 times better than air while using less energy to do it. Compared with traditional air cooling, liquid cooling can reduce cooling energy costs by up to 40%.

But there’s more to the efficiency of liquid cooling than just cutting costs. Liquid cooling also enables IT managers to move servers closer together, packing in more power and storage per square foot. Given the high cost of data center real estate, and the fullness of many data centers, that’s an important benefit.

In addition, liquid cooling can better handle the latest high-powered processing components. For instance, Supermicro says its DLC-2 next-generation Direct Liquid-Cooling solutions, introduced in May, can accommodate warmer liquid inflow temperatures while also improving AI performance per watt.

But liquid cooling systems have their downsides, too. For one, higher upfront costs can present a barrier for entry. Sure, data center operators will realize a lower total cost of ownership (TCO) over the long run. But when deploying a liquid-cooled data center, they must still contend with initial capital expense (CapEx) outlays—and justifying those costs to the CFO.

For another, IT managers might think twice about the additional complexity and risks of a liquid cooling solution. More components and variables mean more things that can go wrong. Data center insurance premiums may rise too, since a liquid cooling system can always spring a leak.

Driving Demand: AI

All that said, the market for liquid cooling systems is primed for serious growth.

As AI workloads become increasingly resource-intensive, IT managers are deploying more powerful servers to keep up with demand. These high-performance machines produce more heat than previous generations. And that creates increased demand for efficient, cost-effective cooling solutions.

How much demand? This year, the data center liquid cooling market is projected to generate global sales of $2.84 billion, according to MarketsandMarkets.

Looking ahead, the industry watcher expects the global liquid cooling market to reach $21.14 billion by 2032. That rise would represent a compound annual growth rate (CAGR) of 33% over the forecast period.
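Those two figures are consistent with the stated growth rate. A quick arithmetic check in Python:

# What CAGR turns $2.84B (2025) into $21.14B (2032)?
# CAGR = (end / start) ** (1 / years) - 1
start, end, years = 2.84, 21.14, 7   # $ billions, 2025 -> 2032
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # -> Implied CAGR: 33.2%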

Coming Soon: Immersive Cooling

In the near future, AI workloads will likely become even more demanding. This means data centers will need to deploy—and cool—ultra-dense AI server clusters that produce tremendous amounts of heat.

To deal with this extra heat, IT managers may need the next step in data center cooling: immersion.

With immersion cooling, an entire rack of servers is submerged horizontally in a tank filled with what’s known as dielectric fluid, a non-conductive liquid that lets the servers’ hardware operate while submerged without short-circuiting.

Immersion cooling is being developed along two paths. The most common variety is called single-phase, and it operates similarly to an aquarium’s water filter. As pumps circulate the dielectric fluid around the servers, the fluid is heated by the server’s components. Then it’s cooled by an external heat exchanger.

The other type of immersion cooling is known as two-phase. Here, the tank is filled with a dielectric fluid engineered to have a relatively low boiling point, around 50 C / 122 F. As this fluid is heated by the immersed servers, it boils, creating a vapor that rises to condensers installed at the top of the tank. There the vapor condenses back into a cooler liquid, which drips back down into the tank.
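A rough calculation shows why boiling is such an effective heat-removal mechanism. The fluid properties below are ballpark figures for a typical two-phase dielectric coolant, assumed here purely for illustration:

# Sensible heating vs. vaporization, per kilogram of coolant.
cp_fluid = 1100          # specific heat, J/(kg*K) (assumed)
delta_t = 10             # temperature rise in a single-phase loop, K
latent_heat = 100_000    # heat of vaporization, J/kg (assumed)

sensible_j = cp_fluid * delta_t     # heat absorbed without boiling
ratio = latent_heat / sensible_j

print(f"Boiling absorbs ~{ratio:.0f}x the heat of a 10 K temperature rise")
# -> ~9x per kilogram of fluid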

This natural convection means there’s no need for electric pumps. It’s a glimpse of a smarter, more efficient liquid future, coming soon to a data center near you.


4 IT events this fall you won’t want to miss


Important IT industry events are coming in October and November, with lots of participation from AMD and Supermicro.


Summer’s over…somehow it’s already October…and that means it’s time to attend important IT industry conferences, summits and other get-togethers.

Here’s your Performance Intensive Computing preview of four top events coming this month and next.

OCP Global Summit

  • Where & when: San Jose, California; Oct. 13-16, 2025
  • Who it’s for: This event, sponsored by the Open Compute Project (OCP), is for anyone interested in redesigning open source hardware to support the changing demands on compute infrastructure. This year’s theme: “Leading the future of AI.”
  • Who will be there: Speakers this year include Vik Malyala, senior VP of technology and AI at Supermicro; Mark Papermaster, CTO of AMD; Johnson Eung, staff growth product manager in AI at Supermicro; Shane Corban, senior director of technical product management at AMD; and Morris Ruan, director of product management at Supermicro.
     
  • Fun facts: AMD is a Diamond sponsor, and Supermicro is an Emerald sponsor.

~~~~~~~~~~~~~~~~~~~~

AMD AI Developer Day

  • Where & when: San Francisco, Oct. 20, 2025
  • Who it’s for: Developers of artificial intelligence applications and systems. Workshop topics will include developing multi-model, multi-agent systems; generating videos using open source tools; and developing optimized kernels.
  • Who will be there: Speakers will include executives from the University of California, Berkeley; Red Hat AI; Google DeepMind; and OpenAI. Also speaking will be execs from Ollama, an open source platform for AI models; Unsloth AI, an open source AI startup; vLLM, a library for large language model (LLM) inference and serving; and SGLang, an LLM framework.
  • Fun facts:
    • Supermicro is a conference sponsor.
    • During the conference, winners of the AMD Developer Challenge will be announced. The grand prize winner will take home $100,000.
    • AMD, PyTorch and Unsloth AI are co-sponsoring a virtual hackathon, the Synthetic Data AI Agents Challenge, on Oct. 18-20. The first-prize winners will receive $3,000 plus 1,200 hours of GPU credits.

~~~~~~~~~~~~~~~~~~~~

AI Infra Summit

  • Where & when: San Francisco; Nov. 7, 2025
  • Who it’s for: Anyone interested in the convergence of AI innovation and scalable infrastructure. This event is being hosted by Ignite, a go-to-market provider for the technology industry.
  • Who will be there: The speaker lineup is still to be announced, but organizers say it will include enterprise technology leaders, AI and machine learning engineers, cloud and data center architects, venture capital investors, and infrastructure vendors.
  • Fun facts:
    • This is a hybrid event. You can attend either live or online.
    • AMD and Supermicro are Stadium-level sponsors.

~~~~~~~~~~~~~~~~~~~~

SC25

  • Where & when: St. Louis, Missouri; Nov. 16-21, 2025
  • Who it’s for: The global supercomputing community, including those working in high performance computing (HPC), networking, storage and analysis. This year’s theme: “HPC ignites.”
  • Who will be there: Speakers include nearly a dozen AMD executives, among them Rob Curtis, a Fellow in Data Center Platform Engineering; Shelby Lockhart, a software system engineer; and Nuwan Jayasena, a Fellow in AMD Research. They and other speakers will appear in panels, paper presentations, workshops, tutorials and more.
     
  • Fun facts: SC25 will feature a series of noncommercial “Birds of a Feather” sessions that allow attendees to openly discuss topics of mutual interest.

 


Tech Explainer: What’s a NIC? And how can it empower AI?


With the acceleration of AI, the network interface card is playing a new, leading role.


The humble network interface card (NIC) is getting a status boost from AI.

At a fundamental level, the NIC enables one computing device to communicate with others across a network. That network could be a rendering farm run by a small multimedia production house, an enterprise-level data center, or a global network like the internet.

From smartphones to supercomputers, most modern devices use a NIC for this purpose. On laptops, phones and other mobile devices, the NIC typically connects via a wireless antenna. For servers in enterprise data centers, it’s more common to connect the hardware infrastructure with Ethernet cables.

Each NIC—or NIC port, in the case of an enterprise NIC—has its own media access control (MAC) address. This unique identifier lets the NIC send and receive the packets addressed to it. Each packet, in turn, is a small chunk of a much larger data set, broken up so it can travel across the network at high speed.

Networking for the Enterprise

At the enterprise level, everything needs to be highly capable and powerful, and the NIC is no exception. Organizations operating full-scale data centers rely on NICs to do far more than just send emails and sniff packets (the term used to describe how a NIC “watches” a data stream, collecting only the data addressed to its MAC address).

Today’s NICs are also designed to handle complex networking tasks onboard, relieving the host CPU so it can work more efficiently. This process, known as smart offloading, relies on several functions:

  • TCP segmentation offloading: This breaks large chunks of outgoing data into packet-sized segments, sparing the CPU that work.
  • Checksum offloading: Here, the NIC independently checks the data for transmission errors (the sketch after this list shows the math involved).
  • Receive side scaling: This helps balance network traffic across multiple processor cores, preventing any one of them from getting bogged down.
  • Remote Direct Memory Access (RDMA): This process bypasses the CPU and writes data directly into remote memory, including GPU memory.
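To make one of these offloads concrete, here’s a minimal Python sketch of the 16-bit ones’-complement Internet checksum (RFC 1071) used by IP, TCP and UDP. On a modern NIC this math runs in hardware; the sketch just shows what’s being computed:

def internet_checksum(data: bytes) -> int:
    # Pad odd-length data with a zero byte, per RFC 1071.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # sum 16-bit words
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF                         # ones' complement

print(hex(internet_checksum(b"example payload")))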

Important as these capabilities are, they become even more vital when dealing with AI and machine learning (ML) workloads. By taking pressure off the CPU, modern NICs enable the rest of the system to focus on running these advanced applications and processing their scads of data.

This symbiotic relationship also helps lower a server’s operating temperature and reduce its power usage. The NIC does this by increasing efficiency throughout the system, especially when it comes to the CPU.

Enter the AI NIC

Countless organizations both big and small are clamoring to stake their claims in the AI era. Some are creating entirely new AI and ML applications; others are using the latest AI tools to develop new products that better serve their customers.

Either way, these organizations must deal with the challenges now facing traditional Ethernet networks in AI clusters. Remember, Ethernet was invented over 50 years ago.

AMD has a solution: a revolutionary NIC it has created for AI workloads, the AMD AI NIC card. Recently released, this NIC card is designed to provide the intense communication capabilities demanded by AI and ML models. That includes tightly coupled parallel processing, rapid data transfers and low-latency communications.

AMD says its AI NIC offers a significant advancement in addressing the issues IT managers face as they attempt to reconcile the broad compatibility of an aging network technology with modern AI workloads. It’s a specialized network accelerator explicitly designed to optimize data transfer within back-end AI networks for GPU-to-GPU communication.

To address the challenges of AI workloads, what’s needed is a network that can support distributed computing over multiple GPU nodes with low jitter and RDMA. The AMD AI NIC is designed to manage the unique communication patterns of AI workloads and offer high throughput across all available links. It also offers congestion avoidance, reduced tail latency, scalable performance, and fast job-completion times.

Validated NIC

Following rigorous validation by the engineers at Supermicro, the AMD AI NIC is now supported on the Supermicro 8U GPU Server (AS-8126GS-TNMR). This behemoth is designed specifically for AI, deep learning, high-performance computing (HPC), industrial automation, retail and climate modeling.

In this configuration, AMD’s smart AI-focused NIC can offload networking tasks. This lets the Supermicro SuperServer’s dual AMD EPYC 9000-series processors run at even higher efficiency.

In the Supermicro server, the new AMD AI NIC occupies one of the myriad PCI Express x16 slots. Other optional high-performance PCIe cards include a CPU-to-GPU interconnect and up to eight AMD Instinct GPU accelerators.

In the NIC of time

A chain is only as strong as its weakest link. The chain that connects our ever-expanding global network of AI operations is strengthened by the advent of NICs focused on AI.

As NICs grow more powerful, these advanced network interface cards will help fuel the expansion of the AI/ML applications that power our homes, offices, and everything in between. They’ll also help us bypass communication bottlenecks and speed time to market.

For SMBs and enterprises alike, that’s good news indeed.


Oil & gas spotlight: Fueling up with AI


AI is helping industry players that include BP, Chevron and Shell automate a wide range of important use cases. To serve them, AMD and Supermicro offer powerful accelerators and servers.


What’s artificial intelligence good for? For managers in the oil and gas industry, quite a lot.

Industry players that include Shell, BP, ExxonMobil and Chevron are already using machine learning and AI. Use cases include predictive maintenance, seismic data analysis, reservoir management and safety monitoring, says a recent report by Chirag Bharadwaj of consultants Appinventiv.

AI’s potential benefits for oil and gas companies are substantial. Anurag Jain of AI consultants Oyelabs cites estimates of AI lowering oil production costs by up to $5 a barrel with a 25% productivity gain, and increasing oil reserves by as much as 20% with enhanced resource recovery.

Along the same lines is a recent report from market watcher Global Growth Insights. It says adoption of AI in North American oil shale drilling has increased production efficiency by an impressive 20%.

All this has led Jain of Oyelabs to expect a big increase in the oil and gas industry’s AI spend. He predicts the industry’s worldwide spending on AI will rise from $3 billion last year to nearly $5.3 billion in 2028.

Assuming Jain is right, that would put the oil and gas industry’s AI spend at about 15% of its total IT spend. Last year, the industry spent nearly $20 billion on all IT goods and services worldwide, says Global Growth Insights.

Powerful Solutions

All this AI activity in the oil and gas industry hasn’t escaped the notice of AMD and Supermicro. They’re on the case.

AMD is offering the industry its AMD Instinct MI300A, an accelerator that combines CPU cores and GPUs to fuel the convergence of high-performance computing (HPC) with AI. And Supermicro is offering rackmount servers driven by this AMD accelerator.

Here are some of the benefits the two companies are offering oil and gas companies:

  • An APU multi-chip architecture that combines CPU and GPU chiplets with high-bandwidth memory in a single package, enabling dense compute.
  • Up to 2.6x the HPC performance/watt vs. the older AMD Instinct MI250X.
  • Up to 5.1x the AI-training workload performance with INT8 vs. the AMD Instinct MI250X. (INT8 is a fixed-point representation using 8 bits; the quantization sketch after this list shows the basic idea.)
  • Up to 128GB of unified HBM3 memory dedicated to GPUs. (HBM3 is a high-bandwidth memory chip technology that offers increased bandwidth, memory capacity and power efficiency, all in a smaller form factor.)
  • Double-precision power up to 122.6 TFLOPS with FP64 matrix HPC performance. (FP64 is a double-precision floating point format using 64 bits in memory.)
  • Complete, pre-validated solutions that are ready for rack-scale deployment on day one. These offer the choice of either 2U (liquid cooled) or 4U (air cooled) form factors.
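To illustrate the INT8 point above, here’s a minimal symmetric-quantization sketch in Python. It shows the basic idea of trading a little precision for much less memory traffic; it is not AMD’s implementation, and the numbers are arbitrary:

import numpy as np

weights = np.array([0.42, -1.37, 0.08, 2.91, -0.66], dtype=np.float32)

scale = np.abs(weights).max() / 127            # map the largest value to 127
q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale     # approximate reconstruction

print(q)            # 8-bit integers: [ 18 -60   3 127 -29]
print(dequantized)  # close to, but not exactly, the original floats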
     

If you have customers in oil and gas looking to get into AI, tell them about these Supermicro and AMD solutions.


What you need to know about high-performance storage for media & entertainment


To store, process and share their terabytes of data, media and entertainment content creators need more than your usual storage.


Maintaining fast, efficient and reliable data storage in the age of modern media and entertainment is an increasingly difficult challenge.

Content creators ranging from independent filmmakers to major studios like Netflix and Amazon are churning out enormous amounts of TV shows, movies, video games, and augmented and virtual reality (AR/VR) experiences. Each piece of content must be stored in a way that ensures it’s easy to access, ready to share and fast enough to stream.

This becomes a monumental task when you’re dealing with petabytes of high-resolution footage and graphics. Operating at that scale can overwhelm even the most seasoned professionals.

Those pros must also ensure they have both primary and secondary storage. Primary storage is designed to deliver rapid data retrieval speeds. Secondary storage, on the other hand, provides slower access times and is used for long-term storage.

Seemingly Insurmountable Odds

For media and entertainment production companies, the goal is always the same: speed production and cut costs. That’s why fast, efficient and reliable data storage solutions have become a vital necessity for those who want to survive and thrive in the modern age of media and entertainment.

The amount of data created in a single media project can be staggering.

Each new project uses one or more cameras producing footage with a resolution as high as 8K. And content captured at 8K has 16 times more pixels per frame than traditional HD video. That translates to around 1 terabyte of data for every 1.5 to 2 hours of footage.

For large-scale productions, shooting can continue for weeks, even months. At roughly a terabyte for every 2 hours of shooting, that footage quickly adds up, creating a major data-storage headache.
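The arithmetic behind those figures checks out. In the Python sketch below, the pixel counts are exact; the recording bitrate is an assumed compressed acquisition rate chosen for illustration (raw 8K is far larger):

hd_pixels = 1920 * 1080          # ~2.1 million pixels per frame
uhd8k_pixels = 7680 * 4320       # ~33.2 million pixels per frame
print(uhd8k_pixels / hd_pixels)  # -> 16.0, matching the 16x claim

bitrate_mbps = 1200              # assumed 8K recording bitrate, Mbit/s
hours = 1.85
terabytes = bitrate_mbps / 8 * 3600 * hours / 1e6   # MB/s -> TB
print(f"{terabytes:.2f} TB")     # -> ~1 TB per ~1.85 hours of footage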

But wait, there’s more: Your customer’s projects may also include both AR and VR data. High-quality AR/VR can contain hundreds of effects, textures and 3D models, producing data that measures not just in terabytes but petabytes.

Further complicating matters, AR/VR data often requires real-time processing, low-latency transfer and multiuser access.

Deploying AI adds yet another dimension. Generative AI (GenAI) now has the ability to create stunning additions to any multimedia project. These may include animated backgrounds, special effects and even virtual actors.

However, AI accounts for some of the most resource-intensive workloads in the world. To meet these stringent demands, not just any storage solution will do.

Extreme Performance Required

For media and entertainment content creators, choosing the right storage solution can be a make-or-break decision. Production companies generating data at the highest rates need something like the Supermicro H13 Petascale storage server.

The H13 Petascale storage server boasts extreme performance for data-intensive applications. For major content producers, that means high-resolution media editing, AR and VR creation, special effects and the like.

The H13 Petascale storage server is also designed to handle some of the tech industry’s most demanding workloads. These include AI and machine learning (ML) applications, geophysical modeling and big data.

Supermicro’s H13 Petascale storage server delivers up to 480 terabytes of high-performance storage via 16 hot-swap all-flash drives. The system is built around NVMe drives in the Enterprise and Datacenter Standard Form Factor (EDSFF) E3 form factor, which provides high-capacity scaling. The 2U Petascale version doubles the storage bays and capacity.

Operating on the EDSFF standard also offers better performance with PCIe 5 connectivity and improved thermal efficiency.

Under the hood of this storage beast is a 4th generation AMD EPYC processor with up to 128 cores and 6TB of DDR5 memory. Combined with 128 lanes of PCIe 5 bandwidth, H13 delivers more than 200GB/sec. of bandwidth and more than 25 million input/output operations per second (IOPS).
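A quick sanity check on that bandwidth claim, using approximate PCIe 5.0 figures (about 32 GT/s per lane, or roughly 4 GB/s of usable throughput after encoding overhead):

lanes = 128
gb_per_lane = 3.94                       # ~usable GB/s per PCIe 5.0 lane
print(f"~{lanes * gb_per_lane:.0f} GB/s of total PCIe bandwidth")
# -> ~504 GB/s: ample headroom for the quoted 200+ GB/s of storage
#    throughput, which is bounded by the 16 drives, not the PCIe fabric.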

Data’s Golden Age

Storing, sending and streaming massive amounts of data will continue to be a challenge for the media and entertainment industry.

Emerging formats will push the boundaries of resolution. New computer-aided graphics systems will become the industry standard. And consumers will continue to demand fully immersive AR and VR experiences.

Each of these evolutions will produce more and more data, forcing content creators to search for faster and more cost-effective storage methods.

Note: The media and entertainment industry will be the focus of a special session at the upcoming Supermicro Open Storage Summit ‘24, streaming live from Aug. 13 to Aug. 29. The M&E session, scheduled for Aug. 14 at 10 a.m. PDT / 1 p.m. EDT, will focus on AI and the future of media storage workflows. The speakers will represent Supermicro, AMD, Quantum and Western Digital. Learn more and register now to attend the 2024 Supermicro Open Storage Summit.


Research Roundup: AI edition


Catch up on the latest research and analysis around artificial intelligence.


Generative AI is the No. 1 AI solution being deployed. Three in 4 knowledge workers are already using AI. The supply of workers with AI skills can’t meet the demand. And supply chains can be helped by AI, too.

Here’s your roundup of the latest in AI research and analysis.

GenAI is No. 1

Generative AI isn’t just a good idea, it’s now the No. 1 type of AI solution being deployed.

In a survey recently conducted by research and analysis firm Gartner, more than a quarter of respondents (29%) said they’ve deployed and are now using GenAI.

That was a higher percentage than any other type of AI in the survey, including natural language processing, machine learning and rule-based systems.

The most common way of using GenAI, the survey found, is embedding it in existing applications, such as Microsoft Copilot for Microsoft 365. This approach was cited by about 1 in 3 respondents (34%).

Other approaches mentioned by respondents included prompt engineering (cited by 25%), fine-tuning (21%) and using standalone tools such as ChatGPT (19%).

Yet respondents said only about half of their AI projects (48%) make it into production. Even when that happens, it’s slow. Moving an AI project from prototype to production took respondents an average of 8 months.

Other challenges loom, too. Nearly half the respondents (49%) said it’s difficult to estimate and demonstrate an AI project’s value. They also cited a lack of talent and skills (42%), lack of confidence in AI technology (40%) and lack of data (39%).

Gartner conducted the survey in last year’s fourth quarter and released the results earlier this month. In all, it gathered valid responses from 644 executives working for organizations in the United States, the UK and Germany.

AI ‘gets real’ at work

Three in 4 knowledge workers (75%) now use AI at work, according to the 2024 Work Trend Index, a joint project of Microsoft and LinkedIn.

Among these users, nearly 8 in 10 (78%) are bringing their own AI tools to work. That’s inspired a new acronym: BYOAI, short for Bring Your Own AI.

“2024 is the year AI at work gets real,” the Work Trend report says.

2024 is also a year of real challenges. Like the Gartner survey, the Work Trend report finds that demonstrating AI’s value can be tough.

In the Microsoft/LinkedIn survey, nearly 8 in 10 leaders agreed that adopting AI is critical to staying competitive. Yet nearly 6 in 10 said they worry about quantifying the technology’s productivity gains. About the same percentage also said their organization lacks an AI vision and plan.

The Work Trend report also highlights the mismatch between AI skills demand and supply. Over half the leaders surveyed (55%) say they’re concerned about having enough AI talent. And nearly two-thirds (65%) say they wouldn’t hire someone who lacked AI skills.

Yet fewer than 4 in 10 users (39%) have received AI training from their company. And only 1 in 4 companies plan to offer AI training this year.

The Work Trend report is based on a mix of sources: a survey of 31,000 people in 31 countries; labor and hiring trends on the LinkedIn site; Microsoft 365 productivity signals; and research with Fortune 500 customers.

AI skills: supply-demand mismatch

The mismatch between AI skills supply and demand was also examined recently by market watcher IDC. It expects that by 2026, 9 of every 10 organizations will be hurt by an overall IT skills shortage. This will lead to delays, quality issues and revenue loss that IDC predicts will collectively cost these organizations $5.5 trillion.

To be sure, AI skills are currently the most in-demand skill for most organizations. The good news, IDC finds, is that more than half of organizations are now using or piloting training for GenAI.

“Getting the right people with the right skills into the right roles has never been more difficult,” says IDC researcher Gina Smith. Her prescription for success: Develop a “culture of learning.”

AI helps supply chains, too

Did you know AI is being used to solve supply-chain problems?

It’s a big issue. Over 8 in 10 global businesses (84%) said they’ve experienced supply-chain disruptions in the last year, finds a survey commissioned by Blue Yonder, a vendor of supply-chain solutions.

In response, supply-chain executives are making strategic investments in AI and sustainability, Blue Yonder finds. Nearly 8 in 10 organizations (79%) said they’ve increased their investments in supply-chain operations. Their 2 top areas of investment were sustainability (cited by 48%) and AI (41%).

The survey also identified the top supply-chain areas for AI investment. They are planning (cited by 56% of those investing in AI), transportation (53%) and order management (50%).

In addition, 8 in 10 respondents to the survey said they’ve implemented GenAI in their supply chains at some level. And more than 90% said GenAI has been effective in optimizing their supply chains and related decisions.

The survey, conducted by an independent research firm with sponsorship by Blue Yonder, was fielded in March, with the results released earlier this month. The survey received responses from more than 600 C-suite and senior executives, all of them employed by businesses or government agencies in the United States, UK and Europe.


For Ansys engineering simulations, check out Supermicro's AMD-powered SuperBlade


The Supermicro SuperBlade, powered by AMD EPYC processors, provides exceptional memory bandwidth, floating-point performance, scalability and density for technical computing workloads. That makes it valuable to your customers who use Ansys software to create complex simulations that help solve real-world problems.
 

If you have engineering customers, take note. Supermicro and AMD have partnered with Ansys Inc. to create an advanced HPC platform for engineering simulation software.

The Supermicro SuperBlade, powered by AMD EPYC processors, provides exceptional memory bandwidth, floating-point performance, scalability and density for technical computing workloads.

This makes the Supermicro system especially valuable to your customers who use Ansys software to create complex simulations that help solve real-world problems.

The power of simulation

As you may know, engineers design the objects that make up our daily lives—everything from iPhones to airplane wings. Simulation software from Ansys enables them to do it faster, more efficiently and less expensively, resulting in highly optimized products.

Product development requires careful consideration of physics and material properties. Improperly simulating the impact of natural physics on a theoretical structure could have dramatic, even life-threatening consequences.

How bad could it get? Picture the wheels coming off a new car on the highway.

That’s why it’s so important for engineers to have access to the best simulation software operating on the best-designed hardware.

And that’s what makes the partnership of Supermicro, AMD and Ansys so valuable. The result of this partnership is a software/hardware platform that can run complex structural simulations without sacrificing either quality or efficiency.

Wanted: right tool for the job

Product simulations can lead to vital developments, whether artificial heart valves that save lives or green architectures that battle climate change.

Yet complex simulation software is extremely resource-intensive. Running a simulation on under-equipped hardware can be a frustrating and costly exercise in futility.

Even with modern, well-equipped systems, users of simulation software can encounter a myriad of roadblocks. These are often due to inadequate processor frequency and core density, insufficient memory capacity and bandwidth, and poorly optimized I/O.

Best-of-breed simulation software like Ansys Fluent, Mechanical, CFX, and LS-DYNA demands a cutting-edge turnkey hardware solution that can keep up, no matter what.

That’s one super blade

In the case of Supermicro’s SuperBlade, that solution leverages some of the world’s most advanced computing tech to ensure stability and efficiency.

The SuperBlade’s 8U enclosure can be equipped with up to 20 compute blades. Each blade may contain up to 2TB of DDR4 memory, two hot-swap drives, AMD Instinct accelerators and 3rd gen AMD EPYC 7003 processors.

The AMD processors include up to 64 cores and 768 MB of L3 cache. All told, the SuperBlade enclosure can contain a total of 1,280 CPU cores.

Optimized I/O comes in the form of 1G, 10G, 25G or 100G Ethernet or 200G InfiniBand. And each node can house up to 2 additional low-profile PCIe 4.0 x16 expansion cards.

The modular design of SuperBlade enables Ansys users to run simultaneous jobs on multiple nodes in parallel. The system is so flexible, users can assign any number of jobs to any set of nodes.

As an added benefit, different blades can be used in the same chassis. This allows workloads to be assigned to wherever the maximum performance can be achieved.

For instance, a user could launch one 4-node parallel job and, at the same time, two 8-node parallel jobs on the remaining 16 nodes. Alternatively, an engineer could run five 4-node parallel jobs, or ten 2-node parallel jobs, across all 20 nodes.
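A toy scheduler makes that flexibility easy to picture. The Python sketch below is illustrative only; it is not Supermicro’s management software, and the job sizes are simply the examples from the paragraph above:

def assign_jobs(total_nodes, job_sizes):
    # Greedily assign each job a contiguous block of free nodes.
    if sum(job_sizes) > total_nodes:
        raise ValueError("not enough nodes for the requested jobs")
    assignments, next_free = {}, 0
    for i, size in enumerate(job_sizes):
        assignments[f"job{i}"] = list(range(next_free, next_free + size))
        next_free += size
    return assignments

print(assign_jobs(20, [4, 8, 8]))        # one 4-node job, two 8-node jobs
print(assign_jobs(20, [4, 4, 4, 4, 4]))  # five 4-node jobs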

The bottom line

Modern business leaders must act as both engineers and accountants. With a foot planted firmly on either side, they balance the limitless possibilities of design with the limited cash flow at their discretion.

The Supermicro SuperBlade helps make that job a little easier. Supermicro, AMD and Ansys have devised a way to give your engineering customers the tools they need, yet still optimize data-center footprint, power requirements and cooling systems.

The result is a lower total cost of ownership (TCO), and with absolutely no compromise in quality.


Tech Explainer: How does design simulation work? Part 2


Cutting-edge technology powers the virtual design process.


The market for simulation software is hot, growing at a compound annual growth rate (CAGR) of 13.2%, according to MarketsandMarkets. The research firm predicts that the global market for simulation software, worth an estimated $18.1 billion this year, will rise to $33.5 billion by 2027.

No surprise, then, that tech titans AMD and Supermicro would design an advanced hardware platform to meet the demands of this burgeoning software market.

AMD and Supermicro have teamed up with Ansys Inc., a U.S.-based designer of engineering simulation software. One result of this three-way collaboration is the Supermicro SuperBlade.

Shanthi Adloori, senior director of product management at Supermicro, calls the SuperBlade “one of the fastest simulation-in-a-box solutions.”

Adloori adds: “With a high core count, large memory capacity and faster memory bandwidth, you can reduce the time it takes to complete a simulation.”

One very super blade

Adloori isn’t overstating the case.

Supermicro’s SuperBlade can house up to 20 hot-swappable nodes in its 8U chassis. Each of those blades can be equipped with AMD EPYC CPUs and AMD Instinct GPUs. In fact, SuperBlade is the only platform of its kind designed to support both GPU and non-GPU nodes in the same enclosure.

Supermicro SuperBlade’s other tech specs may be less glamorous, but they’re no less impressive. When it comes to memory, each blade can address up to either 8TB or 16TB of DDR5-4800 memory, depending on the configuration.

Each node can also house two NVMe/SAS/SATA drives, while the chassis can be fitted with as many as eight 3000W Titanium Level power supplies.

Because networking is an essential element of enterprise-grade design simulation, SuperBlade includes redundant 25Gb/10Gb/1Gb Ethernet switches and up to 200Gbps/100Gbps InfiniBand networking for HPC applications.

For smaller operations, the Supermicro SuperBlade is also available in smaller configurations, including 6U and 4U. These versions pack fewer nodes, which ultimately means they’re able to bring less power to bear. But, hey, not every design team makes passenger jets for a living.

It’s all about the silicon

If Supermicro’s SuperBlade is the tractor-trailer of design simulation technology, then AMD CPUs and GPUs are the engines under the hood.

The differing designs of these chips lend themselves to specific core competencies. CPUs can focus tremendous power on a few tasks at a time. Sure, they can multitask. But there’s a limit to how many simultaneous operations they can address.

AMD bills its EPYC 7003 Series CPUs as the world’s highest-performing server processors for technical computing. The addition of AMD 3D V-Cache technology delivers an expanded L3 cache to help accelerate simulations.

GPUs, on the other hand, shine when a simulation calls for many operations to be performed simultaneously. The AMD Instinct MI250X accelerator contains 220 compute units with 14,080 stream processors.

Instead of throwing a ton of processing power at a small number of operations, the AMD Instinct can address thousands of less resource-intensive operations simultaneously. It’s that capability that makes GPUs ideal for HPC and AI-enabled operations, an increasingly essential element of modern design simulation.
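The difference is easy to feel even on a laptop. In this illustrative Python sketch, a one-at-a-time loop stands in for serial, CPU-style execution, while NumPy’s vectorized math stands in for the massively parallel, GPU-style approach:

import time
import numpy as np

values = np.random.random(1_000_000)

t0 = time.perf_counter()
total = 0.0
for v in values:                  # serial: one multiply-add at a time
    total += v * v
t1 = time.perf_counter()

vec_total = float((values * values).sum())   # parallel-style: all at once
t2 = time.perf_counter()

print(f"loop: {t1 - t0:.3f}s   vectorized: {t2 - t1:.4f}s")
print(f"results match: {np.isclose(total, vec_total)}")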

The future of design simulation

The development of advanced hardware like SuperBlade and the AMD CPUs and GPUs that power it will continue to progress as more organizations adopt design simulation as their go-to product development platform.

That progression will continue to manifest in global companies like Boeing and Volkswagen. But it will also find its way into small startups and single users.

Also, as the required hardware becomes more accessible, simulation software should become more efficient.

This confluence of market trends could empower millions of independent designers with the ability to perform complex design, testing and validation functions.

The result could be nothing short of a design revolution.

Part 1 of this two-part Tech Explainer explores the many ways design simulation is used to create new products, from tiny heart valves to massive passenger aircraft. Read Part 1 now.


Tech Explainer: How does design simulation work? Part 1


Design simulation lets designers and engineers create, test and improve designs of real-world airplanes, cars, medical devices and more while working safely and quickly in virtual environments. This workflow also reduces the need for physical tests and allows designers to investigate more alternatives and optimize their products.


Design simulation is a type of computer-aided engineering used to create new products, reducing the need for physical prototypes. The result is a faster, more efficient design process in which complex physics and math do much of the heavy lifting.

Rapid advances in the CPUs and GPUs used to run simulations, along with the software itself, have made it possible to shift product design from the physical world to a virtual one.

In this virtual space, engineers can create and test new designs as quickly as their servers can calculate the results and then render them with visualization software.

Getting better all the time

Designing via AI-powered virtual simulation offers significant improvements over older methods.

Back in the day, it might have taken a small army of automotive engineers years to produce a single new model. Prototypes were often sculpted from clay and carted into a wind tunnel to test aerodynamics.

Each new model went through a seemingly endless series of time-consuming physical simulations. The feedback from those tests would literally send designers back to the drawing board.

It was an arduous and expensive process. And the resources necessary to accomplish these feats of engineering often came at the expense of competition. Companies whose pockets weren’t deep enough might fail to keep up.

Fast-forward to the present. Now, we’ve got smaller design teams aided by increasingly powerful clusters of high-performance systems.

These engineers can tweak a car’s crumple zone in the morning … run the new version through a virtual crash test while eating lunch … and send revised instructions to the design team before day’s end.

Changing designs, saving lives

Faster access to this year’s Ford Mustang is one thing. But if you really want to know how design simulation is changing the world, talk to someone whose life was saved by a mechanical heart valve.

Using the latest tech, designers can simulate new prosthetics in relation to the physiology they’ll inhabit. Many factors come into play here, including size, shape, materials, fluid dynamics, failure models and structural integrity over time.

What’s more, it’s far better to theorize how a part will interact with the human body before the doctor implants it. Simulations can warn medical pros about potential infections, rejections and physical mismatches. AI can play a big part in these types of simulations and manufacturing.

Sure, perfection may be unattainable. But the closer doctors get to a perfect match between a prosthetic and its host body, the better the patient will fare after the procedure.

Making the business case

Every business wants to cut costs, increase efficiency and get an edge over the competition. Here, too, design simulation offers a variety of ways to achieve those lofty goals.

As mentioned above, simulation can drastically reduce the need for expensive physical prototypes. Creating and testing a new airplane design virtually means not having to come within 100 miles of a runway until the first physical prototype is ready to take flight. 

The aerospace and automotive industries rely heavily not only on the structural integrity of an assembly but also on computational fluid dynamics. In this way, simulation can potentially save an aerospace company billions of dollars over the long run.

What’s more, virtual airplanes don’t crash. They can’t be struck by lightning. And in a virtual passenger jet, test pilots don’t need to worry about their safety.

By the time a new aircraft design rolls onto the tarmac, it’s already been proven air-worthy—at least to the extent that a virtual simulation can make those kinds of guarantees.

Greater efficiency

Simulation makes every aspect of design more efficient. For instance, iteration, a vital element of the design process, becomes infinitely more manageable in a simulated environment.

Want to find out how a convertible top will affect your new supercar’s 0-to-60 time? Simulation allows engineers to quickly replace the hard-top with some virtual canvas and then create a virtual drag race against the original model.

Simulation can take a product to the manufacturing phase, too. Once a design is finished, engineers can simulate its journey through a factory environment.

This virtual factory, or digital twin, can help determine how long it will take to build a product and how it will react to various materials and environmental conditions. It can even determine how many moves a robot arm will need to make and when human intervention might become necessary. This process helps engineers optimize the manufacturing process.

In countless ways, simulation has never been more real.

In Part 2 of this 2-part blog, we’ll explore the digital technology behind design simulation. This cutting-edge technology is made possible by the latest silicon, vast swaths of high-speed storage, and sophisticated blade servers that bring it all together.


Tech Explainer: What’s the difference between Machine Learning and Deep Learning? Part 2


In Part 1 of this 2-part Tech Explainer, we explored the difference between how machine learning and deep learning models are trained and deployed. Now, in Part 2, we’ll get deeper into deep learning to discover how this advanced form of AI is changing the way we work, learn and create.


Where Machine Learning is designed to reduce the need for human intervention, Deep Learning—an extension of ML—removes much of the human element altogether.

If ML were a driver-assistance feature that helped you parallel park and avoid collisions, DL would be an autonomous, self-driving car.

The human intervention we’re talking about has much to do with categorizing and labeling the data used by ML models. Producing this structured data is both time-consuming and expensive.

DL shortens the time and lowers the cost by learning from unstructured data. This eliminates much of the data pre-processing performed by humans for ML.

That’s good news for modern businesses. Market watcher IDC estimates that as much as 90% of corporate data is unstructured.

DL is particularly good at processing unstructured data. That includes information coming from the edge, the core and millions of both personal and IoT devices.

Like a brain, but digital

Deep Learning systems “think” with a neural network—multiple layers of interconnected nodes designed to mimic the way the human brain works. A DL system processes data inputs in an attempt to recognize, classify and accurately describe objects within data.

The layers of a neural network are stacked vertically. Each layer builds on the work performed by the one below it. By pushing data through each successive layer, the overall system improves its predictions and categorizations.

For instance, imagine you’ve tasked a DL system to identify pictures of junk food. The system would quickly learn—on its own—how to differentiate Pringles from Doritos.

It might do this by learning to recognize Pringles’ iconic tubular packaging. Then the system would categorize Pringles differently than the family-size sack of Doritos.

What if you fed this hypothetical DL system with more pictures of chips? Then it could begin to identify varying angles of packaging, as well as colors, logos, shapes and granular aspects of the chips themselves.

As this example illustrates, the longer a DL system operates, the more intelligent and accurate it becomes.
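For readers who want to see those stacked layers in code, here’s a minimal two-layer forward pass in NumPy. The “chips” framing and every number are illustrative; a real classifier would be trained on labeled images rather than using random weights:

import numpy as np

rng = np.random.default_rng(0)

x = rng.random(64)                                     # a tiny flattened "image"
w1, b1 = rng.standard_normal((32, 64)), np.zeros(32)   # layer 1 weights
w2, b2 = rng.standard_normal((2, 32)), np.zeros(2)     # layer 2: 2 classes

h = np.maximum(0, w1 @ x + b1)                # hidden layer with ReLU
logits = w2 @ h + b2
probs = np.exp(logits - logits.max())         # numerically stable softmax
probs /= probs.sum()

print(dict(zip(["pringles", "doritos"], probs.round(3))))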

Things we used to do

DL tends to be deployed when it’s time to pull out the big guns. This isn’t tech you throw at a mere spam filter or recommendation engine.

Instead, it’s the tech that powers the world’s finance, biomedical advances and law enforcement. For these verticals, failure is simply not an option.

For these verticals, here are some of the ways DL operates behind the scenes:

  • BioMed: DL helps healthcare staff analyze medical imaging such as X-rays and CT scans. In many cases, the technology is more accurate than well-trained physicians with decades of experience.
  • Finance: For those seeking a market edge (read: everyone), DL employs powerful, algorithmic-based predictive analytics. This helps modern-day robber barons manage their portfolios based on insights from data so vast, they couldn’t leverage it themselves. DL also helps financial institutions assess loans, detect fraud and manage credit.
  • Law Enforcement: In the 2002 movie “Minority Report,” Tom Cruise played a police officer who could arrest people before they committed a crime. With DL, this fiction could turn into an unsettling reality. DL can be used to analyze millions of data points, then predict who is most likely to break the law. It might even give authorities an idea of where, when and how it could happen.

The future…?

Looking into a crystal ball—which these days probably uses DL—we can see a long succession of similar technologies coming. Just as ML begat DL, so too will DL beget the next form of AI—and the one after that.

The future of DL isn’t a question of if, but when. Clearly, DL will be used to advance a growing number of industries. But just when each sector will come to be ruled by our new smarty-pants robots is less clear.

Keep in mind: Even as you read this, DL systems are working tirelessly to help data scientists make AI more accurate and able to provide more useful assessments of datasets for specific outcomes. And as the science progresses, neural networks will continue to become more complex—and more like human brains.

That means the next generation of DL will likely be far more capable than the current one. Future AI systems could figure out how to reverse the aging process, map distant galaxies, even produce bespoke food based on biometric feedback from hungry diners.

For example, the upcoming AMD Instinct MI300 accelerators promise to usher in a new era of computing capabilities. That includes the ability to handle large language models (LLMs), the key approach behind generative AI systems such as ChatGPT.

Yes, the robots are here, and they want to feed you custom Pringles. Bon appétit!

 
