Tech Explainer: What’s a NIC? And how can it empower AI?

With the acceleration of AI, the network interface card is playing a new, leading role.


The humble network interface card (NIC) is getting a status boost from AI.

At a fundamental level, the NIC enables one computing device to communicate with others across a network. That network could be a rendering farm run by a small multimedia production house, an enterprise-level data center, or a global network like the internet.

From smartphones to supercomputers, most modern devices use a NIC for this purpose. On laptops, phones and other mobile devices, the NIC typically connects via a wireless antenna. For servers in enterprise data centers, it’s more common to connect the hardware infrastructure with Ethernet cables.

Each NIC—or NIC port, in the case of an enterprise NIC—has its own media access control (MAC) address. This unique identifier lets the NIC send and receive the packets addressed to it. Each packet, in turn, is a small chunk of a much larger data set; breaking data into packets is what lets it travel across the network at high speed.
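
For a concrete feel for what a MAC address looks like, here’s a minimal Python sketch that reads the host’s own hardware address using only the standard library. The formatting helper is our own, and the address printed will naturally depend on the machine.

```python
# Minimal sketch: read and format this machine's MAC address.
# uuid.getnode() returns the 48-bit hardware address as an integer
# (or a random stand-in if no real address can be determined).
import uuid

def mac_address() -> str:
    node = uuid.getnode()                       # e.g. 0x3c22fb1a2b3c
    octets = node.to_bytes(6, byteorder="big")  # six bytes, most significant first
    return ":".join(f"{b:02x}" for b in octets)

print(mac_address())   # something like "3c:22:fb:1a:2b:3c"
```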

Networking for the Enterprise

At the enterprise level, everything needs to be highly capable and powerful, and the NIC is no exception. Organizations operating full-scale data centers rely on NICs to do far more than just send emails and sniff packets (loosely, the way a NIC “watches” a data stream and collects only the frames addressed to its MAC address).

Today’s NICs are also designed to handle complex networking tasks onboard, relieving the host CPU so it can work more efficiently. This process, known as smart offloading, relies on several functions (a quick way to check which of them a NIC exposes is sketched after this list):

  • TCP segmentation offloading: This breaks big data into small packets.
  • Checksum offloading: Here, the NIC independently checks for errors in the data.
  • Receive side scaling: This helps balance network traffic across multiple processor cores, preventing them from getting bogged down.
  • Remote Direct Memory Access (RDMA): This bypasses the host CPU and writes data directly into a remote system’s memory, including GPU memory in GPU-direct configurations.
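
To see which of these offloads a given NIC actually exposes, Linux administrators typically query the driver with the ethtool utility. Below is a rough Python sketch that wraps that command; it assumes a Linux host with ethtool installed, and the interface name “eth0” is only a placeholder.

```python
# Rough sketch: list a NIC's offload settings on Linux by parsing `ethtool -k`.
# Assumes Linux with ethtool installed; "eth0" is a placeholder interface name.
import subprocess

def offload_features(interface: str = "eth0") -> dict:
    """Return a mapping of offload feature name -> state ('on'/'off') from ethtool."""
    output = subprocess.run(
        ["ethtool", "-k", interface], capture_output=True, text=True, check=True
    ).stdout
    features = {}
    for line in output.splitlines()[1:]:        # the first line is just a header
        if ":" in line:
            name, _, state = line.partition(":")
            state = state.split()
            features[name.strip()] = state[0] if state else ""
    return features

if __name__ == "__main__":
    for name, state in sorted(offload_features().items()):
        # entries such as tcp-segmentation-offload or rx-checksumming, depending on the driver
        print(f"{name}: {state}")
```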

Important as these capabilities are, they become even more vital when dealing with AI and machine learning (ML) workloads. By taking pressure off the CPU, modern NICs enable the rest of the system to focus on running these advanced applications and processing their scads of data.

This symbiotic relationship also helps lower a server’s operating temperature and reduce its power usage. The NIC does this by increasing efficiency throughout the system, especially when it comes to the CPU.

Enter the AI NIC

Countless organizations both big and small are clamoring to stake their claims in the AI era. Some are creating entirely new AI and ML applications; others are using the latest AI tools to develop new products that better serve their customers.

Either way, these organizations must deal with the challenges now facing traditional Ethernet networks in AI clusters. Remember, Ethernet was invented over 50 years ago.

AMD has a solution: a revolutionary NIC created specifically for AI workloads, the AMD AI NIC. Recently released, this card is designed to provide the intense communication capabilities demanded by AI and ML models. That includes tightly coupled parallel processing, rapid data transfers and low-latency communications.

AMD says its AI NIC offers a significant advancement in addressing the issues IT managers face as they attempt to reconcile the broad compatibility of an aging network technology with modern AI workloads. It’s a specialized network accelerator explicitly designed to optimize data transfer within back-end AI networks for GPU-to-GPU communication.

To address the challenges of AI workloads, what’s needed is a network that can support distributed computing over multiple GPU nodes with low jitter and RDMA. The AMD AI NIC is designed to manage the unique communication patterns of AI workloads and offer high throughput across all available links. It also offers congestion avoidance, reduced tail latency, scalable performance, and fast job-completion times.
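
One of those terms, tail latency, is worth a quick illustration. The sketch below (plain Python with NumPy and synthetic numbers) shows how the p99 and p99.9 of a latency distribution differ from the average; on an AI fabric, it is exactly this tail that congestion and jitter inflate.

```python
# Illustration of "tail latency" using a synthetic latency distribution.
import numpy as np

rng = np.random.default_rng(0)
latencies_us = rng.gamma(shape=2.0, scale=5.0, size=100_000)  # made-up transfer times, in microseconds

print(f"mean: {latencies_us.mean():.1f} us")
for pct in (50, 99, 99.9):
    print(f"p{pct}: {np.percentile(latencies_us, pct):.1f} us")
# In a tightly coupled training job, the slowest transfers gate each synchronization
# step, so trimming p99/p99.9 matters more than improving the average.
```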

Validated NIC

Following rigorous validation by the engineers at Supermicro, the AMD AI NIC is now supported on the Supermicro 8U GPU Server (AS-8126GS-TNMR). This behemoth is designed specifically for AI, deep learning, high-performance computing (HPC), industrial automation, retail and climate modeling.

In this configuration, AMD’s smart AI-focused NIC can offload networking tasks. This lets the Supermicro SuperServer’s dual AMD EPYC 9000-series processors run at even higher efficiency.

In the Supermicro server, the new AMD AI NIC occupies one of the myriad PCI Express x16 slots. Other optional high-performance PCIe cards include a CPU-to-GPU interconnect and up to eight AMD Instinct GPU accelerators.

In the NIC of time

A chain is only as strong as its weakest link. The chain that connects our ever-expanding global network of AI operations is strengthened by the advent of NICs focused on AI.

As NICs grow more powerful, these advanced network interface cards will help fuel the expansion of the AI/ML applications that power our homes, offices, and everything in between. They’ll also help us bypass communication bottlenecks and speed time to market.

For SMBs and enterprises alike, that’s good news indeed.


Oil & gas spotlight: Fueling up with AI


AI is helping industry players that include BP, Chevron and Shell automate a wide range of important use cases. To serve them, AMD and Supermicro offer powerful accelerators and servers.


What’s artificial intelligence good for? For managers in the oil and gas industry, quite a lot.

Industry players that include Shell, BP, ExxonMobil and Chevron are already using machine learning and AI. Use cases include predictive maintenance, seismic data analysis, reservoir management and safety monitoring, says a recent report by Chirag Bharadwaj of consultants Appinventiv.

AI’s potential benefits for oil and gas companies are substantial. Anurag Jain of AI consultants Oyelabs cites estimates of AI lowering oil production costs by up to $5 a barrel with a 25% productivity gain, and increasing oil reserves by as much as 20% with enhanced resource recovery.

Along the same lines is a recent report from market watcher Global Growth Insights. It says adoption of AI in North American oil shale drilling has increased production efficiency by an impressive 20%.

All this has led Jain of Oyelabs to expect a big increase in the oil and gas industry’s AI spend. He predicts the industry’s worldwide spending on AI will rise from $3 billion last year to nearly $5.3 billion in 2028.

Assuming Jain is right, that would put the oil and gas industry’s AI spend at about 15% of its total IT spend. Last year, the industry spent nearly $20 billion on all IT goods and services worldwide, says Global Growth Insights.

Powerful Solutions

All this AI activity in the oil and gas industry hasn’t escaped the notice of AMD and Supermicro. They’re on the case.

AMD is offering the industry its AMD Instinct MI300A, an accelerator that combines CPU cores and GPUs to fuel the convergence of high-performance computing (HPC) with AI. And Supermicro is offering rackmount servers driven by this AMD accelerator.

Here are some of the benefits the two companies are offering oil and gas companies:

  • An APU multi-chip architecture that combines CPU and GPU dies with high-bandwidth memory in a single package, enabling dense compute.
  • Up to 2.6x the HPC performance/watt vs. the older AMD Instinct MI250X.
  • Up to 5.1x the AI-training workload performance with INT8 vs. the AMD Instinct MI250X. (INT8 is a fixed-point representation using 8 bits; a small quantization sketch follows this list.)
  • Up to 128GB of unified HBM3 memory dedicated to GPUs. (HBM3 is a high-bandwidth memory chip technology that offers increased bandwidth, memory capacity and power efficiency, all in a smaller form factor.)
  • Double-precision (FP64) matrix performance of up to 122.6 TFLOPS for HPC. (FP64 is a double-precision floating-point format using 64 bits in memory.)
  • Complete, pre-validated solutions that are ready for rack-scale deployment on day one. These offer the choice of either 2U (liquid cooled) or 4U (air cooled) form factors.
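
As a rough illustration of what the INT8 figure refers to, here’s a small Python sketch (using NumPy) that quantizes 32-bit floating-point weights down to 8-bit integers with a single scale factor. Real AI frameworks use per-channel scales, calibration and fused kernels, so treat this as a simplified model of the idea, not production code.

```python
# Simplified sketch of INT8 quantization: map FP32 weights onto 8-bit integers.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map float32 values onto the INT8 range [-127, 127] with one global scale."""
    scale = float(np.abs(x).max()) / 127.0 or 1.0   # avoid a zero scale for all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
print("max quantization error:", np.abs(weights - dequantize(q, scale)).max())
# INT8 storage is 4x smaller than FP32, and the integer math maps onto an
# accelerator's high-throughput INT8 units, which is what headline INT8 figures measure.
```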
     

If you have customers in oil and gas looking to get into AI, tell them about these Supermicro and AMD solutions.


What you need to know about high-performance storage for media & entertainment


To store, process and share their terabytes of data, media and entertainment content creators need more than your usual storage.


Maintaining fast, efficient and reliable data storage in the age of modern media and entertainment is an increasingly difficult challenge.

Content creators ranging from independent filmmakers to major studios like Netflix and Amazon are churning out enormous amounts of TV shows, movies, video games, and augmented and virtual reality (AR/VR) experiences. Each piece of content must be stored in a way that ensures it’s easy to access, ready to share and fast enough to stream.

This becomes a monumental task when you’re dealing with petabytes of high-resolution footage and graphics. Operating at that scale can overwhelm even the most seasoned professionals.

Those pros must also ensure they have both primary and secondary storage. Primary storage is designed to deliver rapid data retrieval speeds. Secondary storage, on the other hand, provides slower access times and is used for long-term storage.

Seemingly Insurmountable Odds

For media and entertainment production companies, the goal is always the same: speed production and cut costs. That’s why fast, efficient and reliable data storage solutions have become a vital necessity for those who want to survive and thrive in the modern age of media and entertainment.

The amount of data created in a single media project can be staggering.

Each new project uses one or more cameras producing footage with a resolution as high as 8K. And content captured at 8K has 16 times more pixels per frame than full HD (1080p) video. That translates to around 1 terabyte of data for every 1.5 to 2 hours of footage.

For large-scale productions, shooting can continue for weeks, even months. At roughly a terabyte for every 2 hours of shooting, that footage quickly adds up, creating a major data-storage headache.
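
To put those numbers in perspective, here’s a back-of-the-envelope Python calculation based on the rough ratio above (about 1 TB per 2 hours of 8K footage). The camera count, hours per day and shooting schedule are invented example values.

```python
# Back-of-the-envelope storage estimate for an 8K shoot.
# Rough assumption from above: ~1 TB per 2 hours of footage, per camera.
TB_PER_HOUR = 1 / 2

def shoot_storage_tb(cameras: int, hours_per_day: float, shooting_days: int) -> float:
    """Raw footage, in terabytes, for a multi-camera shoot."""
    return cameras * hours_per_day * shooting_days * TB_PER_HOUR

# Example: 3 cameras, 8 hours a day, 30 shooting days
print(f"{shoot_storage_tb(3, 8, 30):.0f} TB of raw footage")   # -> 360 TB, before VFX, AR/VR or backups
```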

But wait, there’s more: Your customer’s projects may also include both AR and VR data. High-quality AR/VR can contain hundreds of effects, textures and 3D models, producing data that measures not just in terabytes but petabytes.

Further complicating matters, AR/VR data often requires real-time processing, low-latency transfer and multiuser access.

Deploying AI adds yet another dimension. Generative AI (GenAI) now has the ability to create stunning additions to any multimedia project. These may include animated backgrounds, special effects and even virtual actors.

However, AI accounts for some of the most resource-intensive workloads in the world. To meet these stringent demands, not just any storage solution will do.

Extreme Performance Required

For media and entertainment content creators, choosing the right storage solution can be a make-or-break decision. Production companies generating data at the highest rates must opt for something like the Supermicro H13 Petascale storage server.

The H13 Petascale storage server boasts extreme performance for data-intensive applications. For major content producers, that means high-resolution media editing, AR and VR creation, special effects and the like.

The H13 Petascale storage server is also designed to handle some of the tech industry’s most demanding workloads. These include AI and machine learning (ML) applications, geophysical modeling and big data.

Supermicro’s H13 Petascale storage server delivers up to 480 terabytes of high-performance storage via 16 hot-swap all-flash drives. The system is built around NVMe drives in the Enterprise and Datacenter Standard Form Factor (EDSFF) E3 format to provide high-capacity scaling. The 2U Petascale version has double the storage bays and capacity.

Operating on the EDSFF standard also offers better performance with PCIe 5 connectivity and improved thermal efficiency.

Under the hood of this storage beast is a 4th generation AMD EPYC processor with up to 128 cores and 6TB of DDR5 memory. Combined with 128 lanes of PCIe 5 bandwidth, H13 delivers more than 200GB/sec. of bandwidth and more than 25 million input/output operations per second (IOPS).
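
Those headline figures are easier to appreciate when split across the 16 drives. Here’s a quick sanity-check calculation; the per-lane PCIe 5.0 number is an approximate theoretical maximum, and real drive performance varies by model and workload.

```python
# Rough sanity check of the H13 Petascale figures cited above.
drives = 16
total_bandwidth_gb_s = 200       # "more than 200GB/sec"
total_iops = 25_000_000          # "more than 25 million IOPS"

per_drive_gb_s = total_bandwidth_gb_s / drives
per_drive_iops = total_iops / drives

# PCIe 5.0 moves roughly 4 GB/s per lane, so an x4 NVMe drive tops out near ~16 GB/s.
print(f"~{per_drive_gb_s:.1f} GB/s and ~{per_drive_iops / 1e6:.2f}M IOPS per drive")
# -> ~12.5 GB/s and ~1.56M IOPS per drive, within reach of current PCIe 5.0 NVMe drives.
```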

Data’s Golden Age

Storing, sending and streaming massive amounts of data will continue to be a challenge for the media and entertainment industry.

Emerging formats will push the boundaries of resolution. New computer-aided graphics systems will become the industry standard. And consumers will continue to demand fully immersive AR and VR experiences.

Each of these evolutions will produce more and more data, forcing content creators to search for faster and more cost-effective storage methods.

Note: The media and entertainment industry will be the focus of a special session at the upcoming Supermicro Open Storage Summit ‘24, streaming live from Aug. 13 to Aug. 29. The M&E session, scheduled for Aug. 14 at 10 a.m. PDT / 1 p.m. EDT, will focus on AI and the future of media storage workflows. The speakers will represent Supermicro, AMD, Quantum and Western Digital. Learn more and register now to attend the 2024 Supermicro Open Storage Summit.


Research Roundup: AI edition


Catch up on the latest research and analysis around artificial intelligence.


Generative AI is the No. 1 AI solution being deployed. Three in 4 knowledge workers are already using AI. The supply of workers with AI skills can’t meet the demand. And supply chains can be helped by AI, too.

Here’s your roundup of the latest in AI research and analysis.

GenAI is No. 1

Generative AI isn’t just a good idea, it’s now the No. 1 type of AI solution being deployed.

In a survey recently conducted by research and analysis firm Gartner, more than a quarter of respondents (29%) said they’ve deployed and are now using GenAI.

That was a higher percentage than any other type of AI in the survey, including natural language processing, machine learning and rule-based systems.

The most common way of using GenAI, the survey found, is embedding it in existing applications, such as using Copilot in Microsoft 365. This approach was cited by about 1 in 3 respondents (34%).

Other approaches mentioned by respondents included prompt engineering (cited by 25%), fine-tuning (21%) and using standalone tools such as ChatGPT (19%).

Yet respondents said only about half of their AI projects (48%) make it into production. Even when that happens, it’s slow. Moving an AI project from prototype to production took respondents an average of 8 months.

Other challenges loom, too. Nearly half the respondents (49%) said it’s difficult to estimate and demonstrate an AI project’s value. They also cited a lack of talent and skills (42%), lack of confidence in AI technology (40%) and lack of data (39%).

Gartner conducted the survey in last year’s fourth quarter and released the results earlier this month. In all, valid responses were culled from 644 executives working for organizations in the United States, the UK and Germany.

AI ‘gets real’ at work

Three in 4 knowledge workers (75%) now use AI at work, according to the 2024 Work Trend Index, a joint project of Microsoft and LinkedIn.

Among these users, nearly 8 in 10 (78%) are bringing their own AI tools to work. That’s inspired a new acronym: BYOAI, short for Bring Your Own AI.

“2024 is the year AI at work gets real,” the Work Trend report says.

2024 is also a year of real challenges. Like the Gartner survey, the Work Trend report finds that demonstrating AI’s value can be tough.

In the Microsoft/LinkedIn survey, nearly 8 in 10 leaders agreed that adopting AI is critical to staying competitive. Yet nearly 6 in 10 said they worry about quantifying the technology’s productivity gains. About the same percentage also said their organization lacks an AI vision and plan.

The Work Trend report also highlights the mismatch between AI skills demand and supply. Over half the leaders surveyed (55%) say they’re concerned about having enough AI talent. And nearly two-thirds (65%) say they wouldn’t hire someone who lacked AI skills.

Yet fewer than 4 in 10 users (39%) have received AI training from their company. And only 1 in 4 companies plan to offer AI training this year.

The Work Trend report is based on a mix of sources: a survey of 31,000 people in 31 countries; labor and hiring trends on the LinkedIn site; Microsoft 365 productivity signals; and research with Fortune 500 customers.

AI skills: supply-demand mismatch

The mismatch between AI skills supply and demand was also examined recently by market watcher IDC. It expects that by 2026, 9 of every 10 organizations will be hurt by an overall IT skills shortage. This will lead to delays, quality issues and revenue loss that IDC predicts will collectively cost these organizations $5.5 trillion.

To be sure, AI skills are currently the most in demand at most organizations. The good news, IDC finds, is that more than half of organizations are now using or piloting training for GenAI.

“Getting the right people with the right skills into the right roles has never been more difficult,” says IDC researcher Gina Smith. Her prescription for success: Develop a “culture of learning.”

AI helps supply chains, too

Did you know AI is being used to solve supply-chain problems?

It’s a big issue. Over 8 in 10 global businesses (84%) said they’ve experienced supply-chain disruptions in the last year, finds a survey commissioned by Blue Yonder, a vendor of supply-chain solutions.

In response, supply-chain executives are making strategic investments in AI and sustainability, Blue Yonder finds. Nearly 8 in 10 organizations (79%) said they’ve increased their investments in supply-chain operations. Their 2 top areas of investment were sustainability (cited by 48%) and AI (41%).

The survey also identified the top supply-chain areas for AI investment. They are planning (cited by 56% of those investing in AI), transportation (53%) and order management (50%).

In addition, 8 in 10 respondents to the survey said they’ve implemented GenAI in their supply chains at some level. And more than 90% said GenAI has been effective in optimizing their supply chains and related decisions.

The survey, conducted by an independent research firm with sponsorship by Blue Yonder, was fielded in March, with the results released earlier this month. The survey received responses from more than 600 C-suite and senior executives, all of them employed by businesses or government agencies in the United States, UK and Europe.


For Ansys engineering simulations, check out Supermicro's AMD-powered SuperBlade


The Supermicro SuperBlade powered by AMD EPYC processors provides exceptional memory bandwidth, floating-point performance, scalability and density for technical computing workloads. They're valuable to your customers who use Ansys software to create complex simulations that help solve real-world problems. 
 

If you have engineering customers, take note. Supermicro and AMD have partnered with Ansys Inc. to create an advanced HPC platform for engineering simulation software.

The Supermicro SuperBlade, powered by AMD EPYC processors, provides exceptional memory bandwidth, floating-point performance, scalability and density for technical computing workloads.

This makes the Supermicro system especially valuable to your customers who use Ansys software to create complex simulations that help solve real-world problems.

The power of simulation

As you may know, engineers design the objects that make up our daily lives—everything from iPhones to airplane wings. Simulation software from Ansys enables them to do it faster, more efficiently and less expensively, resulting in highly optimized products.

Product development requires careful consideration of physics and material properties. Improperly simulating the impact of natural physics on a theoretical structure could have dramatic, even life-threatening consequences.

How bad could it get? Picture the wheels coming off a new car on the highway.

That’s why it’s so important for engineers to have access to the best simulation software operating on the best-designed hardware.

And that’s what makes the partnership of Supermicro, AMD and Ansys so valuable. The result is a software/hardware platform that can run complex structural simulations without sacrificing either quality or efficiency.

Wanted: right tool for the job

Product simulations can lead to vital developments, whether artificial heart valves that save lives or green architectures that battle climate change.

Yet complex simulation software is extremely resource-intensive. Running a simulation on under-equipped hardware can be a frustrating and costly exercise in futility.

Even with modern, well-equipped systems, users of simulation software can encounter a myriad of roadblocks. These are often due to inadequate processor frequency and core density, insufficient memory capacity and bandwidth, and poorly optimized I/O.

Best-of-breed simulation software like Ansys Fluent, Mechanical, CFX, and LS-DYNA demands a cutting-edge turnkey hardware solution that can keep up, no matter what.

That’s one super blade

In the case of Supermicro’s SuperBlade, that solution leverages some of the world’s most advanced computing tech to ensure stability and efficiency.

The SuperBlade’s 8U enclosure can be equipped with up to 20 compute blades. Each blade may contain up to 2TB of DDR4 memory, two hot-swap drives, AMD Instinct accelerators and 3rd gen AMD EPYC 7003 processors.

The AMD processors include up to 64 cores and 768 MB of L3 cache. All told, the SuperBlade enclosure can contain a total of 1,280 CPU cores.

Optimized I/O comes in the form of 1G, 10G, 25G or 100G Ethernet or 200G InfiniBand. And each node can house up to 2 additional low-profile PCIe 4.0 x16 expansion cards.

The modular design of SuperBlade enables Ansys users to run simultaneous jobs on multiple nodes in parallel. The system is so flexible, users can assign any number of jobs to any set of nodes.

As an added benefit, different blades can be used in the same chassis. This allows workloads to be assigned to wherever the maximum performance can be achieved.

For instance, a user could launch a parallel job on four nodes and simultaneously run two 8-node parallel jobs on the remaining 16 nodes. Alternatively, an engineer could run five 4-node parallel jobs or ten 2-node parallel jobs across the 20 nodes.
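
A tiny Python sketch makes the arithmetic behind those job mixes explicit. The job names and node counts are illustrative and not tied to any particular scheduler.

```python
# Check that a mix of parallel jobs fits within a 20-node SuperBlade enclosure.
TOTAL_NODES = 20

def fits(jobs: dict) -> bool:
    """Return True if the requested node counts fit in the enclosure."""
    requested = sum(jobs.values())
    print(f"requested {requested} of {TOTAL_NODES} nodes")
    return requested <= TOTAL_NODES

print(fits({"crash-test": 4, "cfd-a": 8, "cfd-b": 8}))       # 4 + 8 + 8 = 20 -> True
print(fits({f"job-{i}": 4 for i in range(5)}))                # five 4-node jobs -> True
print(fits({f"job-{i}": 2 for i in range(10)}))               # ten 2-node jobs -> True
```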

The bottom line

Modern business leaders must act as both engineers and accountants. With a foot planted firmly on either side, they balance the limitless possibilities of design with the limited cash flow at their discretion.

The Supermicro SuperBlade helps make that job a little easier. Supermicro, AMD and Ansys have devised a way to give your engineering customers the tools they need, yet still optimize data-center footprint, power requirements and cooling systems.

The result is a lower total cost of ownership (TCO), and with absolutely no compromise in quality.


Tech Explainer: How does design simulation work? Part 2


Cutting-edge technology powers the virtual design process.


The market for simulation software is hot, growing at a compound annual growth rate (CAGR) of 13.2%, according to MarketsandMarkets. The research firm predicts that the global market for simulation software, worth an estimated $18.1 billion this year, will rise to $33.5 billion by 2027.

No surprise, then, that tech titans AMD and Supermicro would design an advanced hardware platform to meet the demands of this burgeoning software market.

AMD and Supermicro have teamed up with Ansys Inc., a U.S.-based designer of engineering simulation software. One result of this three-way collaboration is the Supermicro SuperBlade.

Shanthi Adloori, senior director of product management at Supermicro, calls the SuperBlade “one of the fastest simulation-in-a-box solutions.”

Adloori adds: “With a high core count, large memory capacity and faster memory bandwidth, you can reduce the time it takes to complete a simulation.”

One very super blade

Adloori isn’t overstating the case.

Supermicro’s SuperBlade can house up to 20 hot-swappable nodes in its 8U chassis. Each of those blades can be equipped with AMD EPYC CPUs and AMD Instinct GPUs. In fact, SuperBlade is the only platform of its kind designed to support both GPU and non-GPU nodes in the same enclosure.

Supermicro SuperBlade’s other tech specs may be less glamorous, but they’re no less impressive. When it comes to memory, each blade can address a maximum of either 8TB or 16TB of DDR5-4800 memory.

Each node can also house 2 NVMe/SAS/SATA drives, while the enclosure itself holds as many as eight 3000W Titanium Level power supplies.

Because networking is an essential element of enterprise-grade design simulation, SuperBlade includes redundant 25Gb/10Gb/1Gb Ethernet switches and up to 200Gbps/100Gbps InfiniBand networking for HPC applications.

For smaller operations, the Supermicro SuperBlade is also available in more compact configurations, including 6U and 4U. These versions pack fewer nodes, which ultimately means they’re able to bring less power to bear. But, hey, not every design team makes passenger jets for a living.

It’s all about the silicon

If Supermicro’s SuperBlade is the tractor-trailer of design simulation technology, then AMD CPUs and GPUs are the engines under the hood.

The differing designs of these chips lend themselves to specific core competencies. CPUs can focus tremendous power on a few tasks at a time. Sure, they can multitask. But there’s a limit to how many simultaneous operations they can address.

AMD bills its EPYC 7003 Series CPUs as the world’s highest-performing server processors for technical computing. The addition of AMD 3D V-Cache technology delivers an expanded L3 cache to help accelerate simulations.

GPUs, on the other hand, are required when a simulation includes tasks that must perform many operations simultaneously. The AMD Instinct MI250X accelerator contains 220 compute units with 14,080 stream processors.

Instead of throwing a ton of processing power at a small number of operations, the AMD Instinct can address thousands of less resource-intensive operations simultaneously. It’s that capability that makes GPUs ideal for HPC and AI-enabled operations, an increasingly essential element of modern design simulation.
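
A rough way to feel that difference is to compare an element-by-element Python loop with a vectorized NumPy operation, which hands the whole array to optimized data-parallel code. This is only an analogy for the CPU-versus-GPU split described above, not a GPU benchmark.

```python
# Analogy only: a serial loop versus a data-parallel (vectorized) operation.
import time
import numpy as np

values = np.random.rand(5_000_000)

start = time.perf_counter()
serial = [v * 2.0 for v in values]       # one element at a time
serial_time = time.perf_counter() - start

start = time.perf_counter()
vectorized = values * 2.0                # the whole array in one call
vector_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s, vectorized: {vector_time:.4f}s")
# GPUs push this idea much further, applying one instruction across thousands of
# stream processors at once.
```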

The future of design simulation

The development of advanced hardware like SuperBlade and the AMD CPUs and GPUs that power it will continue to progress as more organizations adopt design simulation as their go-to product development platform.

That progression will continue to manifest in global companies like Boeing and Volkswagen. But it will also find its way into small startups and single users.

Also, as the required hardware becomes more accessible, simulation software should become more efficient.

This confluence of market trends could empower millions of independent designers with the ability to perform complex design, testing and validation functions.

The result could be nothing short of a design revolution.

Part 1 of this two-part Tech Explainer explores the many ways design simulation is used to create new products, from tiny heart valves to massive passenger aircraft. Read Part 1 now.


Tech Explainer: How does design simulation work? Part 1


Design simulation lets designers and engineers create, test and improve designs of real-world airplanes, cars, medical devices and more while working safely and quickly in virtual environments. This workflow also reduces the need for physical tests and allows designers to investigate more alternatives and optimize their products.


Design simulation is a type of computer-aided engineering used to create new products, reducing the need for physical prototypes. The result is a faster, more efficient design process in which complex physics and math do much of the heavy lifting.

Rapid advances in the CPUs and GPUs used to run simulations, along with the simulation software itself, have made it possible to shift product design from the physical world to a virtual one.

In this virtual space, engineers can create and test new designs as quickly as their servers can calculate the results and then render them with visualization software.

Getting better all the time

Designing via AI-powered virtual simulation offers significant improvements over older methods.

Back in the day, it might have taken a small army of automotive engineers years to produce a single new model. Prototypes were often sculpted from clay and carted into a wind tunnel to test aerodynamics.

Each new model went through a seemingly endless series of time-consuming physical simulations. The feedback from those tests would literally send designers back to the drawing board.

It was an arduous and expensive process. And the resources necessary to accomplish these feats of engineering often came at the expense of competition. Companies whose pockets weren’t deep enough might fail to keep up.

Fast-forward to the present. Now, we’ve got smaller design teams aided by increasingly powerful clusters of high-performance systems.

These engineers can tweak a car’s crumple zone in the morning … run the new version through a virtual crash test while eating lunch … and send revised instructions to the design team before day’s end.

Changing designs, saving lives

Faster access to this year’s Ford Mustang is one thing. But if you really want to know how design simulation is changing the world, talk to someone whose life was saved by a mechanical heart valve.

Using the latest tech, designers can simulate new prosthetics in relation to the physiology they’ll inhabit. Many factors come into play here, including size, shape, materials, fluid dynamics, failure models and structural integrity over time.

What’s more, it’s far better to theorize how a part will interact with the human body before the doctor installs it. Simulations can warn medical pros about potential infections, rejections and physical mismatches. AI can play a big part in these types of simulations and manufacturing.

Sure, perfection may be unattainable. But the closer doctors get to a perfect match between a prosthetic and its host body, the better the patient will fare after the procedure.

Making the business case

Every business wants to cut costs, increase efficiency and get an edge over the competition. Here, too, design simulation offers a variety of ways to achieve those lofty goals.

As mentioned above, simulation can drastically reduce the need for expensive physical prototypes. Creating and testing a new airplane design virtually means not having to come within 100 miles of a runway until the first physical prototype is ready to take flight. 

The aerospace and automotive industries rely heavily not only on the structural integrity of an assembly but also on computational fluid dynamics. In this way, simulation can potentially save an aerospace company billions of dollars over the long run.

What’s more, virtual airplanes don’t crash. They can’t be struck by lightning. And in a virtual passenger jet, test pilots don’t need to worry about their safety.

By the time a new aircraft design rolls onto the tarmac, it’s already been proven airworthy—at least to the extent that a virtual simulation can make those kinds of guarantees.

Greater efficiency

Simulation makes every aspect of design more efficient. For instance, iteration, a vital element of the design process, becomes infinitely more manageable in a simulated environment.

Want to find out how a convertible top will affect your new supercar’s 0-to-60 time? Simulation allows engineers to quickly replace the hard-top with some virtual canvas and then create a virtual drag race against the original model.

Simulation can take a product to the manufacturing phase, too. Once a design is finished, engineers can simulate its journey through a factory environment.

This virtual factory, or digital twin, can help determine how long it will take to build a product and how it will react to various materials and environmental conditions. It can even determine how many moves a robot arm will need to make and when human intervention might become necessary. This process helps engineers optimize the manufacturing process.
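
Here’s a toy sketch of the kind of question a digital twin answers, in this case estimating build time and robot-arm moves from a list of assembly steps. Every step, duration and move count below is invented for illustration.

```python
# Toy "digital twin" of an assembly line: estimate build time, robot moves and
# where humans must step in. All values are invented example data.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    minutes: float
    robot_moves: int
    needs_human: bool = False

line = [
    Step("load chassis", 2.0, robot_moves=4),
    Step("weld frame", 6.5, robot_moves=22),
    Step("fit wiring harness", 9.0, robot_moves=0, needs_human=True),
    Step("final inspection", 4.0, robot_moves=2, needs_human=True),
]

total_minutes = sum(s.minutes for s in line)
total_moves = sum(s.robot_moves for s in line)
human_steps = [s.name for s in line if s.needs_human]

print(f"estimated build time: {total_minutes:.1f} min, robot-arm moves: {total_moves}")
print("human intervention needed at:", ", ".join(human_steps))
```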

In countless ways, simulation has never been more real.

In Part 2 of this 2-part blog, we’ll explore the digital technology behind design simulation. This cutting-edge technology is made possible by the latest silicon, vast swaths of high-speed storage, and sophisticated blade servers that bring it all together.


Tech Explainer: What’s the difference between Machine Learning and Deep Learning? Part 2


In Part 1 of this 2-part Tech Explainer, we explored the difference between how machine learning and deep learning models are trained and deployed. Now, in Part 2, we’ll get deeper into deep learning to discover how this advanced form of AI is changing the way we work, learn and create.


Where Machine Learning is designed to reduce the need for human intervention, Deep Learning—an extension of ML—removes much of the human element altogether.

If ML were a driver-assistance feature that helped you parallel park and avoid collisions, DL would be an autonomous, self-driving car.

The human intervention we’re talking about has much to do with categorizing and labeling the data used by ML models. Producing this structured data is both time-consuming and expensive.

DL shortens the time and lowers the cost by learning from unstructured data. This eliminates much of the data pre-processing performed by humans for ML.

That’s good news for modern businesses. Market watcher IDC estimates that as much as 90% of corporate data is unstructured.

DL is particularly good at processing unstructured data. That includes information coming from the edge, the core and millions of both personal and IoT devices.

Like a brain, but digital

Deep Learning systems “think” with a neural network—multiple layers of interconnected nodes designed to mimic the way the human brain works. A DL system processes data inputs in an attempt to recognize, classify and accurately describe objects within data.

The layers of a neural network are stacked vertically. Each layer builds on the work performed by the one below it. By pushing data through each successive layer, the overall system improves its predictions and categorizations.

For instance, imagine you’ve tasked a DL system to identify pictures of junk food. The system would quickly learn—on its own—how to differentiate Pringles from Doritos.

It might do this by learning to recognize Pringles’ iconic tubular packaging. Then the system would categorize Pringles differently than the family-size sack of Doritos.

What if you fed this hypothetical DL system with more pictures of chips? Then it could begin to identify varying angles of packaging, as well as colors, logos, shapes and granular aspects of the chips themselves.

As this example illustrates, the longer a DL system operates, the more intelligent and accurate it becomes.
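
To make the idea of stacked layers concrete, here’s a minimal sketch of such a network in PyTorch (assuming it’s installed). The layer sizes, image dimensions and the two snack-food classes are illustrative only; a real image classifier would use convolutional layers and be trained on labeled data.

```python
# Minimal sketch of a stacked neural network; sizes and classes are illustrative.
import torch
from torch import nn

model = nn.Sequential(           # layers are stacked; each builds on the one below it
    nn.Flatten(),                # turn a 32x32 RGB image into a flat vector
    nn.Linear(32 * 32 * 3, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 2),            # two output classes, e.g. "Pringles" vs. "Doritos"
)

fake_batch = torch.randn(4, 3, 32, 32)    # four random stand-in "images"
logits = model(fake_batch)
print(logits.shape)                        # torch.Size([4, 2])
print(logits.softmax(dim=1))               # per-class probabilities for each image
```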

Things we used to do

DL tends to be deployed when it’s time to pull out the big guns. This isn’t tech you throw at a mere spam filter or recommendation engine.

Instead, it’s the tech that powers the world’s finance, biomedical advances and law enforcement. For these verticals, failure is simply not an option.

For these verticals, here are some of the ways DL operates behind the scenes:

  • BioMed: DL helps healthcare staff analyze medical imaging such as X-rays and CT scans. In many cases, the technology is more accurate than well-trained physicians with decades of experience.
  • Finance: For those seeking a market edge (read: everyone), DL employs powerful, algorithmic-based predictive analytics. This helps modern-day robber barons manage their portfolios based on insights from data so vast, they couldn’t leverage it themselves. DL also helps financial institutions assess loans, detect fraud and manage credit.
  • Law Enforcement: In the 2002 movie “Minority Report,” Tom Cruise played a police officer who could arrest people before they committed a crime. With DL, this fiction could turn into an unsettling reality. DL can be used to analyze millions of data points, then predict who is most likely to break the law. It might even give authorities an idea of where, when and how it could happen.

The future…?

Looking into a crystal ball—which these days probably uses DL—we can see a long succession of similar technologies coming. Just as ML begat DL, so too will DL beget the next form of AI—and the one after that.

The future of DL isn’t a question of if, but when. Clearly, DL will be used to advance a growing number of industries. But just when each sector will come to be ruled by our new smarty-pants robots is less clear.

Keep in mind: Even as you read this, DL systems are working tirelessly to help data scientists make AI more accurate and able to provide more useful assessments of datasets for specific outcomes. And as the science progresses, neural networks will continue to become more complex—and more like human brains.

That means the next generation of DL will likely be far more capable than the current one. Future AI systems could figure out how to reverse the aging process, map distant galaxies, even produce bespoke food based on biometric feedback from hungry diners.

For example, the upcoming AMD Instinct MI300 accelerators promise to usher in a new era of computing capabilities. That includes the ability to handle large language models (LLMs), the key approach behind generative AI systems such as ChatGPT.

Yes, the robots are here, and they want to feed you custom Pringles. Bon appétit!

 


Interview: How German system integrator SVA serves high performance computing with AMD and Supermicro


In an interview, Bernhard Homoelle, head of the HPC competence center at German system integrator SVA, explains how his company serves customers with help from AMD and Supermicro. 


SVA System Vertrieb Alexander GmbH, better known as SVA, is among the leading IT system integrators of Germany. Headquartered in Wiesbaden, the company employs more than 2,700 people in 27 branch offices. SVA’s customers include organizations in automotive, financial services and healthcare.

To learn more about how SVA works jointly with Supermicro and AMD on advanced technologies, PIC managing editor Peter Krass spoke recently with Bernhard Homoelle, head of SVA’s high performance computing (HPC) competence center (pictured above). Their interview has been lightly edited.

For readers outside of Germany, please tell us about SVA.

First of all, SVA is an owner-operated system integrator. We offer high-quality products, we sell infrastructure, we support certain types of implementations, and we offer operational support to help our customers achieve optimum solutions.

We work with partners to figure out what might be the best solution for our customers, rather than just picking one vendor and trying to convince the customer they should use them. Instead, we figure out what is really needed. Then we go in the direction where the customer can really have their requirements met. The result is a good relationship with the customer, even after a particular deal has been closed.

Does SVA focus on specific industries?

While we do support almost all the big industries—automotive, transportation, public sector, healthcare and more—we are not restricted to any specific vertical. Our main business is helping customers solve their daily IT problems, deal with the complexity of new IT systems, and implement new things like AI and even quantum computing. So we’re open to new solutions. We also offer training with some of our partners.

Germany has a robust auto industry. How do you work with these clients?

In general, they need huge HPC clusters and machine learning. For example, autonomous driving demands not only more computing power, but also more storage. We’re talking about petabytes of data, rather than terabytes. And this huge amount of data needs to be stored somewhere and finally processed. That puts pressure on the infrastructure—not just on storage, but also on the network infrastructure as well as on the compute side. For their way into the cloud, some of these customers are saying, “Okay, offer me HPC as a Service.”

How do you work with AMD and Supermicro?

It’s a really good relationship. We like working with them because Supermicro has all these various types of servers for individual needs. Customers are different, and therefore they have their own requirements. Figuring out what might be the best server for them is difficult if you have limited types of servers available. But with Supermicro, you can get what you have in mind. You don’t have to look for special implementations because they have these already at hand.

We’re also partnering with AMD, and we have access to their benchmark labs, so we can get very helpful information. We start with discussions with the customer to figure out their needs. Typically, we pick up an application from the customer and then use it as a kind of benchmark. Next, we put it on a cluster with different memory, different CPUs, and look for the best solution in terms of performance for their particular application. Based on the findings, we can recommend a specific CPU, number of cores, memory type and size, and more.

With HPC applications, memory bandwidth is almost as important as the number of cores. AMD’s new Genoa-X processors should help to overcome some of these limitations. And looking ahead, I’m keen to see what AMD will offer with the Instinct MI300.

Are there special customer challenges you’re solving with Supermicro and AMD solutions?

With HPC workloads, our academic customers say, “This is the amount of money available, so how many servers can you really give us for this budget?” Supermicro and AMD really help here with reasonable prices. They’re a good choice for price/performance.

With AI and machine learning, the real issue is software tools. It really depends what kinds of models you can use and how easy it is to use the hardware with those models.

This discussion is not easy, because for many of our customers today, AI means Nvidia. But I really recommend alternatives, and AMD is bringing some alternatives that are great. They offer a fast time to solution, but they also need to be easy to switch to.

How about "green" computing? Is this an important issue for your customers now?

Yes, more and more we’re seeing customers ask for this green computing approach. Typically, a customer has a thermal budget and a power-price budget. They may say, “In five years, the expenses paid for power should not exceed a certain limit.”

In Europe, we also have a supply-chain discussion. Vendors must increasingly provide proof that they’re taking care in their supply chain with issues including child labor and working conditions. This is almost mandatory, especially in government calls. If you’re unable to answer these questions, you’re out of the bid.

With green computing, we see that the power needed for CPUs and GPUs is going up and up. Five years ago, the maximum a CPU could burn was 200W, but now even 400W might not be enough. Some GPUs are as high as 700W, and there are super-chips beyond even that.

All this makes it difficult to use air-cooled systems. Customers can use air conditioning to a certain extent, but there’s only so much air you can press through the rack. Then you need either on-chip water cooling or some kind of immersion cooling. This can help in two dimensions: saving energy and getting density — you can put the components closer together, and you don’t need the big heat sink anymore.

One issue now is that each vendor offers a different cooling infrastructure. Some of our customers run multi-vendor data centers, so this could create a compatibility issue. That’s one reason we’re looking into immersion cooling. We think we could do some of our first customer implementations in 2024.

Looking ahead, what do you see as a big challenge?

One area is that we want to help customers get easier access to their HPC clusters. That’s done on the software side.

In contrast to classic HPC users, machine learning and AI engineers are not that interested in Linux stuff, compiler options or any other infrastructure details. Instead, they’d like to work on their frameworks. The challenge is getting them to their work as easily as possible—so that they can just log in, and they’re in their development environment. That way, they won’t have to care about what sort of operating system is underneath or what kind of scheduler, etc., is running.

 


Genoa-X: a deeper dive into AMD’s new EPYC processors optimized for technical computing


AMD has introduced its EPYC 9x84X series processors, formerly codenamed Genoa-X. The new CPUs are designed specifically for technical workloads, and they support up to 1.1GB of L3 cache.


AMD is responding to greater specialization in the data center by creating workload-optimized versions of its 4th gen EPYC server processors.

That now includes the AMD EPYC 9x84X series processors, formerly codenamed Genoa-X.

These new CPUs are optimized for technical computing workloads. Those include engineering simulation, product design, structural design, aerodynamics modeling and electronic design automation (EDA).

Big cache

A key feature of the new AMD EPYC 9x84X processors is the 2nd generation of AMD’s 3D V-Cache technology. It supports more than 1GB of L3 cache on a 96-core CPU. The larger cache can feed the CPU faster with the data needed for large and complex simulations.

Speaking at AMD’s Data Center and AI Technology Premiere earlier this month, Dan McNamara, GM of AMD’s server business, said this will deliver a “new dimension” of workload optimization. This will help users get to market faster with higher-quality products while also reducing their OpEx budgets, he added.

The new AMD EPYC 9x84X processors are built on AMD’s Zen 4 cores, the same cores used in standard Genoa parts. (AMD’s Zen 4c cores, by contrast, power its EPYC processors optimized for cloud-native workloads.) The 9x84X CPUs are also socket-compatible with earlier Genoa processors. And they offer security protection with AMD Infinity Guard, the company’s suite of hardware-level security features.

It’s worth noting that AMD last year introduced a similar optimization for its Milan series processors. Those processors were code-named Milan-X.

Total ecosystem

To create a complete technical-computing environment, AMD has been working closely with developers of highly technical software. These partners include Altair, Ansys, Cadence, Dassault Systemes, Siemens and Synopsys.

Hardware partners are jumping in, too. Supermicro recently announced that its entire line of Supermicro H13 AMD-based systems now support 4th gen AMD EPYC processors with AMD 3D V-cache technology.

The AMD EPYC 9x84X series now comes in 3 SKUs.

In addition, all 3 SKUs support both DDR5 memory and PCIe 5.0 connectivity.

The new AMD EPYC 9x84X processors are available now. OEM systems based on these processors are expected to start shipping in the third quarter.
