Choosing the Right AI Infrastructure for Your Needs


AI architecture must scale effectively without sacrificing cost efficiency. One size does not fit all.


Building an agile, cost-effective environment that delivers on a company’s present and long-term AI strategies can be a challenge, and the decisions made about that architecture will have an outsized effect on performance.

 

“AI capabilities are probably going to be 10%-15% of the entire infrastructure,” says Ashish Nadkarni, IDC group vice president and general manager, infrastructure systems, platforms and technologies. “But the amount the business relies on that infrastructure, the dependence on it, will be much higher. If that 15% doesn’t behave in the way that is expected, the business will suffer.”

 

Experts like Nadkarni note that companies can, and should, avail themselves of cloud-based options to test and ramp up AI capabilities. But as workloads scale and enterprise usage expands, the costs associated with cloud computing can rise significantly, making on-premises architecture a valid alternative worth considering.
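
To make that trade-off concrete, here is a rough back-of-the-envelope sketch comparing cumulative cloud spend against the fixed cost of owned hardware as utilization grows. Every figure in it is a hypothetical assumption for illustration only, not a quoted price from AMD, Supermicro or any cloud provider.

```python
# Hypothetical break-even sketch. All prices and utilization figures below are
# illustrative assumptions, not vendor or cloud-provider quotes.
CLOUD_COST_PER_GPU_HOUR = 3.00     # assumed on-demand rate, USD
ONPREM_SERVER_COST = 250_000       # assumed purchase price of one 8-GPU server, USD
ONPREM_OPEX_PER_YEAR = 30_000      # assumed power, cooling and support, USD per year
GPUS_PER_SERVER = 8
AMORTIZATION_YEARS = 3

def cloud_cost(gpu_hours_per_year: float, years: int = AMORTIZATION_YEARS) -> float:
    """Cumulative cloud spend over the amortization window."""
    return CLOUD_COST_PER_GPU_HOUR * gpu_hours_per_year * years

def onprem_cost(years: int = AMORTIZATION_YEARS) -> float:
    """Capital cost plus operating cost over the same window."""
    return ONPREM_SERVER_COST + ONPREM_OPEX_PER_YEAR * years

if __name__ == "__main__":
    # Sweep annual GPU utilization to see where owning the hardware wins.
    for utilization in (0.10, 0.25, 0.50, 0.75):
        gpu_hours = GPUS_PER_SERVER * 8760 * utilization   # 8,760 hours per year
        cheaper = "cloud" if cloud_cost(gpu_hours) < onprem_cost() else "on-prem"
        print(f"{utilization:4.0%} utilization: cloud ${cloud_cost(gpu_hours):>11,.0f} "
              f"vs on-prem ${onprem_cost():>9,.0f} -> {cheaper} is cheaper")
```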

 

No matter the industry, to build a robust and effective AI infrastructure, companies must first accurately diagnose their AI needs. What business challenges are they trying to solve? What forms of high-performance computing power can deliver solutions? What type of training is required to deliver the right insights from data? And what’s the most cost-effective way for a company to support AI workloads at scale and over time? Cloud may be the answer to get started, but for many companies on-prem solutions are viable alternatives.

 

“It’s a matter of finding the right configuration that delivers optimal performance for [your] workloads,” says Michael McNerney, vice president of marketing and network security at Supermicro, a leading provider of AI-capable, high-performance servers, management software and storage systems. “How big is your natural language processing or computer vision model, for example? Do you need a massive cluster for AI training? How critical is it to have the lowest latency possible for your AI inferencing? If the enterprise does not have massive models, does it move down the stack into smaller models to optimize infrastructure and cost on the AI side as well as in compute, storage and networking?”

 

Get perspective on these and other questions about selecting the right AI infrastructure for your business in the Nov. 20, 2022, Wall Street Journal paid program article:

 

Investing in Infrastructure

 


Eliovp Increases Blockchain-Based App Performance with Supermicro Servers


Eliovp, which brings together computing and storage solutions for blockchain workloads, rewrote its code to take full advantage of AMD’s Instinct MI100 and MI250 GPUs. As a result, Eliovp’s blockchain calculations run up to 35% faster than on previous generations of its servers.


When you’re building blockchain-based applications, you typically need a lot of computing and storage horsepower. This is the niche that Belgium-based Eliovp fills. The company has developed a line of extremely fast cloud-based servers designed to run demanding blockchain workloads.

 

Eliovp has been recognized as the top Filecoin storage provider in Europe. Filecoin is a decentralized, blockchain-based protocol that lets anyone rent out spare local storage, and it is a key Web3 component.

 

To satisfy these compute and storage needs, Eliovp employs Supermicro’s A+ AS-1124US® and AS-4124GS® servers, running AMD EPYC 7543 and 7313 CPUs and as many as eight AMD Instinct MI100 and MI250 GPUs to further boost performance.

 

What makes these servers especially potent is that Eliovp rewrote its code to run on this specific AMD Instinct GPU family. As a result, Eliovp’s blockchain calculations run up to 35% faster than on previous generations of its servers.

 

One of the attractions of the Supermicro servers is the ability to leverage their high-density core counts, higher clock speeds and 32 memory slots, all packaged in a relatively small form factor.

 

“By working with Supermicro, we get new generations of servers with AMD technology earlier in our development cycle, enabling us to bring our products to market faster,” said Elio Van Puyvelde, CEO of Eliovp. The company was able to take advantage of new CPU and GPU instructions and memory management to make its code more efficient and effective. Eliovp was also able to reduce overall server power consumption, which is always important in blockchain applications that span dozens of machines.

 


Microsoft Azure’s More Capable Compute Instances Take Advantage of the Latest AMD EPYC™ Processors


Azure HBv3 series virtual machines (VMs) are optimized for HPC applications, such as fluid dynamics, explicit and implicit finite element analysis, weather modeling, seismic processing, and various simulation tasks. HBv3 VMs feature up to 120 third-generation AMD EPYC™ 7V73X-series CPU cores with more than 450 GB of RAM.


Increasing demands for high-performance computing mean that cloud-based computing needs to ratchet up its performance, too. Microsoft Azure has introduced more capable compute virtual machines (VMs) that take advantage of the latest AMD EPYC™ processors. This means developers can easily spin up VMs whose physical equivalents would normally cost thousands of dollars to purchase.

 

This story focuses on two Azure series: HBv3 and NVv4. In most cases, a single virtual machine is used so a workload can take advantage of all of its resources. Azure HBv3 series VMs are optimized for HPC applications such as fluid dynamics, explicit and implicit finite element analysis, weather modeling, seismic processing and various simulation tasks. HBv3 VMs feature up to 120 third-generation AMD EPYC™ 7V73X-series CPU cores with more than 450 GB of RAM and processor clock frequencies up to 3.5GHz. All HBv3-series VMs also feature 200Gb/sec HDR InfiniBand networking to enable supercomputer-scale HPC workloads, with the VMs connected and optimized to deliver consistent performance. Get more information about AMD EPYC and Microsoft Azure virtual machines.
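
As a rough illustration of how such an instance might be provisioned programmatically, here is a minimal sketch using the Azure SDK for Python. It is not the procedure described in the article: the subscription ID, resource group, network interface and SSH key are placeholders you would create beforehand, and Standard_HB120rs_v3 is the 120-core HBv3 size.

```python
# Minimal provisioning sketch (assumptions: resource group and NIC already exist,
# credentials are available to DefaultAzureCredential). Requires the
# azure-identity and azure-mgmt-compute packages.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
RESOURCE_GROUP = "hpc-rg"                    # assumed, pre-created resource group
NIC_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    "/providers/Microsoft.Network/networkInterfaces/hbv3-nic"  # assumed, pre-created NIC
)

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Request a 120-core HBv3 instance (Standard_HB120rs_v3) running Ubuntu 20.04.
poller = compute.virtual_machines.begin_create_or_update(
    RESOURCE_GROUP,
    "hbv3-node-01",
    {
        "location": "eastus",
        "hardware_profile": {"vm_size": "Standard_HB120rs_v3"},
        "storage_profile": {
            "image_reference": {
                "publisher": "Canonical",
                "offer": "0001-com-ubuntu-server-focal",
                "sku": "20_04-lts-gen2",
                "version": "latest",
            }
        },
        "os_profile": {
            "computer_name": "hbv3-node-01",
            "admin_username": "azureuser",
            "linux_configuration": {
                "disable_password_authentication": True,
                "ssh": {
                    "public_keys": [{
                        "path": "/home/azureuser/.ssh/authorized_keys",
                        "key_data": "<your-ssh-public-key>",
                    }]
                },
            },
        },
        "network_profile": {"network_interfaces": [{"id": NIC_ID}]},
    },
)
vm = poller.result()
print(f"Provisioned {vm.name} ({vm.hardware_profile.vm_size})")
```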

 

A Dutch construction company, TBI, is using Azure NVv4 instances to run computer-aided design and building-modeling tasks on a series of virtual Windows desktops. The NVv4 VMs are available only running Windows, are powered by four to 32 AMD EPYC™ vCPUs, and offer anything from a partial to a full AMD Instinct™ MI25 GPU with GPU memory ranging from 2GB to 16GB. Previous generations of NV instances used Intel CPUs and NVIDIA GPUs that offer less performance.

 

TBI chose this solution because it was cheaper, easier to support and made it simpler to keep its software collection updated. Using virtual desktops meant that no client data was stored on any laptops, making things more secure. These instances also delivered equivalent performance by taking advantage of SR-IOV technology.

 

Supermicro offers a wide range of servers that incorporate AMD EPYC™ CPUs, including a number of servers optimized for applications that use GPUs. These systems range from 1U rackmount servers to high-end 4U GPU-optimized servers. Whether you’re deploying on-prem or building your own cloud, Supermicro’s A+ servers are optimized for performance and technical computing applications, and they run Azure and other systems well. Get more information about Supermicro servers with AMD EPYC™ CPUs.


Offering Distinct Advantages: The AMD Instinct™ MI210 and MI250 Series GPU Accelerators and Supermicro SuperBlades


Using a six-nanometer process and CDNA2 graphics dies, AMD has created its third generation of GPU accelerators, which offer more than twice the performance of previous GPU generations and deliver 181 teraflops of mixed-precision peak computing power.


AMD and Supermicro have made it easier to exploit the most advanced combination of GPU and CPU technologies.

Derek Bouius, a senior product manager at AMD, said, “Using six nanometer processes and the CDNA2 graphics dies, we created the third generation of GPU chipsets that have more than twice the performance of previous GPU processors. They deliver 181 teraflops of mixed precision peak computing power.” Called the AMD Instinct™ MI210 and AMD Instinct™ MI250, these accelerators have twice the memory (64 GB) to work with and deliver data at a rate of 1.6 TB/sec. Both are packaged as fourth-generation PCIe expansion cards and come with direct connectors to Infinity Fabric bridges, providing faster I/O throughput between GPU cards without routing that traffic through the standard PCIe bus.

The Instinct accelerators offer immediate performance benefits for the most complex computational applications, such as molecular dynamics, computer-aided engineering, weather modeling, and oil and gas modeling.

“We provided optimized containerized applications that are pre-built to support the accelerator and run them out of the box,” Bouius said. “It is a very easy lift to go from existing solutions to the AMD accelerator,” he added. This is accomplished by bringing together AMD’s ROCm™ support libraries and tools with its HIP programming language and device drivers, all of which are open source. Together, they unlock the GPU’s performance enhancements and make it easier for software developers to take advantage of AMD’s latest processors. AMD offers a catalog of dozens of currently available applications.

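As a hedged illustration of how little developer code has to change, the sketch below assumes a ROCm build of PyTorch is installed; on ROCm builds, the familiar torch.cuda API is backed by HIP, so the same script runs unmodified on an AMD Instinct accelerator.

```python
# Minimal sketch (assumes the ROCm build of PyTorch): verify that an AMD
# Instinct accelerator is visible through HIP and run a half-precision
# matrix multiply on it. On ROCm builds, torch.cuda calls map to HIP devices.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No ROCm/HIP-visible GPU found")

device = torch.device("cuda:0")  # first HIP device on a ROCm system
print("Accelerator:", torch.cuda.get_device_name(device))

# A large FP16 matrix multiply exercises the matrix hardware behind the
# mixed-precision peak-throughput figures quoted above.
a = torch.randn(8192, 8192, dtype=torch.float16, device=device)
b = torch.randn(8192, 8192, dtype=torch.float16, device=device)
c = a @ b
torch.cuda.synchronize()         # wait for the GPU to finish before reporting
print("Result shape:", tuple(c.shape), "dtype:", c.dtype)
```
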
Supermicro’s SuperBlade product line combines the new AMD Instinct™ GPU accelerators and AMD EPYC™ processors to deliver higher performance with lower latency for its enterprise customers.

One packaging option combines six chassis with 20 blades each, delivering 120 servers that provide a total of more than 3,000 teraflops of combined processing power. This equipment delivers more power efficiency in less space with fewer cables, providing a lower cost of ownership. The blade servers are all hot-pluggable and come with two onboard, front-mounted 25-gigabit and two 10-gigabit Ethernet connectors.

“Everything is faster now for running enterprise workloads,” says Shanthi Adloori, senior director of product management for Supermicro. “This is why our Supermicro servers have won the world record in performance from the Standard Performance Evaluation Corp. three years in a row.” Another popular SuperBlade design provides an entire “private cloud in a box” that combines administration and worker nodes and deploys a Red Hat OpenShift platform to run Kubernetes-based workloads with minimal provisioning.


AMD and Supermicro Work Together to Produce the Latest High-Performance Computers


Solving some of the bigger computing challenges in business requires a solid partnership between the CPU vendor, system builders and channel partners. That is what AMD and Supermicro have brought to the market with the third generation of AMD’s EPYC™ processors with AMD 3D V-Cache™ and AMD Instinct™ MI200 series GPU accelerators, wrapped up in SuperBlade servers built by Supermicro.

 

“This has immediate benefits for particular fields such as crash and digital circuit simulations and electronic design automation,” said David Weber, Senior Manager for AMD. “It means we can create virtual chips and track workflows and performance before we design and build the silicon.” The same situation holds for computational fluid dynamics, he added, “in which we can determine the virtual air and water flows across wings and through water pumps and save a lot of time and money, and the AMD 3D V-Cache™ makes this process a lot faster.” Without any software coding changes, these applications are seeing 50% to 80% performance improvements, Weber said.

 

The chips are not just fast; they also come with several built-in security features, including Shadow Stack support, on top of the Zen 3 architecture’s gains. Zen 3 is the overall name for a series of improvements to AMD’s higher-end CPU line that deliver a 19% improvement in instructions per clock, along with lower latency and double the directly accessible cache compared with the earlier Zen 2 architecture chips.

 

These processors also support Microsoft’s Hardware-enforced Stack Protection, which helps detect and thwart control-flow attacks by checking the normal program stack against a secured, hardware-stored copy. Together, these security features help the system boot securely, protect the computer from firmware vulnerabilities, shield the operating system from attacks, and prevent unauthorized access to devices and data through advanced access controls and authentication.

 

Supermicro offers its SuperBlade servers that take advantage of all these performance and security improvements. For more information, see this webcast.


Lawrence Livermore Labs Advances Scientific Research with AMD GPU Accelerators


The Lawrence Livermore National Laboratory chose to use a cluster of 120 servers running AMD EPYC™ processors with nearly 1,000 AMD Instinct™ GPU accelerators. The hardware, facilitated by Supermicro, was an excellent match for the molecular dynamics simulations required for the Lab's cutting-edge research, which combines machine learning with structural biology concepts.


Lawrence Livermore National Laboratory is one of the world's leading centers of high-performance computing (HPC), and it constantly upgrades its equipment to meet increasing computational demands. It houses one of the world's largest computing environments. One of its more pressing research goals derives from the COVID-19 crisis.

Lawrence Livermore supports research proposals from the COVID-19 HPC Consortium, which is composed of more than a dozen research organizations across government, academia and private industry. The consortium aims to accelerate disease detection and treatment efforts, as well as to screen antibody candidates virtually and run several disease-related mathematical simulations.

“By leveraging the massive compute capabilities of the world’s [most] powerful supercomputers, we can help accelerate critical modeling and research to help fight the virus,” said Forrest Norrod, senior vice president and general manager, AMD Datacenter and Embedded Systems Group.

The lab chose to use a cluster of 120 servers running AMD EPYC™ processors with nearly 1,000 AMD Instinct™ GPU accelerators. The servers were connected by Mellanox switches. The product choices had two benefits: First, the hardware, facilitated by Supermicro, was an excellent match for the molecular dynamics simulations required for this research. The lab is performing cutting-edge research that combines machine learning with structural biology concepts. Second, the gear was tested and packaged together, so it could become operational when it was delivered to the lab.

AMD software engineers and application specialists were able to modify components to run GPU-based applications. This is top-of-the-line gear. The AMD accelerators deliver up to 13.3 teraFLOPS of single-precision peak floating-point performance combined with 32GB of high-bandwidth memory. The scientists were able to reduce their simulation run-times from seven hours to just 40 minutes, allowing them to test multiple modeling iterations efficiently.

For more information, see the Supermicro case study and Lawrence Livermore report.
