Sponsored by AMD and Supermicro

Some Key Drivers behind AMD’s Plans for Future EPYC™ CPUs


A video discussion between Charles Liang, Supermicro CEO, and Dr. Lisa Su, AMD CEO.

 

Learn More about this topic
  • Applications:
  • Featured Technologies:

Higher clock rates, more cores and larger onboard memory caches are some of the traditional areas of improvement for generational CPU upgrades. Performance improvements are almost a given with a new-generation CPU. Increasingly, however, the more difficult challenges for data centers and performance-intensive computing are energy efficiency and managing heat. Energy costs have spiked in many parts of the world, and “performance per watt” is what many companies are looking for. AMD’s 4th-gen EPYC™ CPU runs a little hotter than its predecessor, but its performance gains far outpace the thermal rise, making for much greater performance per watt. It’s a trade-off that makes sense, especially for performance-intensive computing, such as HPC and technical computing applications.
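The performance-per-watt trade-off described above can be sketched with a quick calculation. The figures below are made up for illustration, not AMD benchmark data:

```python
def perf_per_watt(score: float, watts: float) -> float:
    """Performance-per-watt metric: benchmark score divided by power draw."""
    return score / watts

# Hypothetical numbers: the newer CPU draws 25% more power but scores 75% higher.
old = perf_per_watt(score=100.0, watts=200.0)   # 0.5 points per watt
new = perf_per_watt(score=175.0, watts=250.0)   # 0.7 points per watt

improvement = (new - old) / old
print(f"perf/W improvement: {improvement:.0%}")  # → perf/W improvement: 40%
```

A chip can run hotter in absolute terms and still come out well ahead on this metric, which is the point the discussion makes.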

In addition to the energy efficiency and heat dissipation concerns, Dr. Su and Mr. Liang discuss the importance of the AMD EPYC™ roadmap. You’ll learn one or two nuances about AMD’s plans. Supermicro is ready with 15 products that leverage Genoa, AMD’s fourth-generation EPYC™ CPU. This under-15-minute video, recorded on November 15, 2022, will bring you up to date on all things AMD EPYC™. Click the link to see the video:

Supermicro & AMD CEOs Video – The Future of Data Center Computing

 

 

 

 

Featured videos


Events


Find AMD & Supermicro Elsewhere

Related Content

Match CPU Options to Your Apps and Workloads to Maximize Efficiency


The CPU package is configurable at the time of purchase with various options that you can match to the specific characteristics of your workloads. Ask yourself the three questions this story poses.


In a previous post, Performance-Intensive Computing explored the benefits of making your applications and workloads more parallel. Chief among the paybacks is the ability to take advantage of the latest innovations in performance-intensive computing.

 

Although it isn’t strictly a parallel approach, the CPU package is configurable at the time of purchase with various options that you can match up to the specific characteristics of your workloads. This story outlines how to match those features to the processors best suited to your particular collection of applications. For starters, ask yourself these three questions:

 

Question 1. Does your application require a great deal of memory and storage? Memory-bound apps typically arise when an application has to manipulate a large amount of data. To alleviate potential bottlenecks, purchase a CPU with the largest possible onboard caches to avoid swapping data from storage. Apps such as Reveal and others used in the oil and gas industry typically require large onboard CPU caches to help prevent memory bottlenecks as data moves in and out of the processor.

 

Question 2. Do you have the right amount and type of storage for your data requirements? Storage has a lot of different parameters and how it interacts with the processor and your application isn’t one-size-fits-all. Performance-Intensive Computing has previously written about specialized file systems such as the one developed and sold by WekaIO that can aid in onboarding and manipulating large data collections.

 

Question 3. Does your application spend a lot of time communicating across networks, or is it bound by the limits of your processor? Either situation might mean you need CPUs with more cores and/or higher clock speeds. This is the case, for example, with molecular dynamics apps such as GROMACS and LAMMPS. These situations might call for parts such as AMD’s Threadripper.

 

As you can see, figuring out the right kind of CPU – and its supporting chipsets – is a lot more involved than just purchasing the highest clock speed and largest number of cores. Knowing your data and applications will guide you to buying CPU hardware that makes your business more efficient.
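The three questions above can be condensed into a toy decision helper. The trait names and recommendations below are illustrative assumptions, not vendor guidance:

```python
def recommend_cpu_traits(memory_bound: bool, io_heavy: bool,
                         compute_bound: bool) -> list[str]:
    """Map coarse workload traits to the CPU/platform features to prioritize."""
    traits = []
    if memory_bound:    # Question 1: large working sets moving through the CPU
        traits.append("large onboard caches and high memory bandwidth")
    if io_heavy:        # Question 2: heavy storage/network interaction
        traits.append("fast storage interconnect and a parallel file system")
    if compute_bound:   # Question 3: CPU-limited codes (e.g., molecular dynamics)
        traits.append("more cores and/or higher clock speeds")
    return traits or ["a balanced mainstream part"]

# Example: a simulation code that is both memory- and compute-bound.
print(recommend_cpu_traits(memory_bound=True, io_heavy=False, compute_bound=True))
```

Real sizing decisions involve benchmarking, but the shape of the reasoning is the same: characterize the workload first, then pick the silicon.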


Locating Where to Drill for Oil in Deep Waters with Supermicro SuperServers® and AMD EPYC™ CPUs


Energy company Petrobras, based in Brazil, is using high-performance computing techniques to aid its oil and gas exploration, especially in deep-water situations. Petrobras used system integrator Atos to provide more than 250 Supermicro SuperServers. The cluster, named Pegaso, is ranked No. 33 on the current Top500 list.


Brazilian energy company Petrobras is using high-performance computing techniques to aid its oil and gas exploration, especially in deep-water situations. These techniques can help reduce costs and make finding and extracting new hydrocarbon deposits quicker. Petrobras' geoscientists and software engineers quickly modify algorithms to take advantage of new capabilities as new CPU and GPU technologies become available.

 

The energy company used system integrator Atos to provide more than 250 Supermicro SuperServer AS-4124GO-NART+ servers running dual AMD EPYC™ 7512 processors. The cluster goes by the name Pegaso (Portuguese for the mythological horse Pegasus) and is currently listed at number 33 on the Top500 list of the fastest computing systems. Atos is a global leader in digital transformation with 112,000 employees worldwide. It has built other systems that have appeared on the Top500 list, and AMD powers 38 of them.

 

Petrobras has had three other systems listed on previous iterations of the Top500 list, using other processors. Pegaso is now the largest supercomputer in South America and is expected to become fully operational next month. Each of its servers runs CentOS and has 2TB of memory, for a total of 678TB. The cluster contains more than 230,000 processor cores, runs more than 2,000 GPUs and is connected via an InfiniBand HDR networking system running at 400Gb/s. To give you an idea of how much gear is involved in Pegaso, it took more than 30 truckloads to deliver and comprises over 30 tons of hardware.
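As a quick sanity check, the memory figures quoted above pin down the server count by simple arithmetic:

```python
# Back-of-the-envelope check of the Pegaso memory figures:
# 2 TB of memory per server and 678 TB in total imply the server count.
total_memory_tb = 678
per_server_tb = 2

servers = total_memory_tb // per_server_tb
print(servers)  # → 339
```

That result is consistent with the "more than 250 Supermicro SuperServers" figure cited earlier in the story.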

 

The geophysics team has a series of applications that require all this computing power, including seismic acquisition apps that collect data, which is then processed to deliver high-resolution subsurface imaging that precisely locates the oil and gas deposits. The GPU accelerators in the cluster help reduce processing time, so the drilling teams can position their rigs more precisely.

 

For more information, see this case study about Pegaso.


Choosing the Right AI Infrastructure for Your Needs


AI architecture must scale effectively without sacrificing cost efficiency. One size does not fit all.


Building an agile, cost-effective environment that delivers on a company’s present and long-term AI strategies can be a challenge, and the impact of decisions made around that architecture will have an outsized effect on performance.

 

“AI capabilities are probably going to be 10%-15% of the entire infrastructure,” says Ashish Nadkarni, IDC group vice president and general manager, infrastructure systems, platforms and technologies. “But the amount the business relies on that infrastructure, the dependence on it, will be much higher. If that 15% doesn’t behave in the way that is expected, the business will suffer.”

 

Experts like Nadkarni note that companies can, and should, avail themselves of cloud-based options to test and ramp up AI capabilities. But the costs associated with cloud computing can rise significantly as workloads scale or the enterprise expands its usage, making on-premises architecture a valid alternative worth considering.

 

No matter the industry, to build a robust and effective AI infrastructure, companies must first accurately diagnose their AI needs. What business challenges are they trying to solve? What forms of high-performance computing power can deliver solutions? What type of training is required to deliver the right insights from data? And what’s the most cost-effective way for a company to support AI workloads at scale and over time? Cloud may be the answer to get started, but for many companies on-prem solutions are viable alternatives.

 

“It’s a matter of finding the right configuration that delivers optimal performance for [your] workloads,” says Michael McNerney, vice president of marketing and network security at Supermicro, a leading provider of AI-capable, high-performance servers, management software and storage systems. “How big is your natural language processing or computer vision model, for example? Do you need a massive cluster for AI training? How critical is it to have the lowest latency possible for your AI inferencing? If the enterprise does not have massive models, does it move down the stack into smaller models to optimize infrastructure and cost on the AI side as well as in compute, storage and networking?”

 

Get perspective on these and other questions about selecting the right AI infrastructure for your business in the Nov. 20, 2022, Wall Street Journal paid program article:

 

Investing in Infrastructure

 


Supermicro H13 Servers Maximize Your High-Performance Data Center



The modern data center must be both highly performant and energy efficient. Massive amounts of data are generated at the edge and then analyzed in the data center. New CPU technologies are constantly being developed that can analyze data, determine the best course of action, and speed up the time to understand the world around us and make better decisions.

With digital transformation continuing, a wide range of data acquisition, storage and computing systems continues to evolve with each CPU generation. The latest CPU generations continue to innovate within their core computational units and in the technology used to communicate with memory, storage devices, networking and accelerators.

Servers, and by extension the CPUs within them, form a continuum of computing and I/O power. The combination of cores, clock rates, memory access, path width and performance suits specific servers to specific workloads. In addition, the server that houses the CPUs may take different form factors for environments with airflow or power restrictions. The key for a server manufacturer to address a wide range of applications is a building-block approach to designing new systems. In this way, a range of systems can be released simultaneously in many form factors, each tailored to its operating environment.

The new H13 Supermicro product line, based on 4th Generation AMD EPYC™ CPUs, supports a broad spectrum of workloads and excels at helping a business achieve its goals.

Get speeds, feeds and other specs on Supermicro’s latest line-up of servers


Supermicro Debuts New H13 Server Solutions Using AMD’s 4th-Gen EPYC™ CPUs



Last week, Supermicro announced its new H13 A+ server solutions, featuring the latest fourth-generation AMD EPYC™ processors. The new AMD “Genoa”-class Supermicro A+ configurations can handle up to 96 Zen 4 CPU cores and up to 6TB of 12-channel DDR5 memory, with a separate channel for each memory DIMM.

The various systems are designed to support the highest performance-intensive computing workloads over a wide range of storage, networking and I/O configuration options. They also feature tool-less chassis and hot-swappable modules for easier access to internal parts, as well as I/O drive trays on both front and rear panels. All the new equipment can handle a range of power conditions, including operation from 120 to 480 volts AC and 48-volt DC power attachments.

The new H13 systems have been optimized for AI, machine learning and complex calculation tasks for data analytics and other kinds of HPC applications. Supermicro’s 4th-Gen AMD EPYC™ systems employ the latest PCIe 5.0 connectivity throughout their layouts to speed data flows and provide high network and cluster internetworking performance. At the heart of these systems is the AMD EPYC™ 9004 series CPUs, which were also announced last week.

The Supermicro H13 GrandTwin® systems can handle up to six hot-pluggable SATA3 or NVMe drive bays. The H13 CloudDC systems come in 1U and 2U chassis designed for cloud-based workloads and data centers; they can handle up to 12 hot-swappable drive bays and support Open Compute Project (OCP) I/O modules. Supermicro has also announced its H13 Hyper configuration for dual-socket systems. All of the twin-socket server configurations support 160 PCIe 5.0 data lanes.

There are several GPU-intensive configurations in another series of 4U- and 8U-sized servers that can support up to 10 PCIe GPU accelerator cards, including the latest graphics processors from AMD and Nvidia. The 4U family of servers supports both AMD Infinity Fabric Link and NVIDIA NVLink Bridge technologies, so users can choose the right balance of computation, acceleration, I/O and local storage specifications.

To get a deep dive on H13 products, including speeds, feeds and specs, download this whitepaper from the Supermicro site: Supermicro H13 Servers Enable High-Performance Data Centers.


How the New EPYC CPUs Deliver System-on-Chip Electronics


CPU chipsets are not normally considered systems-on-chip (SoC), but the fourth generation of AMD EPYC processors incorporates extensive I/O functionality at a high level of integration.

CPU chipsets are not normally considered systems-on-chip (SoC), but the fourth generation of AMD EPYC processors incorporates extensive I/O functionality at a high level of integration. Previous generations delivered this functionality on external chipsets. The SoC design helps reduce power consumption and packaging costs and improves data throughput by reducing interconnect latencies.
 
The new EPYC processors have 12 DDR5 memory controllers – 50 percent more than any other x86 CPU – which keeps up with the higher memory demands of performance-intensive computing applications. As we mentioned in an earlier blog, these controllers also include inline encryption engines supporting AMD’s Infinity Guard features, including an integrated security processor that establishes a secure root of trust and handles other security tasks.
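A rough sense of what 12 memory controllers buy you: assuming DDR5-4800 (the top speed cited elsewhere in this series) and a 64-bit data path per channel, the theoretical peak bandwidth works out as follows. Real sustained bandwidth will be lower than this peak:

```python
# Theoretical peak bandwidth for a 12-channel DDR5-4800 memory subsystem.
channels = 12
transfers_per_sec = 4.8e9   # DDR5-4800: 4,800 mega-transfers per second
bytes_per_transfer = 8      # 64-bit (8-byte) data path per channel

per_channel_gbs = transfers_per_sec * bytes_per_transfer / 1e9  # 38.4 GB/s
total_gbs = per_channel_gbs * channels                          # 460.8 GB/s
print(f"{total_gbs:.1f} GB/s theoretical peak")
```

That headroom is what lets a single socket feed memory-hungry HPC codes without starving the cores.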
 
They also include 128 or 160 lanes of PCIe Gen 5 connectivity, which likewise helps with the higher I/O throughput of these more demanding applications. These lanes share the same physical interfaces as the Infinity Fabric connectors and provide remote memory access among CPUs at up to 36 GBps between sockets. The new Zen 4 CPU cores can make use of one or two interfaces.
 
The PCIe Gen 5 I/O is implemented in the I/O die with eight serializer/deserializer (SerDes) controllers, each with an independent set of traces supporting a 16-lane PCIe port.
 
 


AMD’s Infinity Guard Selected by Google Cloud for Confidential Computing


Google Cloud has been working over the past several years with AMD on developing new on-chip security protocols. More on the release of the AMD EPYC™ 9004 Series processors in part three of this four-part series.


 
 
Google Cloud has been working over the past several years with AMD on developing new on-chip security protocols, which have seen further innovation with the release of the AMD EPYC™ 9004 Series processors. These have a direct benefit for performance-intensive computing applications, particularly in supporting higher-density virtual machines (VMs), protecting data from leaving the confines of what Google calls confidential VMs, and further isolating VM hypervisors. Google Cloud offers a collection of N2D and C2D instances that support these confidential VMs.
 
“Product security is always our top focus,” said AMD CTO Mark Papermaster. “We are continuously investing and collaborating in the security of these technologies.” 
 
Royal Hansen, VP of engineering for Google Cloud, said: “Our customers expect the most trustworthy computing experience on the planet. Google and AMD have a long history and a variety of relationships with the deepest experts on security and chip development. This was at the core of our going to market with AMD’s security solutions for datacenters.”
 
The two companies also worked together on this security analysis.
 
Collectively called Infinity Guard, the security technologies they’ve been working on involve four initiatives:
 
1. Secure encrypted virtualization provides each VM with its own unique encryption key known only to the processor.
 
2. Secure nested paging complements this virtualization to protect each VM from any malicious hypervisor attacks and provide for an isolated and trusted environment.
 
3. AMD’s secure boot, along with Trusted Platform Module attestation of the confidential VMs, happens every time a VM boots, ensuring its integrity and mitigating any persistent threats.
 
4. AMD’s secure memory encryption, integrated into the memory channels, speeds performance.
 
These technologies are combined and communicate using the AMD Infinity Fabric pathways to deliver breakthrough performance along with more secure communications.
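On Linux, several of these Infinity Guard capabilities surface as CPU feature flags. The sketch below parses a /proc/cpuinfo-style dump for the SEV-family flags; the sample string is hypothetical, and on a real host you would read /proc/cpuinfo itself:

```python
def sev_features(cpuinfo_text: str) -> set[str]:
    """Return which SEV/SME-family flags appear in a /proc/cpuinfo-style dump."""
    wanted = {"sme", "sev", "sev_es", "sev_snp"}
    flags: set[str] = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The flags line looks like "flags : fpu sse2 ... sme sev ..."
            flags.update(line.split(":", 1)[1].split())
    return wanted & flags

# Hypothetical sample; real output comes from open("/proc/cpuinfo").read().
sample = "flags\t\t: fpu sse2 sme sev sev_es sev_snp"
print(sorted(sev_features(sample)))  # → ['sev', 'sev_es', 'sev_snp', 'sme']
```

A confidential-VM guest would typically see only a subset of these, since some features are exposed to the hypervisor rather than the guest.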
 


Understanding the New Core Architecture of the AMD EPYC 9004 Series Processors


AMD’s announcement of its fourth generation EPYC 9004 Series processors includes major advances in how these chipsets are designed and produced. Part 2 of 4.

AMD’s announcement of its fourth generation EPYC 9004 Series processors includes major advances in how these chipsets are designed and produced for delivering the highest performance levels. These advances involve using a hybrid multi-die architecture.
 
This architecture makes use of two different production processes for the cores and the I/O pathways: the former uses 5-nanometer dies, while the latter uses 6-nanometer dies. Each processor package can have up to 12 CPU dies, each with eight cores, for a total of 96 cores in the maximum configuration. Each eight-core die has its own set of eight dedicated 1 MB L2 caches and shares a 32 MB L3 cache, as shown in the diagram below.
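The package-level totals implied by this topology follow from simple multiplication:

```python
# Core and cache totals for the maximum EPYC 9004 configuration described above:
# 12 compute dies, 8 Zen 4 cores per die, 1 MB of private L2 per core,
# and a 32 MB L3 shared within each die.
dies = 12
cores_per_die = 8
l2_per_core_mb = 1
l3_per_die_mb = 32

total_cores = dies * cores_per_die           # 96 cores
total_l2_mb = total_cores * l2_per_core_mb   # 96 MB of L2
total_l3_mb = dies * l3_per_die_mb           # 384 MB of L3
print(total_cores, total_l2_mb, total_l3_mb)  # → 96 96 384
```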
 
[Diagram: 32 MB L3 cache]
In addition to these changes, AMD announced improvements, collectively called Zen 4, that boost instructions-per-clock counts and overall clock speeds. AMD promises roughly 29 percent faster single-core CPU performance in Zen 4 relative to Zen 3, a figure affirmed by Ars Technica’s tests earlier this fall. (Zen 3 chips used the older 7-nanometer dies.)
 
 
This configuration provides a great deal of flexibility in how the CPU, memory channels, and I/O paths are arranged. The multi-die setup can reduce fabrication waste and offer better parallel processing support. In addition, AMD EPYC processors are produced in single and dual socket configurations, with the latter offering more I/O pathways and dedicated PCIe generation 5 I/O connections.
 


AMD Announces Fourth-Generation EPYC™ CPUs with the 9004 Series Processors


AMD announces its fourth-generation EPYC™ CPUs. The new EPYC 9004 Series processors demonstrate advances in hybrid, multi-die architecture by decoupling core and I/O processes. Part 1 of 4.

AMD recently announced its fourth-generation EPYC™ CPUs. This generation provides innovative solutions that can satisfy the most demanding performance-intensive computing requirements for cloud computing, AI and highly parallelized data analytics applications. The design decisions AMD made for this processor generation strike a good balance among specifications, including higher CPU power and I/O performance, latency reductions and improvements in overall data throughput. This lets a single CPU socket address an increasingly large world of complex workloads.
 
The new AMD EPYC™ 9004 Series processors demonstrate advances in hybrid, multi-die architecture by decoupling core and I/O processes. The new chip dies support 12 DDR5 memory channels, doubling the I/O throughput of previous generations. The new CPUs also increase core counts from 64 cores in the previous EPYC 7003 chips to 96 cores in the new chips using 5-nanometer processes. The new generation of chips also increases the maximum memory capacity from 4TB of DDR4-3200 to 6TB of DDR5-4800 memory.
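The maximum-memory figures decompose neatly per channel. The 8-channel count for the prior generation is an assumption drawn from AMD's published EPYC 7003 specifications, not stated in this article:

```python
# Per-channel capacity implied by the maximum-memory figures quoted above.
new_channels = 12          # DDR5 channels on EPYC 9004
new_max_gb = 6 * 1024      # 6 TB of DDR5-4800

old_channels = 8           # assumed: DDR4 channels on EPYC 7003
old_max_gb = 4 * 1024      # 4 TB of DDR4-3200

print(new_max_gb // new_channels)  # → 512 (GB per channel)
print(old_max_gb // old_channels)  # → 512 (GB per channel)
```

Under that assumption, per-channel capacity held steady; the jump from 4TB to 6TB comes from adding channels, while the bandwidth gain comes from both the extra channels and DDR5's higher transfer rate.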
 
 
 
There are three major innovations evident in the AMD EPYC™ 9004 processor series:
  1. A new hybrid multi-die chip architecture, coupled with multi-processor server innovations, a new and more advanced Zen 4 instruction set, and support for increased dedicated L2 and shared L3 cache storage
  2. Security enhancements to AMD’s Infinity Guard
  3. Advances to system-on-chip designs that extend and enhance AMD Infinity switching fabric technology
Taken together, the new AMD EPYC™ 9004 Series processors offer plenty of innovation and performance advantage. The new processors deliver better performance per watt of power consumed and better per-core performance, too.
 
