Perspective: Don’t Back into Performance-Intensive Computing


To compete in the marketplace, enterprises are increasingly employing performance-intensive tools and applications like machine learning, artificial intelligence, data-driven insights and decision-support analytics, technical computing, big data, modeling and simulation, cryptocurrency and other blockchain applications, automation and high-performance computing to differentiate their products and services.

 

In doing so, they may be unintentionally backing into performance-intensive computing because these technologies are computationally and/or data intensive. Without thinking through the compute performance you need as measured against your most demanding workloads – now and at least two years from now – you’re setting yourself up for failure or unnecessary expense. When it comes to performance-intensive computing: plan, don’t dabble.

 

There are questions you should ask before jumping in, too. In the cloud or on-premises? There are pluses and minuses to each. Is your data highly distributed? If so, you’ll need network services that won’t become a bottleneck. There’s a long list of environmental and technology requirements for making performance-intensive computing pay off. Among them is the ability to scale. And, of course, planning and building out your environment in advance of your need is vastly preferable to stumbling into it.

 

The requirement that sometimes gets short shrift is organizational. Ultimately, this is about revealing data with which your company can make strategic decisions. There’s no longer anything mundane about enterprise technology and especially the data it manages. It has become so important that virtually every department in your company affects and is affected by it. If you double down on computational performance, the C-suite needs to be fully represented in how you use that power, not just the approval process. Leaving top leadership, marketing, finance, tax, design, manufacturing, HR or IT out of the picture would be a mistake. And those are just sample company building blocks. You also need measurable, meaningful metrics that will help your people determine the ROI of your efforts. Even so, it’s people who make the leap of faith that turns data into ideas.

 

Finally, if you don’t already have the expertise on staff to learn the ins and outs of this endeavor, hire or contract or enter into a consulting arrangement with smart people who clearly have the chops to do this right. You don’t want to be the company with a rocket ship that no one can fly.

 

So, don’t back into performance-intensive computing. But don’t back out of it either. Being able to take full advantage of your data at scale can play an important role in ensuring the viability of your company going forward.

 


Some Key Drivers behind AMD’s Plans for Future EPYC™ CPUs


A video discussion between Charles Liang, Supermicro CEO, and Dr. Lisa Su, AMD CEO.

 


Higher clock rates, more cores and larger onboard memory caches are some of the traditional areas of improvement for generational CPU upgrades. Performance improvements are almost a given with a new-generation CPU. Increasingly, however, the more difficult challenges for data centers and performance-intensive computing are energy efficiency and managing heat. Energy costs have spiked in many parts of the world, and “performance per watt” is what many companies are looking for. AMD’s 4th-gen EPYC™ CPU runs a little hotter than its predecessor, but its performance gains far outpace the thermal rise, making for much greater performance per watt. It’s a trade-off that makes sense, especially for performance-intensive computing, such as HPC and technical computing applications.

In addition to the energy efficiency and heat dissipation concerns, Dr. Su and Mr. Liang discuss the importance of the AMD EPYC™ roadmap. You’ll learn one or two nuances about AMD’s plans. Supermicro is ready with 15 products that leverage Genoa, AMD’s fourth-generation EPYC™ CPU. This under-15-minute video, recorded on November 15, 2022, will bring you up to date on all things AMD EPYC™. Click the link to see the video:

Supermicro & AMD CEOs Video – The Future of Data Center Computing

 

 

 

 


Match CPU Options to Your Apps and Workloads to Maximize Efficiency


The CPU package is configurable at the time of purchase with various options that you can match up to the specific characteristics of your workloads. Ask yourself the three questions this story poses.


In a previous post, Performance-Intensive Computing explored the benefits of making your applications and workloads more parallel. Chief among the payoffs is being able to take advantage of the latest innovations in performance-intensive computing.

 

Although it isn’t strictly a parallel approach, the CPU package is configurable at the time of purchase with various options that you can match up to the specific characteristics of your workloads. The goal of this story is to outline how to match those features to your needs and purchase the best processors for your particular application collection. For starters, ask yourself these three questions:

 

Question 1. Does your application require a great deal of memory and storage? Memory-bound apps are typically found when an application has to manipulate a large amount of data. To alleviate potential bottlenecks, purchase a CPU with the largest possible onboard caches to avoid swapping data from storage. Apps such as Reveal and others used in the oil and gas industry will typically require large onboard CPU caches to help prevent memory bottlenecks as data moves in and out of the processor.

 

Question 2. Do you have the right amount and type of storage for your data requirements? Storage has a lot of different parameters and how it interacts with the processor and your application isn’t one-size-fits-all. Performance-Intensive Computing has previously written about specialized file systems such as the one developed and sold by WekaIO that can aid in onboarding and manipulating large data collections.

 

Question 3. Does your application spend a lot of time communicating across networks, or is it bound by the limits of your processor? Either situation might mean you need CPUs with more cores and/or higher clock speeds. This is the case, for example, with molecular dynamics apps such as GROMACS and LAMMPS. These situations might call for parts such as AMD’s Threadripper.
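Before you go shopping, it can help to measure which of these resources your current workload actually saturates. Below is a minimal, illustrative Python sketch, assuming the psutil package is installed and a representative workload is already running; the thresholds are arbitrary placeholders, not recommendations.

import psutil

SAMPLE_SECONDS = 30

# Snapshot network counters, then let psutil average CPU times over the window.
net_start = psutil.net_io_counters()
cpu = psutil.cpu_times_percent(interval=SAMPLE_SECONDS)
net_end = psutil.net_io_counters()
mem = psutil.virtual_memory()

net_mbps = ((net_end.bytes_sent + net_end.bytes_recv)
            - (net_start.bytes_sent + net_start.bytes_recv)) * 8 / 1e6 / SAMPLE_SECONDS
cpu_busy = 100.0 - cpu.idle
io_wait = getattr(cpu, "iowait", 0.0)  # reported on Linux only

print(f"CPU busy {cpu_busy:.1f}%  I/O wait {io_wait:.1f}%  "
      f"memory used {mem.percent:.1f}%  network {net_mbps:.1f} Mbit/s")

# The thresholds below are arbitrary placeholders for illustration.
if io_wait > 20:
    print("Looks storage- or memory-bound: revisit caches, RAM and file systems (Questions 1 and 2).")
elif net_mbps > 1000:
    print("Looks network-bound: revisit the interconnect (Question 3).")
elif cpu_busy > 80:
    print("Looks compute-bound: more cores or higher clocks may help (Question 3).")
else:
    print("No single obvious bottleneck over this sampling window.")

Run it while the application is under realistic load; a single 30-second sample is only a rough screen, so repeat it across the workload’s busiest phases.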

 

As you can see, figuring out the right kind of CPU – and its supporting chipsets – is a lot more involved than just purchasing the highest clock speed and largest number of cores. Knowing your data and applications will guide you to buying CPU hardware that makes your business more efficient.


Locating Where to Drill for Oil in Deep Waters with Supermicro SuperServers® and AMD EPYC™ CPUs


Energy company Petrobras, based in Brazil, is using high-performance computing techniques to aid its oil and gas exploration, especially in deep-water situations. Petrobras used system integrator Atos to provide more than 250 Supermicro SuperServers. The cluster is ranked number 33 on the current top500 list and goes by the name Pegaso.


Brazilian energy company Petrobras is using high-performance computing techniques to aid its oil and gas exploration, especially in deep-water situations. These techniques can help reduce costs and make finding and extracting new hydrocarbon deposits quicker. Petrobras' geoscientists and software engineers quickly modify algorithms to take advantage of new capabilities as new CPU and GPU technologies become available.

 

The energy company used system integrator Atos to provide more than 250 Supermicro SuperServer AS-4124GO-NART+ servers running dual AMD EPYC™ 7512 processors. The cluster goes by the name Pegaso (Portuguese for the mythological horse Pegasus) and is currently listed at number 33 on the top500 list of the fastest computing systems. Atos is a global leader in digital transformation with 112,000 employees worldwide. Atos has built other systems that have appeared on the top500 list, and AMD powers 38 of them.

 

Petrobras has had three other systems listed on previous iterations of the Top500 list, using other processors. Pegaso is now the largest supercomputer in South America and is expected to become fully operational next month. Each of its servers runs CentOS and has 2TB of memory, for a total of 678TB across the cluster. The cluster contains more than 230,000 processor cores, runs more than 2,000 GPUs and is connected via an InfiniBand HDR networking system running at 400Gb/s. To give you an idea of how much gear is involved, Pegaso took more than 30 truckloads to deliver and comprises over 30 tons of hardware.

 

The geophysics team has a series of applications that require all this computing power, including seismic acquisition apps that collect data, which is then processed to deliver high-resolution subsurface imaging that precisely locates oil and gas deposits. Having GPU accelerators in the cluster helps reduce the processing time, so that the drilling teams can position their rigs more precisely.

 

For more information, see this case study about Pegaso.


Supermicro H13 Servers Maximize Your High-Performance Data Center


The modern data center must be both highly performant and energy efficient. Massive amounts of data are generated at the edge and then analyzed in the data center. New CPU technologies are constantly being developed that can analyze data, determine the best course of action, and speed up the time to understand the world around us and make better decisions.

With the digital transformation continuing, a wide range of data acquisition, storage and computing systems continue to evolve with each CPU generation. The latest CPU generations continue to innovate within their core computational units and in the technology used to communicate with memory, storage devices, networking and accelerators.

Servers, and by extension the CPUs within them, form a continuum of computing and I/O power. The combination of cores, clock rates, memory access, path width and performance determines which servers suit which workloads. In addition, the server that houses the CPUs may take different form factors to suit environments with airflow or power restrictions. The key for a server manufacturer that wants to address a wide range of applications is to use a building-block approach to designing new systems. In this way, a range of systems can be released simultaneously in many form factors, each tailored to its operating environment.

The new H13 Supermicro product line, based on 4th Generation AMD EPYC™ CPUs, supports a broad spectrum of workloads and excels at helping a business achieve its goals.

Get speeds, feeds and other specs on Supermicro’s latest line-up of servers


Manage Your HPC Resources with Supermicro's SuperCloud Composer


Today’s data center has numerous challenges: provisioning hardware and cloud workloads, balancing the needs of performance-intensive applications across compute, storage and network resources, and having a consistent monitoring and analytics framework to feed intelligent systems management. Plus, you may have the need to deploy or re-deploy all these resources as needs shift, moment to moment.

Supermicro has created its own tool to monitor and manage this broad IT portfolio, called SuperCloud Composer (SCC). It combines a standardized web-based interface based on Open Distributed Infrastructure Management with a unified dashboard built on the Redfish message bus and service agents.
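SCC’s dashboard builds on the industry-standard Redfish management interface. As a general illustration of what a Redfish inventory query looks like (this is not SCC’s own API, and the address and credentials below are placeholders), a few lines of Python can walk the systems collection exposed by any Redfish-compliant controller:

import requests

BMC = "https://10.0.0.42"        # placeholder BMC/management address
AUTH = ("admin", "password")     # placeholder credentials

def redfish_get(path):
    # Many BMCs ship self-signed certificates, hence verify=False in this sketch.
    resp = requests.get(f"{BMC}{path}", auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()

systems = redfish_get("/redfish/v1/Systems")
for member in systems.get("Members", []):
    system = redfish_get(member["@odata.id"])
    print(system.get("Model"),
          system.get("PowerState"),
          system.get("ProcessorSummary", {}).get("Count"), "CPUs,",
          system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"), "GiB RAM")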

SCC can track the various resources and assign them to different pools with its own predictive analytics and telemetry. It delivers a single intelligent management solution that covers both existing on-premises IT equipment as well as a more software-defined cloud collection. Additional details can be found in this SuperCloud Composer white paper.

SuperCloud Composer makes use of a cluster-level PCIe network using FabreX software from GigaIO Networks. It can flexibly scale storage systems up and out while using the lowest-latency paths available.

It also supports Weka.IO cluster members, which can be deployed across multiple systems simultaneously. See our story The Perfect Combination: The Weka Next-Gen File System, Supermicro A+ Servers and AMD EPYC™ CPUs.

SCC can create automated installation playbooks in Ansible, including a software boot image repository that can quickly deploy new images across the server infrastructure. It has a fast-deploy feature that allows a new image to be deployed within seconds.

SuperCloud Composer offers a robust analytics engine that collects historical and up-to-date analytics stored in an indexed database within its framework. This data can produce a variety of charts, graphs and tables so that users can better visualize what is happening with their server resources. Each end user gets analytics-capable charting covering IOPS, network, telemetry, thermal, power, composed-node status, storage allocation and system status.

Last but not least, SCC also has network provisioning and storage fabric provisioning features, where build plans are pushed to data or fabric switches as either single-threaded or multithreaded operations, so that multiple switches can be updated simultaneously from shared or unique build-plan templates.

For more information, watch this short SCC explainer video. Or schedule an online demo of SCC and request a free 90-day trial of the software.


Supermicro Debuts New H13 Server Solutions Using AMD’s 4th-Gen EPYC™ CPUs


Last week, Supermicro announced its new H13 A+ server solutions, featuring the latest fourth-generation AMD EPYC™ processors. The new AMD “Genoa”-class Supermicro A+ configurations will be able to handle up to 96 Zen4 CPU cores running up to 6TB of 12-channel DDR5 memory, using a separate channel for each stick of memory.

The various systems are designed to support the highest performance-intensive computing workloads over a wide range of storage, networking and I/O configuration options. They also feature tool-less chassis and hot-swappable modules for easier access to internal parts, as well as I/O drive trays on both front and rear panels. All the new equipment can handle a range of power conditions, including 120 to 480 volts AC operation and 48-volt DC power inputs.

The new H13 systems have been optimized for AI, machine learning and complex calculation tasks for data analytics and other kinds of HPC applications. Supermicro’s 4th-Gen AMD EPYC™ systems employ the latest PCIe 5.0 connectivity throughout their layouts to speed data flows and provide high network and cluster internetworking performance. At the heart of these systems is the AMD EPYC™ 9004 series CPUs, which were also announced last week.

The Supermicro H13 GrandTwin® systems can handle up to six SATA3 or NVMe drive bays, which are hot-pluggable. The H13 CloudDC systems come in 1U and 2U chassis designed for cloud-based workloads and data centers, handling up to 12 hot-swappable drive bays and supporting Open Compute Platform I/O modules. Supermicro has also announced its H13 Hyper configuration for dual-socket systems. All of the twin-socket server configurations support 160 PCIe 5.0 data lanes.

There are several GPU-intensive configurations in another series of 4U- and 8U-sized servers that can support up to 10 GPU PCIe accelerator cards, including the latest graphics processors from AMD and Nvidia. The 4U family of servers supports both AMD Infinity Fabric Link and NVIDIA NVLink Bridge technologies, so users can choose the right balance of computation, acceleration, I/O and local storage specifications.

To get a deep dive on H13 products, including speeds, feeds and specs, download this whitepaper from the Supermicro site: Supermicro H13 Servers Enable High-Performance Data Centers.


AMD Announces Fourth-Generation EPYC™ CPUs with the 9004 Series Processors


AMD announces its fourth-generation EPYC™ CPUs. The new EPYC 9004 Series processors demonstrate advances in hybrid, multi-die architecture by decoupling core and I/O processes. Part 1 of 4.

AMD very recently announced its fourth-generation EPYC™ CPUs. This generation will provide innovative solutions that can satisfy the most demanding performance-intensive computing requirements for cloud computing, AI and highly parallelized data analytics applications. The design decisions AMD made for this processor generation strike a good balance among specifications, including higher CPU power and I/O performance, latency reductions and improvements in overall data throughput. This lets a single CPU socket address an increasingly larger world of complex workloads.
 
The new AMD EPYC™ 9004 Series processors demonstrate advances in hybrid, multi-die architecture by decoupling core and I/O processes. The new chip dies support 12 DDR5 memory channels, doubling the I/O throughput of previous generations. The new CPUs also increase core counts from 64 cores in the previous EPYC 7003 chips to 96 cores in the new chips using 5-nanometer processes. The new generation of chips also increases the maximum memory capacity from 4TB of DDR4-3200 to 6TB of DDR5-4800 memory.
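A quick back-of-the-envelope calculation shows what the wider, faster memory subsystem means for theoretical peak bandwidth per socket. This sketch assumes the standard 64-bit (8-byte) data path per memory channel; real-world throughput will be lower than these peaks.

def peak_memory_bw_gbs(channels, mega_transfers_per_sec, bytes_per_transfer=8):
    # Peak bandwidth in GB/s = channels x MT/s x bytes per transfer / 1000
    return channels * mega_transfers_per_sec * bytes_per_transfer / 1000

ddr4 = peak_memory_bw_gbs(channels=8, mega_transfers_per_sec=3200)    # prior-gen EPYC 7003
ddr5 = peak_memory_bw_gbs(channels=12, mega_transfers_per_sec=4800)   # EPYC 9004

print(f"8 x DDR4-3200 : {ddr4:.1f} GB/s")    # ~204.8 GB/s per socket
print(f"12 x DDR5-4800: {ddr5:.1f} GB/s")    # ~460.8 GB/s per socket
print(f"Ratio         : {ddr5 / ddr4:.2f}x") # roughly 2.25x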
 
 
 
There are three major innovations evident in the AMD EPYC™ 9004 processor series:
  1. A new hybrid, multi-die chip architecture, coupled with multi-processor server innovations, a more advanced Zen 4 instruction set and support for larger dedicated L2 and shared L3 caches
  2. Security enhancements to AMD’s Infinity Guard
  3. Advances to system-on-chip designs that extend and enhance AMD Infinity switching fabric technology

Taken together, the new AMD EPYC™ 9004 series processors offer plenty of innovation and performance advantage. The new processors deliver better performance per watt of power consumed and better per-core performance, too.
 


Are Your App Workloads Running in Parallel?


To be effective at delivering performance-intensive applications, it pays to split up your workloads and run them simultaneously, a.k.a. in parallel. In the past, we didn’t really think about the resources required to run workloads, because many business computers were all-purpose machines. There was also a tendency to run loads serially to avoid bogging down due to heavy CPU utilization, heavy I/O and so on.

 

But computers have become much more capable of late. What were once thought of as “desktop” computers have approached the arena once occupied by minicomputers and mainframes. Like the larger systems, they serve multiple concurrent users and more demanding applications. As a result, we need to think more carefully about how their various components – processor, memory, storage and network connections – interact, and we need to find and eliminate the bottlenecks between these components to make them useful for higher-end workloads.
 

Straighten out Bottlenecks


One way to eliminate bottlenecks is to break your apps into smaller, more digestible pieces that can run concurrently. As new processors employ more cores and more sophisticated components, more of your code can be executed at once across the entire CPU package. This is the inherent nature of parallel processing, and it is why the world’s fastest supercomputers now routinely span thousands (and in some cases millions) of cores.
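As a simple illustration of the idea (a generic sketch, not tied to any particular application), the following Python snippet splits a computation into one chunk per core and runs the chunks simultaneously using the standard library’s multiprocessing pool:

from multiprocessing import Pool, cpu_count

def cpu_heavy_task(chunk):
    # Placeholder for real work; here we just sum the squares of a chunk.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(10_000_000))
    cores = cpu_count()
    # One chunk per core...
    chunks = [data[i::cores] for i in range(cores)]
    # ...processed simultaneously by a pool of worker processes.
    with Pool(processes=cores) as pool:
        partial_results = pool.map(cpu_heavy_task, chunks)
    print(f"{cores} cores, total = {sum(partial_results)}")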


A company called Weka has developed a file system designed to provide higher-speed data ingestion, making it more appropriate for machine learning and advanced mathematical modeling applications. Understanding the particular type of data storage you need – whether it is a parallel file system such as Weka’s, more scratch space for computations or better backups – can make a big difference in overall performance.


But it is also important to understand how your apps work across the network. Is there a lot of back-and-forth between clients and servers, such as sending a small chunk of data and then waiting for a reply? This introduces a lot of downtime for the app, and these “wait states” should be identified and potentially eliminated.
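To see what those wait states cost, compare issuing many small requests one at a time with issuing them concurrently. The sketch below simulates a 100-millisecond round trip with asyncio.sleep as a stand-in for a real client call; the function names and timings are illustrative only.

import asyncio
import time

async def fake_request(i):
    await asyncio.sleep(0.1)    # stand-in for a 100 ms network round trip
    return i

async def one_at_a_time(n):
    return [await fake_request(i) for i in range(n)]

async def all_at_once(n):
    return await asyncio.gather(*(fake_request(i) for i in range(n)))

for label, coro in (("serial", one_at_a_time(50)), ("concurrent", all_at_once(50))):
    start = time.perf_counter()
    asyncio.run(coro)
    print(f"{label:10s} {time.perf_counter() - start:.2f} s")   # roughly 5 s vs 0.1 s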
 

Offload Workloads


Does your application do a lot of calculation? As discussed in an earlier story appearing on Performance-Intensive Computing, complementary processors, such as co-processors and GPUs, can provide a big performance boost, so long as the main processor can move on to its next task, working in parallel, instead of waiting for data returned from the offloaded computation.
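The offload pattern looks something like the following sketch, in which a process pool stands in for a GPU or co-processor: submit the heavy kernel, keep doing other useful work, and block only when the result is actually needed. The function names here are placeholders, not any particular library’s API.

from concurrent.futures import ProcessPoolExecutor
import math

def heavy_kernel(n):
    # Stand-in for a computation you would offload to a GPU or co-processor.
    return sum(math.sqrt(i) for i in range(n))

def other_useful_work():
    return "kept busy while the kernel ran"

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        future = pool.submit(heavy_kernel, 20_000_000)   # hand the work off
        note = other_useful_work()                       # the main flow keeps going
        result = future.result()                         # block only when needed
    print(note, f"; offloaded result = {result:.3e}")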

 

Working in parallel can be a challenge when your apps frequently pause to wait for data from another process, or when they are highly monolithic and designed to run in a serial fashion. Such apps may be challenging to rewrite to take advantage of cloud-native or parallel operations. At some point, you are going to have to make that break and put in the programming effort to modernize your apps, but only you or your company can decide when it’s right to do that.

 

But if you can modify your workloads for this parallel structure and your hardware was designed to support it, you will see big benefits.


Perspective: Looking Back on the Rise of Supercomputing


We’ve come a long way in the development of high-performance computing. Back in 2004, I attended an event held in the gym at the University of San Francisco. The goal was to crowd-source computing power by connecting the PCs of volunteers who were participating in the first “Flash Mob Computing” cluster computing event. Several hundred PCs were networked together in the hope that they would create one of the largest supercomputers, albeit for a few hours.

 

I brought two laptops for the cause. The participation rules stated that the data on our hard drives would remain intact. Each computer would run a specially crafted boot CD that ran a benchmark called Linpack, a software library for performing numerical linear algebra on Linux. It was used to measure the collective computing power.

 

The event attracted people with water-cooled overclocked PCs, naked PCs (no cases, just the boards and other components) and custom-made rigs with fancy cases. After a few hours, we had roughly 650 PCs on the floor of the gym. Each PC was connected to a bunch of Foundry BigIron super-switches that were located around the room.

 

The 2004 experiment brought out several industry luminaries, such as Gordon Bell, who was the father of the Digital Equipment Corporation VAX minicomputer, and Jim Gray, who was one of the original designers behind the TPC benchmark while he was at Tandem. Both men at the time were Microsoft fellows. Bell was carrying his own laptop but had forgotten to bring his CD drive, so he couldn’t connect to the mob.

 

Network shortcomings

 

What was most interesting to me, and what gave rise to the mob’s eventual undoing, were the networking issues involved with assembling and running such a huge collection of gear. The mob used ordinary 100BaseT Ethernet, which was a double-edged sword. While easy to set up, it was difficult to debug when network problems arose. The Linpack benchmark requires all the component machines to be running concurrently during the test, and the organizers had trouble getting all 600-plus PCs to operate online flawlessly. The best benchmark accomplished was a peak rate of 180 gigaflops using 256 computers, but that wasn’t an official score as one node failed during the test.

 

To give you an idea of where this stood in terms of overall supercomputing prowess, it was better than the Cray supercomputers of the early 1990s, which delivered around 16 gigaflops.

 

At the website top500.org (which tracks the fastest supercomputers around the globe), you can see that all the current top 500 machines are measured in petaflops (1 million gigaflops). The Oak Ridge National Laboratory’s Frontier machine, which has occupied the number one spot this year, weighs in at more than 1,000 petaflops and uses 8 million cores. To make the fastest-500 list back in 2004, the mob would have had to achieve a benchmark of over 600 gigaflops. Because of the networking problems, we’ll never know for sure. Still, it was an impressive achievement, given the motley mix of machines. All of the world’s top 500 supercomputers are custom built and carefully curated and assembled to attain that level of computing performance.

 

Another historical note: back in 2004, one of the more interesting entries came in third on the top500.org list: a collection of several thousand Apple Macintoshes running at Virginia Tech (Virginia Polytechnic Institute and State University). Back in the present, as you might imagine, almost all the fastest 500 supercomputers are based on a combination of CPU and GPU chip architectures.

 

Today, you can buy your own supercomputer on the retail market, such as the Supermicro SuperBlade® models. And of course, you can routinely run much faster networking protocols than 100-megabit Ethernet.
