Performance Intensive Computing


Perspective: Don’t Back into Performance-Intensive Computing


To compete in the marketplace, enterprises are increasingly employing performance-intensive tools and applications like machine learning, artificial intelligence, data-driven insights and decision-support analytics, technical computing, big data, modeling and simulation, cryptocurrency and other blockchain applications, automation and high-performance computing to differentiate their products and services.


In doing so, they may be unintentionally backing into performance-intensive computing because these technologies are computationally and/or data intensive. Without thinking through the compute performance you need as measured against your most demanding workloads – now and at least two years from now – you’re setting yourself up for failure or unnecessary expense. When it comes to performance-intensive computing: plan, don’t dabble.


There are questions you should ask before jumping in, too. In the cloud or on-premises? There are pluses and minuses to each. Is your data highly distributed? If so, you’ll need network services that won’t become a bottleneck. A long list of environmental and technology needs must be met to make performance-intensive computing pay off; among them is the ability to scale. And, of course, planning and building out your environment in advance of your need is vastly preferable to stumbling into it.

The requirement that sometimes gets short shrift is organizational. Ultimately, this is about revealing data with which your company can make strategic decisions. There’s no longer anything mundane about enterprise technology, and especially the data it manages. It has become so important that virtually every department in your company affects and is affected by it. If you double down on computational performance, the C-suite needs to be fully represented in how you use that power, not just in the approval process. Leaving top leadership, marketing, finance, tax, design, manufacturing, HR or IT out of the picture would be a mistake. And those are just sample company building blocks. You also need measurable, meaningful metrics that will help your people determine the ROI of your efforts. Even so, it’s people who make the leap of faith that turns data into ideas.

Finally, if you don’t already have the expertise on staff to learn the ins and outs of this endeavor, hire, contract or enter into a consulting arrangement with smart people who clearly have the chops to do this right. You don’t want to be the company with a rocket ship that no one can fly.

So, don’t back into performance-intensive computing. But don’t back out of it either. Being able to take full advantage of your data at scale can play an important role in ensuring the viability of your company going forward.



Some Key Drivers behind AMD’s Plans for Future EPYC™ CPUs


A video discussion between Charles Liang, Supermicro CEO, and Dr. Lisa Su, AMD CEO.


Higher clock rates, more cores and larger onboard memory caches are some of the traditional areas of improvement for generational CPU upgrades. Performance improvements are almost a given with a new-generation CPU. Increasingly, however, the more difficult challenges for data centers and performance-intensive computing are energy efficiency and managing heat. Energy costs have spiked in many parts of the world, and “performance per watt” is what many companies are looking for. AMD’s 4th-gen EPYC™ CPU runs a little hotter than its predecessor, but its performance gains far outpace the thermal rise, making for much greater performance per watt. It’s a trade-off that makes sense, especially for performance-intensive computing such as HPC and technical computing applications.
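
Since performance per watt is the metric to watch, a toy comparison makes the trade-off concrete. The sketch below uses made-up placeholder numbers, not AMD measurements:

```python
# Toy performance-per-watt comparison. All numbers are illustrative
# placeholders, not measured results for any real CPU.
parts = {
    "previous gen": {"score": 100.0, "watts": 280.0},
    "newer gen":    {"score": 180.0, "watts": 360.0},
}

for name, p in parts.items():
    print(f"{name}: {p['score'] / p['watts']:.3f} score per watt")

# The newer part draws more power (360 W vs. 280 W) yet delivers
# roughly 40% better performance per watt (0.500 vs. 0.357).
```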

In addition to the energy efficiency and heat dissipation concerns, Dr. Su and Mr. Liang discuss the importance of the AMD EPYC™ roadmap. You’ll learn one or two nuances about AMD’s plans. Supermicro is ready with 15 products that leverage Genoa, AMD’s fourth-generation EPYC™ CPU. This video, which runs under 15 minutes and was recorded on November 15, 2022, will bring you up to date on all things AMD EPYC™. Click the link to see the video:

Supermicro & AMD CEOs Video – The Future of Data Center Computing



Match CPU Options to Your Apps and Workloads to Maximize Efficiency


In a previous post, Performance-Intensive Computing explored the benefits of making your applications and workloads more parallel. Chief among the paybacks is the ability to take advantage of the latest innovations in performance-intensive computing.

Although it isn’t strictly a parallel approach, the CPU package is configurable at the time of purchase with various options that you can match to the specific characteristics of your workloads. The goal of this story is to outline how to match those features to your particular application mix so you purchase the best processors for the job. For starters, ask yourself these three questions:

Question 1. Does your application require a great deal of memory and storage? Memory-bound apps are typically found where an application has to manipulate a large amount of data. To alleviate potential bottlenecks, purchase a CPU with the largest possible onboard caches, minimizing how often data must be swapped in from memory and storage. Apps such as Reveal and others used in the oil and gas industry will typically require large onboard CPU caches to help prevent memory bottlenecks as data moves in and out of the processor.
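
If you’re not sure whether a workload is memory-bound, a quick experiment can offer a hint. This minimal sketch (Python, assuming NumPy is installed; the sizes are illustrative) times the same streaming operation over progressively larger arrays; throughput typically drops once the working set outgrows the CPU’s caches and spills into main memory:

```python
# Time a streaming sum over growing arrays to spot where the working
# set stops fitting in cache and effective bandwidth falls off.
import time
import numpy as np

for megabytes in (1, 4, 16, 64, 256, 1024):
    data = np.ones(megabytes * 1024 * 1024 // 8)   # 8-byte float64 elements
    start = time.perf_counter()
    for _ in range(5):
        data.sum()                                 # streaming read
    elapsed = time.perf_counter() - start
    gb_per_s = 5 * data.nbytes / elapsed / 1e9
    print(f"{megabytes:5d} MB working set: {gb_per_s:6.1f} GB/s")
```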


Question 2. Do you have the right amount and type of storage for your data requirements? Storage has a lot of different parameters and how it interacts with the processor and your application isn’t one-size-fits-all. Performance-Intensive Computing has previously written about specialized file systems such as the one developed and sold by WekaIO that can aid in onboarding and manipulating large data collections.


Question 3. Does your application spend a lot of time communicating across networks, or is it bound by the limits of your processor? Either situation might mean you need CPUs with more cores and/or higher clock speeds. This is the case, for example, with molecular dynamics apps such as GROMACS and LAMMPS. These situations might call for parts such as AMD’s Threadripper.
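
A rough way to tell which side dominates is to measure where the wall-clock time actually goes. The sketch below (plain Python; the compute loop and URL are stand-ins, not taken from any particular app) compares a compute phase against a network round-trip phase:

```python
# Attribute wall-clock time to compute vs. network waiting as a first
# hint of whether faster cores or a faster network would help more.
import time
import urllib.request

def timed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

compute_s = timed(lambda: sum(i * i for i in range(5_000_000)))
network_s = timed(lambda: urllib.request.urlopen("https://example.com").read())

total = compute_s + network_s
print(f"compute: {compute_s / total:.0%}, network wait: {network_s / total:.0%}")
```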


As you can see, figuring out the right kind of CPU – and its supporting chipsets – is a lot more involved than just purchasing the highest clock speed and largest number of cores. Knowing your data and applications will guide you to buying CPU hardware that makes your business more efficient.


Choosing the Right AI Infrastructure for Your Needs


AI architecture must scale effectively without sacrificing cost efficiency. One size does not fit all.


Building an agile, cost-effective environment that delivers on a company’s present and long-term AI strategies can be a challenge, and the impact of decisions made around that architecture will have an outsized effect on performance.


“AI capabilities are probably going to be 10%-15% of the entire infrastructure,” says Ashish Nadkarni, IDC group vice president and general manager, infrastructure systems, platforms and technologies. “But the amount the business relies on that infrastructure, the dependence on it, will be much higher. If that 15% doesn’t behave in the way that is expected, the business will suffer.”


Experts like Nadkarni note that companies can, and should, avail themselves of cloud-based options to test and ramp up AI capabilities. But the costs associated with cloud computing can rise significantly as workloads grow or the enterprise expands its usage, making on-premises architecture a valid alternative worth considering.


No matter the industry, to build a robust and effective AI infrastructure, companies must first accurately diagnose their AI needs. What business challenges are they trying to solve? What forms of high-performance computing power can deliver solutions? What type of training is required to deliver the right insights from data? And what’s the most cost-effective way for a company to support AI workloads at scale and over time? Cloud may be the answer to get started, but for many companies on-prem solutions are viable alternatives.


“It’s a matter of finding the right configuration that delivers optimal performance for [your] workloads,” says Michael McNerney, vice president of marketing and network security at Supermicro, a leading provider of AI-capable, high-performance servers, management software and storage systems. “How big is your natural language processing or computer vision model, for example? Do you need a massive cluster for AI training? How critical is it to have the lowest latency possible for your AI inferencing? If the enterprise does not have massive models, does it move down the stack into smaller models to optimize infrastructure and cost on the AI side as well as in compute, storage and networking?”


Get perspective on these and other questions about selecting the right AI infrastructure for your business in the Nov. 20, 2022, Wall Street Journal paid program article:


Investing in Infrastructure



Supermicro H13 Servers Maximize Your High-Performance Data Center


The modern data center must be both highly performant and energy efficient. Massive amounts of data are generated at the edge and then analyzed in the data center. New CPU technologies are constantly being developed that can analyze data, determine the best course of action, and speed up the time to understand the world around us and make better decisions.

With digital transformation continuing, a wide range of data acquisition, storage and computing systems continues to evolve with each CPU generation. The latest CPU generations continue to innovate within their core computational units and in the technology used to communicate with memory, storage devices, networking and accelerators.

Servers – and, by extension, the CPUs within those servers – form a continuum of computing and I/O power. The combination of cores, clock rates, memory access, path width and performance determines how well a specific server suits a given workload. In addition, the server that houses the CPUs may take different form factors, suited to environments with airflow or power restrictions. The key for a server manufacturer that wants to address a wide range of applications is a building-block approach to designing new systems. In this way, a range of systems can be released simultaneously in many form factors, each tailored to its operating environment.

The new H13 Supermicro product line, based on 4th Generation AMD EPYC™ CPUs, supports a broad spectrum of workloads and excels at helping a business achieve its goals.

Get speeds, feeds and other specs on Supermicro’s latest line-up of servers


Manage Your HPC Resources with Supermicro's SuperCloud Composer


Today’s data center has numerous challenges: provisioning hardware and cloud workloads, balancing the needs of performance-intensive applications across compute, storage and network resources, and having a consistent monitoring and analytics framework to feed intelligent systems management. Plus, you may have the need to deploy or re-deploy all these resources as needs shift, moment to moment.

Supermicro has created its own tool to monitor and manage this broad IT portfolio, called SuperCloud Composer (SCC). It combines a standardized web-based interface built on an Open Distributed Infrastructure Management interface with a unified dashboard based on the Redfish message bus and service agents.

SCC can track the various resources and assign them to different pools with its own predictive analytics and telemetry. It delivers a single intelligent management solution that covers both existing on-premises IT equipment as well as a more software-defined cloud collection. Additional details can be found in this SuperCloud Composer white paper.

SuperCloud Composer makes use of a cluster-level PCIe network built on FabreX software from GigaIO Networks. It can flexibly scale storage systems up and out while using the lowest-latency paths available.

It also supports Weka.IO cluster members, which can be deployed across multiple systems simultaneously. See our story The Perfect Combination: The Weka Next-Gen File System, Supermicro A+ Servers and AMD EPYC™ CPUs.

SCC can create automated installation playbooks in Ansible, including a software boot image repository that can quickly deploy new images across the server infrastructure. It has a fast-deploy feature that allows a new image to be deployed within seconds.
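
As a purely hypothetical illustration of playbook-driven deployment (the inventory and playbook file names below are invented for this sketch, not actual SCC artifacts), a deployment step can be scripted like this:

```python
# Hedged sketch: invoking an Ansible playbook from Python, in the spirit
# of SCC's playbook-based image deployment. File names are hypothetical.
import subprocess

result = subprocess.run(
    ["ansible-playbook", "-i", "inventory.ini", "deploy_boot_image.yml"],
    capture_output=True,
    text=True,
)
print(result.stdout if result.returncode == 0 else result.stderr)
```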

SuperCloud Composer offers a robust analytics engine that collects historical and up-to-date analytics in an indexed database within its framework. This data can produce a variety of charts, graphs and tables so that users can better visualize what is happening with their server resources. Each end user gets charting of IOPS, network, telemetry, thermal, power, composed-node status, storage allocation and system status.

Last but not least, SCC also has both network provisioning and storage-fabric provisioning features, in which build plans are pushed to data or fabric switches as either single-threaded or multithreaded operations, so multiple switches can be updated simultaneously from shared or unique build-plan templates.

For more information, watch this short SCC explainer video. Or schedule an online demo of SCC and request a free 90-day trial of the software.


Supermicro Debuts New H13 Server Solutions Using AMD’s 4th-Gen EPYC™ CPUs


Last week, Supermicro announced its new H13 A+ server solutions, featuring the latest fourth-generation AMD EPYC™ processors. The new AMD “Genoa”-class Supermicro A+ configurations will be able to handle up to 96 Zen 4 CPU cores running up to 6TB of 12-channel DDR5 memory, using a separate channel for each stick of memory.
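
As a back-of-the-envelope check on that 6TB figure (the DIMM size and socket count below are assumptions for illustration, not specifications from the announcement), one DIMM per channel works out as follows:

```python
# Assumed configuration: two sockets, 12 DDR5 channels per socket,
# one 256 GB DIMM per channel. All values are illustrative only.
sockets = 2
channels_per_socket = 12
dimm_gb = 256

total_gb = sockets * channels_per_socket * dimm_gb
print(f"{total_gb} GB = {total_gb / 1024:.0f} TB")   # 6144 GB = 6 TB
```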

The various systems are designed to support the highest performance-intensive computing workloads over a wide range of storage, networking and I/O configuration options. They also feature tool-less chassis and hot-swappable modules for easier access to internal parts, as well as I/O drive trays on both front and rear panels. All the new equipment can handle a range of power conditions, including 120 to 480 V AC operation and 48 V DC power attachments.

The new H13 systems have been optimized for AI, machine learning and complex calculation tasks for data analytics and other kinds of HPC applications. Supermicro’s 4th-Gen AMD EPYC™ systems employ the latest PCIe 5.0 connectivity throughout their layouts to speed data flows and provide high network and cluster internetworking performance. At the heart of these systems are the AMD EPYC™ 9004 series CPUs, which were also announced last week.

The Supermicro H13 GrandTwin® systems can handle up to six SATA3 or NVMe drive bays, which are hot-pluggable. The H13 CloudDC systems come in 1U and 2U chassis designed for cloud-based workloads and data centers; they can handle up to 12 hot-swappable drive bays and support Open Compute Project (OCP) I/O modules. Supermicro has also announced its H13 Hyper configuration for dual-socket systems. All of the twin-socket server configurations support 160 PCIe 5.0 data lanes.

There are several GPU-intensive configurations in another series of both 4U- and 8U-sized servers that can support up to 10 PCIe GPU accelerator cards, including the latest graphics processors from AMD and Nvidia. The 4U family of servers supports both AMD Infinity Fabric Link and NVIDIA NVLink Bridge technologies, so users can choose the right balance of computation, acceleration, I/O and local storage specifications.

To get a deep dive on H13 products, including speeds, feeds and specs, download this whitepaper from the Supermicro site: Supermicro H13 Servers Enable High-Performance Data Centers.


AMD’s Infinity Guard Selected by Google Cloud for Confidential Computing


Google Cloud has been working with AMD over the past several years on developing new on-chip security protocols. More on the release of the AMD EPYC™ 9004 series processors in this part three of a four-part series.

Google Cloud has been working over the past several years with AMD on developing new on-chip security protocols, which have seen further innovation with the release of the AMD EPYC™ 9004 series processors. These have a direct benefit for performance-intensive computing applications: supporting higher-density virtual machines (VMs), protecting data flows from leaving the confines of what Google calls Confidential VMs, and further isolating VMs from their hypervisors. Google Cloud offers a collection of N2D and C2D instances that support these Confidential VMs.
“Product security is always our top focus,” said AMD CTO Mark Papermaster. “We are continuously investing and collaborating in the security of these technologies.” 
Royal Hansen, VP of engineering for Google Cloud, said: “Our customers expect the most trustworthy computing experience on the planet. Google and AMD have a long history and a variety of relationships with the deepest experts on security and chip development. This was at the core of our going to market with AMD’s security solutions for data centers.”
The two companies also worked together on this security analysis.
Collectively called Infinity Guard, the security technologies they’ve been working on involve four initiatives:

1. Secure encrypted virtualization provides each VM with its own unique encryption key known only to the processor.

2. Secure nested paging complements this virtualization to protect each VM from malicious hypervisor attacks and provide an isolated, trusted environment.

3. AMD’s secure boot, along with Trusted Platform Module attestation of the confidential VMs, happens every time a VM boots, ensuring the VM’s integrity and mitigating persistent threats.

4. AMD’s secure memory encryption, integrated into the memory channels, speeds performance.

These technologies are combined and communicate using the AMD Infinity Fabric pathways to deliver breakthrough performance along with more secure communications.
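
For readers curious whether a given Linux host exposes these capabilities, here is a quick, unofficial check (a sketch, not an AMD or Google tool; flag availability varies by kernel version) for the SEV family of CPU feature flags:

```python
# Scan /proc/cpuinfo (Linux) for the CPU feature flags underpinning
# AMD's encrypted-virtualization stack. Flags vary by kernel version.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break

for feature in ("sme", "sev", "sev_es", "sev_snp"):
    print(f"{feature:8s}", "present" if feature in flags else "absent")
```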


Are Your App Workloads Running in Parallel?


To be effective at delivering performance-intensive applications, it pays to split up your workloads and run them simultaneously, a.k.a. in parallel. In the past, we didn’t really think about the resources required to run workloads, because many business computers were all-purpose machines. There was also a tendency to run loads serially to avoid bogging down due to heavy CPU utilization, heavy I/O and so on.


But computers have become much more capable of late. What were once thought of as “desktop” computers have approached the arena once occupied by minicomputers and mainframes. Like the larger systems, they serve multiple concurrent users and more demanding applications. As a result, we need to think more carefully about how their various components – processor, memory, storage and network connections – interact, and we need to find and eliminate the bottlenecks between these components to make them useful for higher-end workloads.

Straighten out Bottlenecks


One way to eliminate bottlenecks is to break your apps into smaller, more digestible pieces that can run concurrently. As the new processors employ more cores and more sophisticated components, more of your code can execute in parallel across the entire CPU package. This is the inherent nature of parallel processing, and it is why the world’s fastest supercomputers now routinely span thousands (and in some cases millions) of cores.
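
Here is a minimal sketch of that idea (plain Python standard library; the prime-counting job is just a stand-in workload): one large job is split into independent chunks that run concurrently across all available cores.

```python
# Split one big job into independent chunks and fan them out across
# CPU cores with a process pool.
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    lo, hi = bounds
    return sum(n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))
               for n in range(lo, hi))

if __name__ == "__main__":
    chunks = [(i, i + 250_000) for i in range(2, 1_000_002, 250_000)]
    with ProcessPoolExecutor() as pool:        # defaults to one worker per core
        total = sum(pool.map(count_primes, chunks))
    print(f"primes below 1,000,002: {total}")
```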


A company called Weka has developed a file system designed to provide higher-speed data ingestion, making it more appropriate for machine learning and advanced mathematical modeling applications. Understanding the particular type of data storage you need – whether it is a parallel file system such as Weka’s, more scratch space for computations or better backups – can make a big difference in overall performance.


But how your apps work across the network is also important. Is there a lot of back-and-forth between clients and servers, such as sending a small chunk of data and then waiting for a reply? That pattern introduces a lot of idle time for the app, and these “wait states” should be identified and, where possible, eliminated.
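
One common remedy is to overlap those waits rather than incur them one at a time. Here is a minimal sketch (plain Python standard library; the URL is a placeholder) that issues several requests concurrently so their wait states overlap:

```python
# Overlap network wait states by issuing requests concurrently
# instead of serially. The URL is a placeholder.
from concurrent.futures import ThreadPoolExecutor
import urllib.request

urls = ["https://example.com"] * 8

def fetch(url):
    return len(urllib.request.urlopen(url).read())

with ThreadPoolExecutor(max_workers=8) as pool:   # waits now overlap
    sizes = list(pool.map(fetch, urls))
print(sizes)
```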

Offload Workloads


Does your application do a lot of calculation? As discussed in an earlier story appearing on Performance-Intensive Computing, complementary processors, such as co-processors and GPUs, can deliver a big performance boost, so long as the main processor can move on to its next task – working in parallel – instead of waiting for data returned from the offloaded computation.
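
The sketch below shows that offload pattern in miniature (plain Python; a worker process stands in for a GPU or co-processor): the offloaded call returns a handle immediately, the main thread keeps working, and the result is collected only when it is actually needed.

```python
# Offload a heavy computation, keep working, and block only at the
# point where the result is required.
from concurrent.futures import ProcessPoolExecutor

def heavy_kernel(n):
    return sum(i * i for i in range(n))            # stand-in for offloaded work

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        future = pool.submit(heavy_kernel, 10_000_000)   # returns immediately
        staged = [x ** 0.5 for x in range(1_000)]        # CPU keeps working
        print(len(staged), future.result())              # block only here
```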


Working in parallel can be a challenge when your apps frequently pause to wait for data from another process, or are highly monolithic and designed to run in a serial fashion. Such apps may be challenging to rewrite to take advantage of cloud-native or parallel operations. At some point, you are going to have to make that break and put in the programming effort to modernize your apps, but only you or your company can decide when it’s right to do that.


But if you can modify your workloads for this parallel structure and your hardware was designed to support it, you will see big benefits.


Unlocking the Value of the Cloud for Mid-size Enterprises


Organizations around the world are requiring new options for their next-generation computing environments. Mid-size organizations, in particular, are facing increasing pressure to deliver cost-effective, high-performance solutions within their hyperconverged infrastructures (HCI). Recent collaboration between Supermicro, Microsoft Azure and AMD, leveraging their collective technologies, has created a fresh approach that lets enterprises maintain performance at a lower operational cost while helping to reduce the organization’s carbon footprint in support of sustainability initiatives. This cost-effective 1U system (a 2U version is available) offers power, flexibility and modularity in large-scale GPU deployments.

The results of the collaboration combine the latest technologies, supporting multiple CPU, GPU, storage and networking options optimized to deliver uniquely configured and highly scalable systems. The product can be optimized for SQL and Oracle databases, VDI, productivity applications and database analytics. This white paper explores why this universal GPU architecture is an intriguing and cost-effective option for CTOs and IT administrators who are planning to rapidly implement hybrid cloud, data center modernization, branch office/edge networking or Kubernetes deployments at scale.

Get the 7-page white paper that provides the detail to assess the solution for yourself, including the new Azure Stack HCI certified system, specifications, cost justification and more.

