What’s inside Supermicro’s new Petascale storage servers?


Supermicro has introduced a new class of storage servers that support E3.S Gen 5 NVMe drives. These storage servers offer up to 256TB of high-throughput, low-latency storage in a 1U enclosure, and up to half a petabyte in a 2U.

Supermicro has designed these storage servers to be used with large AI training and HPC clusters. Those workloads require that unstructured data, often in extremely large quantities, be delivered quickly to the system’s CPUs and GPUs.

To do this, Supermicro has developed a symmetrical architecture that reduces latency in two ways: first, by ensuring that data travels the shortest possible signal path; and second, by providing maximum airflow over critical components, allowing them to run as fast and cool as possible.

1U and 2U for you 

Supermicro’s new lineup of optimized storage systems includes 1U servers that support up to 16 hot-swap E3.S drives. An alternate configuration could be up to eight E3.S drives, plus four E3.S 2T 16.8mm bays for CMM and other emerging modular devices.

(CMM is short for Chassis Management Module. These devices provide management and control of the chassis, including basic system health, inventory information and basic recovery operations.)

The E3.S form factor specifies a short, thin NVMe SSD measuring 76mm high, 112.75mm long and 7.5mm thick.

In the 2U configuration, Supermicro’s servers support up to 32 hot-swap E3.S drives. A single-processor system, it supports the latest 4th Gen AMD EPYC processors.

Put it all together, and you can have a standard rack that stores up to an impressive 20 petabytes of data for high-throughput NVMe over fabrics (NVMe-oF) configurations.

30TB drives coming

When new 30TB drives become available—a move expected later this year—the new Supermicro storage servers will be able to handle them. Those drives will bring the storage total to 1 petabyte in a compact 2U server.
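
To see how those numbers add up, here’s a quick back-of-envelope sketch in Python. The per-drive capacities and the 20-servers-per-rack layout are assumptions for illustration, not Supermicro specifications:

```python
# Back-of-envelope check on the capacity claims. Drive sizes and rack
# layout here are assumptions, not published Supermicro specs.
DRIVES_1U = 16   # E3.S bays in the 1U chassis
DRIVES_2U = 32   # E3.S bays in the 2U chassis

def capacity_tb(bays: int, drive_tb: float) -> float:
    """Raw capacity of one server, in terabytes."""
    return bays * drive_tb

print(capacity_tb(DRIVES_1U, 16.0))    # 256 TB  -> the 1U claim
print(capacity_tb(DRIVES_2U, 16.0))    # 512 TB  -> roughly half a petabyte in 2U
print(capacity_tb(DRIVES_2U, 30.72))   # ~983 TB -> ~1 PB with 30TB-class drives

# A rack holding ~20 of the 2U systems (leaving room for switches):
print(20 * capacity_tb(DRIVES_2U, 30.72) / 1000)   # ~19.7 PB -> the ~20 PB rack figure
```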

Two storage-drive vendors working closely with Supermicro are Kioxia America and Solidigm, both of which make E3.S solid-state drives (SSDs). Kioxia has announced a 30.72TB SSD called the Kioxia CD8P Series. And Solidigm says its D5-P5336 SSD will ship in an E3.S form factor with up to 30.72TB in the first half of 2024.

The new Supermicro Petascale storage servers are shipping now in volume worldwide.

Learn more about the Supermicro E3.S Petascale All-Flash NVMe Storage Systems.

 


Can liquid-cooled servers help your customers?


The previous thinking was that liquid cooling was only for supercomputers and high-end gaming PCs. No more.

Today, many large-scale cloud, HPC, analytics and AI servers combine CPUs and GPUs in a single enclosure, generating a lot of heat. Liquid cooling can carry that heat away, often at lower cost and with greater efficiency than air.

According to a new Supermicro solution guide, liquid’s advantages over air cooling include:

  • Up to 92% lower electricity costs for a server’s cooling infrastructure
  • Up to 51% lower electricity costs for the entire data center
  • Up to 55% less data center server noise

What’s more, the latest liquid cooling systems are turnkey solutions that support the highest GPU and CPU densities. They’re also fully validated and tested by Supermicro under demanding workloads that stress the server. And unlike some other components, they’re ready to ship to you and your customers quickly, often in mere weeks.

What are the liquid-cooling components?

Liquid cooling starts with a cooling distribution unit (CDU). It incorporates two modules: a pump that circulates the liquid coolant, and a power supply.

Liquid coolant travels from the CDU through flexible hoses to the cooling system’s next major component, the coolant distribution manifold (CDM). It’s a unit with distribution hoses to each of the servers.

There are two types of CDMs. A vertical manifold is placed on the rear of the rack, is directly connected via hoses to the CDU, and delivers coolant to another important component, the cold plates. The second type, a horizontal manifold, is placed on the front of the rack, between two servers; it’s used with systems that have inlet hoses on the front.

The cold plates, mentioned above, are placed on top of the CPUs and GPUs in place of their typical heat sinks. With coolant flowing through their channels, they keep these components cool.

Supermicro’s CDU offers two valuable features. First, it has a cooling capacity of 100kW, which enables very high rack compute densities. Second, it features a touchscreen for monitoring and controlling rack operations via a web interface, and it’s integrated with the company’s SuperCloud Composer data-center management software.
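
For a rough sense of what a 100kW cooling capacity implies, the standard heat-transfer relation Q = ṁ·c·ΔT gives the coolant flow required. A minimal sketch, assuming water coolant and a 10 C temperature rise across the loop (both assumptions):

```python
# Rough sizing sketch: coolant flow needed to carry away a given heat load,
# using Q = m_dot * c_p * dT. The 10 K temperature rise is an assumption.
CP_WATER = 4186   # specific heat of water, J/(kg*K)

def flow_rate_kg_s(heat_w: float, delta_t_k: float) -> float:
    """Mass flow of water (kg/s) needed to absorb heat_w watts at delta_t_k rise."""
    return heat_w / (CP_WATER * delta_t_k)

flow = flow_rate_kg_s(100_000, 10)   # 100 kW CDU capacity, 10 K rise
print(f"{flow:.1f} kg/s, about {flow * 3.6:.1f} cubic meters per hour")  # ~2.4 kg/s
```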

What does it work on?

Supermicro offers several liquid-cooling configurations to support different numbers of servers in different size racks.

Among the Supermicro servers available for liquid cooling are the company’s GPU systems, which can combine up to eight Nvidia GPUs with AMD EPYC 9004 series CPUs. Direct-to-chip (D2C) cold plates are mounted on each processor, and their coolant lines are routed through the manifolds to the CDU.

D2C cooling is also a feature of the Supermicro SuperBlade. This system supports up to 20 blade servers, which can be powered by the latest AMD EPYC CPUs in an 8U chassis. In addition, the Supermicro Liquid Cooling solution is ideal for high-end AI servers such as the company’s 8-GPU 8125GS-TNHR.

To manage it all, Supermicro also offers its SuperCloud Composer’s Liquid Cooling Consult Module (LCCM). This tool collects information on the physical assets and sensor data from the CDU, including pressure, humidity, and pump and valve status.

This data is presented in real time, enabling users to monitor the operating efficiency of their liquid-cooled racks. Users can also employ SuperCloud Composer to set up alerts, manage firmware updates, and more.
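
What programmatic monitoring of this kind could look like is sketched below. The endpoint, JSON field names and thresholds are hypothetical placeholders, not the actual SuperCloud Composer API:

```python
# Hypothetical sketch of polling CDU telemetry and raising simple alerts.
# The URL, JSON fields and thresholds are illustrative only.
import time
import requests

CDU_URL = "https://composer.example.local/api/cdu/1/sensors"  # hypothetical
LIMITS = {"pressure_bar": (1.0, 3.5), "coolant_temp_c": (15.0, 50.0)}

def check_once() -> None:
    data = requests.get(CDU_URL, timeout=5).json()
    for field, (low, high) in LIMITS.items():
        value = data.get(field)
        if value is not None and not (low <= value <= high):
            print(f"ALERT: {field}={value} outside [{low}, {high}]")
    if data.get("pump_status") != "ok":
        print("ALERT: pump status is", data.get("pump_status"))

while True:
    check_once()
    time.sleep(30)   # poll every 30 seconds
```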


Tech Explainer: Green Computing, Part 3 – Why you should reduce, reuse & recycle


To help your customers meet their environmental, social and governance (ESG) goals, it pays to focus on the 3 Rs of green computing—reduce, reuse and recycle.

Sure, pursuing these goals can require some additional R&D and reorganization. But tech titans such as AMD and Supermicro are helping.

AMD, Supermicro and their vast supply chains are working to create a new virtuous circle. More efficient tech is being created using recycled materials, reused where possible, and then once again turned into recycled material.

For you and your customers, the path to green computing can lead to better corporate citizenship as well as higher efficiencies and lower costs.

Green server design

New disaggregated server technology is now available from manufacturers like Supermicro. This tech makes it possible for organizations of every size to increase their energy efficiency, better utilize data-center space, and reduce capital expenditures.

Supermicro’s SuperBlade, BigTwin and EDSFF SuperStorage are exemplars of disaggregated server design. The SuperBlade multi-node server, for instance, can house up to 20 server blades and 40 CPUs. And it’s available in 4U, 6U and 8U rack enclosures.

These designs allow for larger, more efficient shared fans and power supplies. And along with the chassis itself, many elements can remain in service long past the lifespans of the silicon components they support. In some cases, an updated server blade can be used in an existing chassis.

Remote reprogramming

Innovative technologies like adaptive computing enable organizations to adopt a holistic approach to green computing at the core, the edge and in end-user devices.

For instance, AMD’s adaptive computing initiative offers the ability to optimize hardware based on applications. Then your customers can get continuous updates after production deployment, adapting to new requirements without needing new hardware.

The key to adaptive computing is the Field Programmable Gate Array (FPGA). It’s essentially a blank canvas of hardware, capable of being configured into a multitude of different functions. Even after an FPGA has been deployed, engineers can remotely access the component to reprogram various hardware elements.

The FPGA reprogramming process can be as simple as applying security patches and bug fixes—or as complex as a wholesale change in core functionality. Either way, the green computing bona fides of adaptive computing are the same.
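
For a flavor of what a remote update flow involves, here’s an illustrative sketch: verify a bitstream’s integrity, then hand it to a board-management endpoint. The endpoint and field names are hypothetical, and real FPGA flows use vendor toolchains rather than this API:

```python
# Illustrative-only sketch of a remote FPGA update: check the bitstream's
# hash, then POST it to a (hypothetical) management endpoint.
import hashlib
import pathlib
import requests

BMC_URL = "https://bmc.example.local/api/fpga/bitstream"   # hypothetical

def sha256_of(path: str) -> str:
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def push_bitstream(path: str, expected_sha256: str) -> None:
    digest = sha256_of(path)
    if digest != expected_sha256:
        raise ValueError(f"bitstream corrupted: {digest} != {expected_sha256}")
    with open(path, "rb") as f:
        resp = requests.post(BMC_URL, files={"bitstream": f}, timeout=60)
    resp.raise_for_status()
    print("reprogram request accepted:", resp.status_code)
```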

What’s more, adaptive tech like FPGAs significantly reduces e-waste. This helps to lower an organization’s overall carbon footprint by obviating the manufacturing and transportation necessary to replace hardware already deployed.

Adaptive computing also enables organizations to increase energy efficiency. Deploying cutting-edge tech like the AMD Instinct MI250X Accelerator to complete AI training or inferencing can significantly reduce the overall electricity needed to complete a task.

Radical recycling

Even in organizations with the best green computing initiatives, elements of the hardware infrastructure will eventually be ready for retirement. When the time comes, these organizations have yet another opportunity to go green—by properly recycling.

Some servers can be repurposed for other, less-demanding tasks, extending their lifespan. For example, a system that no longer delivers the FP64 performance required for its original HPC applications could be repurposed to host a database or email application.

Quite a lot of today’s computer hardware can be recycled. This includes glass from monitors; plastic and aluminum from cases; copper in power supplies; precious metals used in circuitry; even the cardboard, wood and other materials used in packaging.

If that seems like too much work, there are now third-party organizations that will oversee your customers’ recycling efforts for a fee. Later, if all goes according to plan, these recycled materials will find their way back into the manufacturing supply chain.

Tech suppliers are working to make recycling even easier. For example, AMD is one of the many tech leaders whose commitment to environmental sustainability extends across its entire value chain. For AMD, that includes using environmentally preferable packing materials, such as recycled materials and non-toxic dyes.

Are you 3R?

Your customers understand that establishing and adhering to ESG goals is more than just a good idea. In fact, it’s vital to the survival of humanity.

Efforts like those of AMD and Supermicro are helping to establish a green computing revolution—and not a moment too soon.

In other words, pursuing green computing’s 3 Rs will be well worth the effort.


Meet Supermicro’s Petascale Storage, a compact rackmount system powered by the latest AMD EPYC processors


Your customers can now implement Supermicro Petascale Storage, an all-Flash NVMe storage system powered by the latest 4th gen AMD EPYC 9004 series processors.

The Supermicro system has been specifically designed for AI, HPC, private and hybrid cloud, in-memory computing and software-defined storage.

Now Supermicro is offering the first of these systems. It's the Supermicro H13 Petascale Storage System. This compact 1U rackmount system is powered by an AMD EPYC 97X4 processor (formerly codenamed Bergamo) with up to 128 cores.

For organizations with data-storage requirements approaching petascale capacity, the Supermicro system was designed with a new chassis and motherboard that support a single AMD EPYC processor, 24 DIMM slots for up to 6TB of main memory, and 16 hot-swap E3.S slots. E3.S is part of the Enterprise and Datacenter Standard Form Factor (EDSFF) E3 family of SSD form factors designed for specific use cases. E3.S media is short and thin: 7.5mm thick, drawing up to 25W, with a PCIe 5.0 interface.

The Supermicro Petascale Storage system can deliver more than 200 GB/sec. bandwidth and over 25 million input-output operations per second (IOPS) from a half-petabyte of storage.
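
Those headline numbers are easy to sanity-check. Assuming all 16 drives share the load evenly (an assumption, not a published spec), the per-drive figures fall comfortably within PCIe 5.0 limits:

```python
# Sanity check: does 200 GB/s from one 1U box add up? Assumes the full
# 16 E3.S drives share the load evenly (an assumption, not a spec).
DRIVES = 16
total_gbps = 200          # GB/s, system-level claim
total_iops = 25_000_000   # IOPS, system-level claim

print(total_gbps / DRIVES)   # 12.5 GB/s per drive
print(total_iops / DRIVES)   # ~1.56M IOPS per drive

# A PCIe 5.0 x4 link tops out near ~15.8 GB/s of raw bandwidth,
# so ~12.5 GB/s per Gen 5 drive is plausible.
```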

Here's why 

Why might your customers need such a storage system? Several reasons, depending on what sorts of workloads they run:

  • Training AI/ML applications requires massive amounts of data for creating reliable models.
  • HPC projects use and generate immense amounts of data, too. That’s needed for real-world simulations, such as predicting the weather or simulating a car crash.
  • Big-data environments need substantial datasets. These gain intelligence from real-world observations ranging from sensor inputs to business transactions.
  • Enterprise applications need large amounts of data located close to compute and accessible at NVMe over Fabrics (NVMe-oF) speeds.

Also, the Supermicro H13 Petascale Storage System offers significant performance, capacity, throughput and endurance, all while maintaining excellent power efficiency.


Interview: How NEC Germany keeps up with the changing HPC market


The market for high performance computing (HPC) is changing, meaning system integrators that serve HPC customers need to change too.

To learn more, PIC managing editor Peter Krass spoke recently with Oliver Tennert, NEC Germany’s director of HPC marketing and post-sales. NEC Germany works with hardware vendors that include AMD processors and Supermicro servers. This interview has been lightly edited for clarity.

First, please tell us about NEC Germany and its relationship with its parent company, NEC Corp.

I work for NEC Germany, which is a subsidiary of NEC Europe. Our parent company, NEC Corp., is a Japanese company with a focus on telecommunications, which is still a major part of our business. Today NEC has about 100,000 employees around the world.

HPC as a business within NEC is done primarily by NEC Germany and our counterparts at NEC Corp. in Japan. The Japanese operation covers HPC in Asia, and we cover EMEA, mainly Europe.

What kinds of HPC workloads and applications do your customers run?

It’s probably 60:40 — that is, about 60% of our customers are in academia, including universities, research facilities, and even DWD, Germany’s weather-forecasting service. The remaining 40% are industrial, including automotive and engineering companies. 

The typical HPC use cases of our customers come in two categories. The most important HPC category of course is simulation. That can mean simulating physical processes. For example, what does a car crash look like under certain parameters? These simulations are done in great detail.

Our other important HPC category is data analytics. For example, that could mean genomic analysis.

How do you work with AMD and Supermicro?

To understand this, you first have to understand how NEC’s HPC business works. For us, there are two aspects to the business.

One, we’ve got our own vector technology. Our NEC vector engine is a PCIe card designed and produced in Japan. The latest incarnation of our vector supercomputer is the NEC SX-Aurora TSUBASA. It was designed to run applications that are both vectorizable and profit from high bandwidth to main memory. One of our big customers in this area is the German weather service, DWD.

The other part of the business is what we call “pizza boxes,” the x86 architecture. For this, we need industry-standard servers, including processors from AMD and servers from Supermicro.

For that second part of the business, what is NEC’s role?

The answer has to do with how the HPC business works operationally. If a customer intends to purchase a new HPC cluster, typically they need expert advice on designing an optimized HPC environment. What they do know is the application they run. And what they want to know is, ‘How do we get the best, most optimized system for this application?’

This implies doing a lot of configuration. Essentially, we optimize the design based on many different components. Even if we know that an AMD processor is best for a particular task, there are still dozens of combinations of processor SKUs and server model types that offer different price/performance ratios. The same applies to certain data-storage solutions. For HPC, storage is more than just picking an SSD. What’s needed is a completely different kind of technology.

Configuring and setting up such a complex solution takes a lot of expertise. We’re being asked to run benchmarks. That means the customer says, ‘Here’s my application, please run it on some specific configurations, and tell me which one offers the best price/performance ratio.’ This takes a lot of time and resources. For example, you need the systems on hand to just try it out. And the complete tender process—from pre-sales discussions to actual ordering and delivery—can take anywhere from weeks to months.
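
In code, the final ranking step of that benchmarking exercise might look something like the sketch below; the configurations, prices and runtimes are invented placeholders:

```python
# Toy version of the benchmarking step Tennert describes: run the customer's
# workload on candidate configurations, then rank by price/performance.
# All numbers below are invented placeholders.
candidates = [
    {"config": "32-core SKU A, 12 DIMMs", "price_eur": 210_000, "runtime_s": 950},
    {"config": "64-core SKU B, 12 DIMMs", "price_eur": 260_000, "runtime_s": 610},
    {"config": "64-core SKU B, 24 DIMMs", "price_eur": 285_000, "runtime_s": 540},
]

for c in candidates:
    c["perf"] = 1.0 / c["runtime_s"]                  # higher is better
    c["perf_per_eur"] = c["perf"] / c["price_eur"]    # the ranking metric

best = max(candidates, key=lambda c: c["perf_per_eur"])
print("best price/performance:", best["config"])
```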

And this is just to bid, right? After all this work, you still might not get the order?

Yes, that can happen. There are lots of factors that influence your chances. In general, if you have a good working relationship with a private customer, it’s easier. They have more discretion than academic or public customers. For public bids, everything must be more transparent, because it’s more strictly regulated. Normally, that means you have more work, because you have to test more setups. Your competition will be doing the same.

When working with the second group, the private industry customers, do customers specify parts from specific vendors, such as AMD and Supermicro?

It depends on the factors that will influence the customer’s final selection. Price and performance, that’s one thing. Power consumption is another. Then, sometimes, it’s the vendors. Also, certain projects are more attractive to certain vendors because of market visibility—so-called lighthouse projects. That can have an influence on the conditions we get from vendors. Vendors also honor the amount of effort we have put into winning the customer in the first place. So there are all sorts of external factors that can influence the final system design.

Also, today, the majority of HPC solutions are similar from an architectural point of view. So the difference between competing vendors is to take all the standard components and optimize from these, instead of providing a competing architecture. As a result, the soft skills—such as the ability to implement HPC solutions in an efficient and professional way—also have a large influence on the final order.

How about power consumption and cooling? Are these important considerations for your HPC customers?

It’s become absolutely vital. As a rule of thumb, we can say that the larger an HPC project is going to be, the more likely that it is going to be cooled by liquid.

In the past, you had a server room that you cooled with air conditioning. But those times are nearly gone. Today, when you think of a larger HPC installation—say, 1,000 or 2,000 nodes—you’re talking about a megawatt of power being consumed, or even more. And that also needs to be cooled.

The challenge in cooling a large environment is to get the heat away from the server and out of the room to somewhere else, whether outside or to a larger cooling system. This cannot be done by traditional cooling with air. Air is too inefficient for transporting heat. Water is much better. It’s a more efficient means for moving heat from Point A to Point B.

How are you cooling HPC systems with liquid?

There are a few ways to do this. There’s cold-water cooling, mainly indirect. You bring in water with what’s known as an “inlet temperature” of about 10 C and it cools down the air inside the server racks, with the heat getting carried away with the water now at about 15 or 20 C. The issue is, first you need energy just to cool the water down to 10 C. Also, there’s not much you can do with water at 15 or 20 C. It’s too warm for cooling anything else, but too cool for heating a room.

That’s why the new approach is to use hot-water cooling, mainly direct. It sounds like a paradox. But what might seem hot to a human being is in fact pretty cool for a CPU. For a CPU, an ambient temperature of 50 or 60 C is fine; it would be absolutely not fine for a human being. So if you have an inlet temperature for water of, say, 40 or 45 C, that will cool the CPU, which runs at an internal temperature of 80 or 90 C. The outbound temperature of the water is then maybe 50 C. Then it becomes interesting. At that temperature, you can heat a building. You can reuse the heat, rather than just throwing it away. So this kind of infrastructure is becoming more important and more interesting.
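
The same heat equation used to size cooling loops, Q = ṁ·c·ΔT, shows why those outlet temperatures become useful. A small sketch, with the megawatt load taken from the interview and the flow rate assumed for illustration:

```python
# Worked example of the hot-water numbers quoted above: given a heat load
# and coolant flow, what outlet temperature results? Flow is an assumption.
CP_WATER = 4186   # specific heat of water, J/(kg*K)

def outlet_temp_c(load_w: float, flow_kg_s: float, inlet_c: float) -> float:
    return inlet_c + load_w / (CP_WATER * flow_kg_s)

# A 1 MW cluster cooled with 45 C inlet water at an assumed 24 kg/s:
print(outlet_temp_c(1_000_000, 24, 45))   # ~55 C: warm enough to heat a building
```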

Looking ahead, what are some of your top projects for the future?

Public customers such as research universities have to replace their HPC systems every three to five years. That’s the normal cycle. In that time the hardware becomes obsolete, especially as the vendors optimize their power consumption to performance ratio more and more. So it’s a steady flow of new projects. For our industrial customers, the same applies, though the procurement cycle may vary.

We’re also starting to see the use of computational HPC capacity from the cloud. Normally, when people think of cloud, they think of public clouds from Amazon, Microsoft, etc. But for HPC, there are interim approaches as well. A decade ago, there was the idea of a dedicated public cloud. Essentially, this meant a dedicated capacity that was for the customer’s exclusive use, but was owned by someone other than the customer. Now, between the dedicated cloud and public cloud, there are all these shades of grey. In the past two years, we’ve implemented several larger installations of this “grey-shaded” cloud approach. So more and more, we’re entering the service-oriented market.

There is a larger trend away from customers wanting to own a system, and toward customers just wanting to utilize capacity. For vendors with expertise in HPC, they have to change as well. Which means a change in the business and the way they have to work with customers. It boils down to, Who owns the hardware? And what does the customer buy, hardware or just services? That doesn’t make you a public-cloud provider. It just means you take over responsibility for this particular customer environment. You have a different business model, contract type, and set of responsibilities.

 


Supermicro H13 JumpStart remote access program adds latest AMD EPYC processors


Supermicro’s H13 JumpStart Remote Access program—which lets you use Supermicro servers before you buy—now includes the latest Supermicro H13 systems powered by 4th gen AMD EPYC 9004 processors.

These include servers using the two new AMD EPYC processor series introduced in June. One, previously codenamed Bergamo, is optimized for cloud-native workloads. The other, previously codenamed Genoa-X, is equipped with AMD 3D V-Cache technology and is optimized for technical computing.

Supermicro’s free H13 JumpStart program lets you and your customers validate, test and benchmark workloads remotely on Supermicro H13 systems powered by these new AMD processors.

The latest Supermicro H13 systems deliver performance and density with some cool technologies. These include AMD EPYC processors with up to 128 “Zen 4c” cores per socket, DDR5 memory, PCIe 5.0, and CXL 1.1 peripherals support.

Those AMD Zen 4c cores are designed for the sweet spot of both density and power efficiency. Compared with the standard Zen 4 core used in AMD’s general-purpose processors, the new design offers substantially improved performance per watt.

Get started

Getting started with Supermicro’s H13 JumpStart program is simple. Just sign up with your name, email and a brief description of what you plan to do with the system.

Next, Supermicro will verify your information and your request. Assuming you qualify, you’ll receive a welcome email from Supermicro, and you’ll be scheduled to gain access to the JumpStart server.

Next, you’ll be given a unique username, password and URL to access your JumpStart account. Then you can run your test, try new features, and benchmark your application.

Once you’re done, Supermicro will ask you to complete a quick survey for your feedback on the program. That’s it.

The H13 JumpStart program now offers three server configurations: Supermicro’s dual-processor 2U Hyper (AS -2025HS-TNR), single-processor 2U Cloud DC (AS -2015CS-TNR) and single-processor 2U Hyper-U (AS -2115HS-TNR).


Interview: How German system integrator SVA serves high performance computing with AMD and Supermicro


SVA System Vertrieb Alexander GmbH, better known as SVA, is among the leading IT system integrators of Germany. Headquartered in Wiesbaden, the company employs more than 2,700 people in 27 branch offices. SVA’s customers include organizations in automotive, financial services and healthcare.

To learn more about how SVA works jointly with Supermicro and AMD on advanced technologies, PIC managing editor Peter Krass spoke recently with Bernhard Homoelle, head of SVA’s high performance computing (HPC) competence center. Their interview has been lightly edited.

For readers outside of Germany, please tell us about SVA?

First of all, SVA is an owner-operated system integrator. We offer high-quality products, we sell infrastructure, we support certain types of implementations, and we offer operational support to help our customers achieve optimum solutions.

We work with partners to figure out what might be the best solution for our customers, rather than just picking one vendor and trying to convince the customer they should use them. Instead, we figure out what is really needed. Then we go in the direction where the customer can really have their requirements met. The result is a good relationship with the customer, even after a particular deal has been closed.

Does SVA focus on specific industries?

While we do support almost all the big industries—automotive, transportation, public sector, healthcare and more—we are not restricted to any specific vertical. Our main business is helping customers solve their daily IT problems, deal with the complexity of new IT systems, and implement new things like AI and even quantum computing. So we’re open to new solutions. We also offer training with some of our partners.

Germany has a robust auto industry. How do you work with these clients?

In general, they need huge HPC clusters and machine learning. For example, autonomous driving demands not only more computing power, but also more storage. We’re talking about petabytes of data, rather than terabytes. And this huge amount of data needs to be stored somewhere and finally processed. That puts pressure on the infrastructure—not just on storage, but also on the network infrastructure as well as on the compute side. On their way into the cloud, some of these customers are saying, “Okay, offer me HPC as a Service.”

How do you work with AMD and Supermicro?

It’s a really good relationship. We like working with them because Supermicro has all these various types of servers for individual needs. Customers are different, and therefore they have their own requirements. Figuring out what might be the best server for them is difficult if you have limited types of servers available. But with Supermicro, you can get what you have in mind. You don’t have to look for special implementations because they have these already at hand.

We’re also partnering with AMD, and we have access to their benchmark labs, so we can get very helpful information. We start with discussions with the customer to figure out their needs. Typically, we pick up an application from the customer and then use it as a kind of benchmark. Next, we put it on a cluster with different memory, different CPUs, and look for the best solution in terms of performance for their particular application. Based on the findings, we can recommend a specific CPU, number of cores, memory type and size, and more.

With HPC applications, memory bandwidth per core is almost as important as the number of cores. AMD’s new Genoa-X processors should help to overcome some of these limitations. And looking ahead, I’m keen to see what AMD will offer with the Instinct MI300.

Are there special customer challenges you’re solving with Supermicro and AMD solutions?

With HPC workloads, our academic customers say, “This is the amount of money available, so how many servers can you really give us for this budget?” Supermicro and AMD really help here with reasonable prices. They’re a good choice for price/performance.

With AI and machine learning, the real issue is software tools. It really depends what kinds of models you can use and how easy it is to use the hardware with those models.

This discussion is not easy, because for many of our customers today, AI means Nvidia. But I really recommend alternatives, and AMD is bringing some alternatives that are great. They offer a fast time to solution, but they also need to be easy to switch to.

How about "green" computing? Is this an important issue for your customers now?

Yes, more and more we’re seeing customers ask for this green computing approach. Typically, a customer has a thermal budget and a power-price budget. They may say, “In five years, the expenses paid for power should not exceed a certain limit.”

In Europe, we also have a supply-chain discussion. Vendors must increasingly provide proof that they’re taking care in their supply chain with issues including child labor and working conditions. This is almost mandatory, especially in government calls. If you’re unable to answer these questions, you’re out of the bid.

With green computing, we see that the power needed for CPUs and GPUs is going up and up. Five years ago, the maximum a CPU could burn was 200W, but now even 400W might not be enough. Some GPUs are as high as 700W, and there are super-chips beyond even that.

All this makes it difficult to use air-cooled systems. Customers can use air conditioning to a certain extent, but there’s only so much air you can press through the rack. Then you need either on-chip water cooling or some kind of immersion cooling. This can help in two dimensions: saving energy and getting density — you can put the components closer together, and you don’t need the big heat sink anymore.

One issue now is that each vendor offers a different cooling infrastructure. Some of our customers run multi-vendor data centers, so this could create a compatibility issue. That’s one reason we’re looking into immersion cooling. We think we could do some of our first customer implementations in 2024.

Looking ahead, what do you see as a big challenge?

One area is that we want to help customers get easier access to their HPC clusters. That’s done on the software side.

In contrast to classic HPC users, machine learning and AI engineers are not that interested in Linux stuff, compiler options or any other infrastructure details. Instead, they’d like to work on their frameworks. The challenge is getting them to their work as easily as possible—so that they can just log in, and they’re in their development environment. That way, they won’t have to care about what sort of operating system is underneath or what kind of scheduler, etc., is running.

 


Bergamo: a deeper dive into AMD’s new EPYC processor for cloud-native workloads


Bergamo is the former codename for AMD’s new 4th gen EPYC 97X4 processors optimized for cloud-native workloads, which the company introduced earlier this month.

AMD is responding to the increasingly specialized nature of data center workloads by optimizing its server processors for specific workloads. This month AMD introduced two examples: Bergamo (97X4) for cloud and Genoa-X (9XX4X) for technical computing.

The AMD EPYC 97X4 processors are AMD’s first processors designed specifically for cloud-native workloads. And they’re shipping now in volume to AMD’s hyperscale customers, which include Facebook parent company Meta, and to partners including Supermicro.

Speaking of Supermicro, that company this week announced that the new AMD EPYC 97X4 processors can now be included in its entire line of Supermicro H13 AMD-based systems.

Zen mastery

The main difference between the AMD EPYC 97X4 and AMD’s general-purpose Genoa series processors comes down to the core chiplet. The 97X4 CPUs use a new design called “Zen 4c.” It’s an update on the AMD Zen 4 core used in the company’s Genoa processors.

Where AMD’s original Zen 4 was designed for the highest performance per core, the new Zen 4c has been designed for a sweet spot of both density and power efficiency.

As AMD CEO Lisa Su explained during the company’s recent Data Center and AI Technology Premiere event, AMD achieved this by starting with the same RTL design as Zen 4. AMD engineers then optimized the physical layout for power and area. They also redesigned the L3 cache hierarchy for greater throughput.

The result: a design that takes up about 35% less area yet offers substantially better performance per watt.

Because they start from the Zen 4 design, the new 97X4 processors are both software- and platform-compatible with Genoa. The idea is that end users can mix and match 97X4- and Genoa-based servers, depending on their specific workloads and computing needs.

Basic math

Another difference is that where Genoa processors offer up to 96 cores per socket, the new 97X4 processors offer up to 128.

Here’s how it’s done: Each AMD 97X4 system-on-chip (SoC) contains 8 core complex dies (CCDs). In turn, each CCD contains 16 Zen 4c cores. So 8 CCDs x 16 cores = a total of 128 cores.
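
As a quick cross-check, here’s the same arithmetic for both designs side by side (Genoa pairs 12 CCDs with 8 Zen 4 cores each):

```python
# Core-count arithmetic for both designs: Genoa's 12 CCDs of 8 Zen 4 cores
# vs. Bergamo's 8 CCDs of 16 denser Zen 4c cores.
def total_cores(ccds: int, cores_per_ccd: int) -> int:
    return ccds * cores_per_ccd

print(total_cores(12, 8))    # Genoa:    96 cores per socket
print(total_cores(8, 16))    # Bergamo: 128 cores per socket
```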

AMD offers the new EPYC 97X4 series processors in three SKUs.

For security, all three SKUs support AMD Infinity Guard, a suite of hardware-level security features, and AMD Infinity Architecture, which lets system builders and cloud architects get maximum performance while still ensuring security.

Are your customers looking for servers to handle their cloud-native applications? Tell them to look into the new AMD EPYC 97X4 processors.


Why your AI systems can benefit from having both a GPU and CPU


Sports teams win with a range of skills and strengths. A hockey side can’t win if everyone’s playing goalie. The team also needs a center and wings to advance the puck and score goals, as well as defensive players to block the opposing team’s shots.

The same is true for artificial intelligence systems. Like a hockey team with players in different positions, an AI system with both a GPU and CPU is a necessary and winning combo.

This mix of processors can bring you and your customers both the lower cost and greater energy efficiency of a CPU and the parallel processing power of a GPU. With this team approach, your customers should be able to handle any AI training and inference workloads that come their way.

In the beginning

One issue: Neither CPUs nor GPUs were originally designed for AI. In fact, both designs predate AI by many years. Their origins still define how they’re best used, even for AI.

GPUs were initially designed for computer graphics, virtual reality and video. Getting pixels to the screen is a task where high levels of parallelization speed things up. And GPUs are good at parallel processing. This has allowed them to be adapted for HPC and AI workloads, which analyze and learn from large volumes of data. What’s more, GPUs are often used to run HPC and AI workloads simultaneously.

GPUs are also relatively expensive. For example, Nvidia’s new H100 has an estimated retail price of around $25,000 per GPU. Your customers may incur additional costs from cooling—GPUs generate a lot of heat. GPUs also use a lot of power, which can further raise your customer’s operating costs.

CPUs, by contrast, were originally designed to handle general-purpose computing. A modern CPU can run just about any type of calculation, thanks to its encompassing instruction set.

A CPU processes data sequentially, rather than in parallel, and that’s good for linear and complex calculations. A comparable CPU is also generally less expensive than a GPU, needs less power and runs cooler.

In today’s cost-conscious environment, every data center manager is trying to get the most performance per dollar. Even a high-performing CPU has a cost advantage over comparable GPUs that can be extremely important for your customers.

Team players

Just as a hockey team doesn’t rely on its goalie to score points, smart AI practitioners know they can’t rely on their GPUs to do all types of processing. For some jobs, CPUs are still better.

Due to their larger memory capacity, CPUs are ideal for machine learning training and inference, as long as the scale is relatively small. CPUs are also good for training small neural networks, data preparation and feature extraction.

CPUs offer other advantages, too. They’re generally less expensive than GPUs, and they run cooler, requiring less (and less expensive) cooling.

GPUs excel in two main areas of AI: machine learning and deep learning (ML/DL). Both involve the analysis of gigabytes—or even terabytes—of data for image and video processing. For these jobs, the parallel processing capability of a GPU is a perfect match.

AI developers can also leverage a GPU’s parallel compute engines. They can do this by instructing the processor to partition complex problems into smaller, more manageable sub-problems. Then they can use libraries that are specially tuned to take advantage of high levels of parallelism.
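
The same partitioning idea can be illustrated on a CPU. The sketch below splits one large array into independent tiles and maps them across worker processes, standing in for the thousands of parallel engines a GPU would bring to bear:

```python
# CPU-side illustration of the partitioning idea: split a big problem into
# independent tiles and map them across workers, much as GPU libraries
# spread tiles across thousands of parallel compute engines.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def normalize_tile(tile: np.ndarray) -> np.ndarray:
    """Independently normalize one tile (no cross-tile dependencies)."""
    return (tile - tile.mean()) / (tile.std() + 1e-8)

if __name__ == "__main__":
    data = np.random.rand(1_000_000).astype(np.float32)
    tiles = np.array_split(data, 8)            # partition into sub-problems
    with ProcessPoolExecutor() as pool:
        result = np.concatenate(list(pool.map(normalize_tile, tiles)))
    print(result.shape)
```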

Theory into practice

That’s the theory. Now let’s look at how some leading AI tech providers are putting the team approach of CPUs and GPUs into practice.

Supermicro offers its Universal GPU Systems, which combine Nvidia GPUs with CPUs from AMD, including the AMD EPYC 9004 Series.

An example is Supermicro’s H13 GPU server, one model being the AS -8125GS-TNHR. It packs an Nvidia HGX H100 multi-GPU board, dual AMD EPYC 9004 series CPUs, and up to 6TB of DDR5 memory.

For truly large-scale AI projects, Supermicro offers SuperBlade systems designed for distributed, midrange AI and ML training. Large AI and ML workloads can require coordination among multiple independent servers, and the Supermicro SuperBlades are designed to do just that. Supermicro also offers rack-scale, plug-and-play AI solutions built around its GPU systems and turbocharged with liquid cooling.

The Supermicro SuperBlade is available with a single AMD EPYC 7003/7002 series processor with up to 64 cores. You also get AMD 3D V-Cache, up to 2TB of system memory per node, and a 200Gbps InfiniBand HDR switch. Within a single 8U enclosure, you can install up to 20 blades.

Looking ahead, AMD plans to soon ship its Instinct MI300A, an integrated data-center accelerator that combines three key components: AMD Zen 4 CPUs, AMD CDNA3 GPUs, and high-bandwidth memory (HBM) chiplets. This new system is designed specifically for HPC and AI workloads.

Also, the AMD Instinct MI300A’s high data throughput lets the CPU and GPU work on the same data in memory simultaneously. AMD says this CPU-GPU partnership will help users save power, boost performance and simplify programming.

Truly, a team effort.


Research roundup: AI edition


AI is busting out all over. AI is getting prioritized over all other digital investments. The AI market is forecast to grow by over 20% a year through 2030. AI worries Americans about the potential impact on hiring. And AI needs to be safeguarded against the risk of misuse.

That’s some of the latest AI research from leading market watchers. And here’s your research roundup.

The AI priority

Nearly three-quarters (73%) of companies are prioritizing AI over all other digital investments, finds a new report from consultants Accenture. For these AI projects, the No. 1 focus area is improving operational resilience; it was cited by 90% of respondents.

Respondents to the Accenture survey also say the business benefits of AI are real. While only 9% of companies have achieved maturity across all six areas of AI operations, those that have averaged 1.4x higher operating margins than the rest. (Those six areas, by the way, are AI, data, processes, talent, collaboration and stakeholder experiences.)

Compared with less-mature AI operations, these companies also drove 42% faster innovation, 34% better sustainability and 30% higher satisfaction scores.

Accenture’s report is based on its recent survey of 1,700 executives in 12 countries and 15 industries. About 7 in 10 respondents held C-suite-level job titles.

The AI market

It’s no surprise that the AI market is big and growing rapidly. But just how big and how rapidly might surprise you.

How big? The global market for all AI products and services, worth some $428 billion last year, is on track to top $515 billion this year, predicts market watcher Fortune Business Insights.

How fast? Looking ahead to 2030, Fortune Business Insights expects the global AI market to hit $2.03 trillion that year. If so, that would mark a compound annual growth rate (CAGR) of nearly 22%.
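
That growth-rate claim checks out, as a couple of lines of Python show:

```python
# Checking Fortune Business Insights' growth math: $515B (this year) growing
# to $2.03T (2030) over seven years.
start, end, years = 515e9, 2.03e12, 7
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")   # ~21.7%, i.e. "nearly 22%" per year
```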

What’s driving this big, rapid growth? Several factors, says Fortune, including the surge in the number of applications, increased partnering and collaboration, a rise in small-scale providers, and demand for hyper-personalized services.

The AI impact

What, me worry? About six in 10 Americans (62%) believe AI will have a major impact on workers in general. But only 28% believe AI will have a major effect on them personally.

So finds a recent poll by Pew Research of more than 11,000 U.S. adults.

Digging a bit deeper, Pew found that nearly a third of respondents (32%) believe AI will hurt workers more than help; the same percentage believe AI will equally help and hurt; about 1 in 10 respondents (13%) believe AI will help more than hurt; and roughly 1 in 5 of those answering (22%) aren’t sure.

Respondents also widely oppose the use of AI to augment regular management duties. Nearly three-quarters of Pew’s respondents (71%) oppose the use of AI for making a final hiring decision. Six in 10 (61%) oppose the use of AI for tracking workers’ movements while they work. And nearly as many (56%) oppose the use of AI for monitoring workers at their desks.

Facial-recognition technology fared poorly in the survey, too. Fully 7 in 10 respondents were opposed to using the technology to analyze employees’ facial expressions. And over half (52%) were opposed to using facial recognition to track how often workers take breaks. However, a plurality (45%) favored the use of facial recognition to track worker attendance; about a third (35%) were opposed and one in five (20%) were unsure.

The AI risk

Probably the hottest form of AI right now is generative AI, as exemplified by the ChatGPT chatbot. But given the technology’s risks around security, privacy, bias and misinformation, some experts have called for a pause or even a halt on its use.

Because that’s unlikely to happen, one industry watcher is calling for new safeguards. “Organizations need to act now to formulate an enterprisewide strategy for AI trust, risk and security management,” says Avivah Litan, a VP and analyst at Gartner.

What should you do? Two main things, Litan says.

First, monitor out-of-the-box usage of ChatGPT. Use your existing security controls and dashboards to catch policy violations. Also, use your firewalls to block unauthorized use, your event-management systems to monitor logs for violations, and your secure web gateways to monitor disallowed API calls.

Second, for prompt engineering usage—which uses tools to create, tune and evaluate prompt inputs and outputs—take steps to protect the sensitive data used to engineer prompts. A good start, Litan says, would be to store all engineered prompts as immutable assets.
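
One minimal way to implement “immutable” prompt storage is an append-only, hash-chained log, where tampering with any stored prompt breaks the chain. A sketch of the idea, not a production design:

```python
# Minimal sketch of "store engineered prompts as immutable assets": an
# append-only log where each record is chained to the previous one by hash,
# so later tampering with any stored prompt is detectable.
import hashlib
import json
import time

LOG = "prompts.log"

def append_prompt(prompt: str) -> str:
    try:
        prev = open(LOG).readlines()[-1]
        prev_hash = json.loads(prev)["hash"]
    except (FileNotFoundError, IndexError):
        prev_hash = "genesis"
    record = {"ts": time.time(), "prompt": prompt, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["hash"]
```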
