
AMD and Supermicro: Pioneering AI Solutions


Bringing AMD Instinct to the Forefront

In the constantly evolving landscape of AI and machine learning, the synergy between hardware and software is paramount. Enter AMD and Supermicro, two industry titans who have joined forces to empower organizations in the new world of AI with cutting-edge solutions. Their shared vision? To enable organizations to unlock the full potential of AI workloads, from training massive language models to accelerating complex simulations.

The AMD Instinct MI300 Series: Changing The AI Acceleration Paradigm

At the heart of this collaboration lies the AMD Instinct MI300 Series—a family of accelerators designed to redefine performance boundaries. These accelerators combine high-performance AMD EPYC™ 9004 Series CPUs with the powerful AMD Instinct™ MI300X GPU accelerators and 192GB of HBM3 memory, creating a formidable force for AI, HPC, and technical computing.

Supermicro’s H13 Generation of GPU Servers

Supermicro’s H13 generation of GPU Servers serves as the canvas for this technological masterpiece. Optimized for leading-edge performance and efficiency, these servers integrate seamlessly with the AMD Instinct MI300 Series. Let’s explore the highlights:

8-GPU Systems for Large-Scale AI Training:

  • Supermicro’s 8-GPU servers, equipped with the AMD Instinct MI300X OAM accelerator, offer raw acceleration power. The AMD Infinity Fabric™ Links enable up to 896GB/s of peak theoretical P2P I/O bandwidth, while the 1.5TB HBM3 GPU memory fuels large-scale AI models.
  • These servers are ideal for LLM Inference and training language models with trillions of parameters, minimizing training time and inference latency, lowering the TCO and maximizing throughput.
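The headline memory figure is simple arithmetic; a quick sketch (using the per-GPU capacity quoted above) shows where the 1.5TB comes from:

```python
# Aggregate HBM3 capacity of an 8-GPU MI300X system:
# 8 OAM modules x 192 GB of HBM3 each.
gpus_per_system = 8
hbm3_per_gpu_gb = 192

total_hbm_gb = gpus_per_system * hbm3_per_gpu_gb
print(total_hbm_gb)         # 1536 GB
print(total_hbm_gb / 1024)  # 1.5 TB, the figure quoted above
```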

Benchmarking Excellence

But what about real-world performance? Fear not! Supermicro’s ongoing testing and benchmarking efforts have yielded remarkable results. The continued engagement between the AMD and Supermicro performance teams enabled Supermicro to test pre-release ROCm versions with the latest performance optimizations, as well as publicly released optimizations like Flash Attention 2 and vLLM. The Supermicro AMD-based system AS -8125GS-TNMR2 showcases AI inference prowess, especially on models like Llama-2 70B, Llama-2 13B, and Bloom 176B. The performance? Equal to or better than AMD’s published results from the Dec. 6 Advancing AI event.


Charles Liang’s Vision

In the words of Charles Liang, President and CEO of Supermicro:

“We are very excited to expand our rack scale Total IT Solutions for AI training with the latest generation of AMD Instinct accelerators. Our proven architecture allows for fully integrated liquid cooling solutions, giving customers a competitive advantage.”

Conclusion

The AMD-Supermicro partnership isn’t just about hardware and software stacks; it’s about pushing boundaries, accelerating breakthroughs, and shaping the future of AI. So, as we raise our virtual glasses, let’s toast to innovation, collaboration, and the relentless pursuit of performance and excellence.

10 best practices for scaling the CSP data center — Part 1


Cloud service providers, here are 10 best practices—courtesy of Supermicro—that you can follow for designing and deploying rack-scale data centers. All are based on Supermicro’s real-world experience with customers around the world.

Best Practice No. 1: First standardize, then scale

First, select a configuration of compute, storage and networking. Then scale these configurations up and down into setups you designate as small, medium and large.

Later, you can deploy these standard configurations at various data centers with different numbers of users, workload sizes and growth estimates.

Best Practice No. 2: Optimize the configuration

Good as Best Practice No. 1 is, it may not work if you handle a very wide range of workloads. If that’s the case, then you may want to instead optimize the configuration.

Here’s how. First, run the software on the rack configuration to determine the best mix of CPUs, including cores, memory, storage and I/O. Then consider setting up different sets of optimized configurations.

For example, you might send AI training workloads to GPU-optimized servers, while running a database application on a standard 2-socket CPU system.
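As a sketch of that routing idea (the pool names and workload classes here are purely illustrative):

```python
# Map each workload class to the server pool optimized for it.
# Unknown classes fall back to the general-purpose CPU pool.
POOLS = {
    "ai-training": "gpu-optimized",  # e.g., 8-GPU systems
    "database": "2-socket-cpu",      # standard dual-socket servers
}

def pool_for(workload: str) -> str:
    return POOLS.get(workload, "2-socket-cpu")

print(pool_for("ai-training"))  # gpu-optimized
print(pool_for("database"))     # 2-socket-cpu
```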

Best Practice No. 3: Plan for tech refreshes 

When it comes to technology, the only constant is change itself. That doesn’t mean you can just wait around for the latest, greatest upgrade. Instead, do some strategic planning.

That might mean talking with key suppliers about their road maps. What are their plans for transitions, costs, supply chains and more?

Also consider that leading suppliers now let you upgrade some server components without having to replace the entire chassis. That reduces waste. It could also help you get more performance from your current racks, servers and power budget.

Best Practice No. 4: Look for new architectures

New architectures can help you increase power at lower cost. For example, AMD and Supermicro offer data-center accelerators that let you run AI workloads on a mix of GPUs and CPUs, a less costly alternative to all-GPU setups.

To find out if you could benefit from new architectures, talk with your suppliers about running proof-of-concept (PoC) trials of their new technologies. In other words, try before you buy.

Best Practice No. 5: Create a support plan

Sure, you need to run 24x7, but that doesn’t mean you have to pay third parties for all of that. Instead, determine what level of support you can provide in-house. For what remains, you can either staff up or outsource.

When you do outsource, make sure your supplier has tested your software stack before. You want to be sure that, should you have a problem, the supplier will be able to respond quickly and correctly.

10 best practices for scaling the CSP data center — Part 2

Cloud service providers, here are 5 more best practices—courtesy of Supermicro—that you can follow for designing and deploying rack-scale data centers. All are based on Supermicro’s real-world experience with customers around the world.

Best Practice No. 6: Design at the data-center level

Consider your entire data center as a single unit, complete with its range of both strengths and weaknesses. This will help you tackle such macro-level issues as the separation of hot and cold aisles, forced air cooling, and the size of chillers and fans.

If you’re planning an entirely new data center, remember to include a discussion of cooling tech. Why? Because the physical infrastructure needed for an air-cooled center is quite different than that needed for liquid cooling.

Best Practice No. 7: Understand & consider liquid cooling

We’re approaching the limits of air cooling. A new approach, one based on liquid cooling, promises to keep processors and accelerators running within their design limits.

Liquid cooling can also reduce a data center’s Power Usage Effectiveness (PUE) ratio, the ratio of total facility energy to the energy used by the IT equipment alone. This cooling tech can also minimize the need for HVAC cooling power.
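The PUE math itself is straightforward; this minimal sketch (illustrative numbers only, not measured data) shows why lower cooling overhead pulls the ratio toward the ideal of 1.0:

```python
# PUE = total facility energy / energy used by IT equipment alone.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# Hypothetical: same IT load, less cooling overhead with liquid cooling.
print(round(pue(1600, 1000), 2))  # 1.6 (air-cooled example)
print(round(pue(1100, 1000), 2))  # 1.1 (liquid-cooled example)
```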

Best Practice No. 8: Measure what matters

You can’t improve what you don’t measure. So make sure you are measuring such important factors as your data center’s CPU, storage and network utilization.

Good tools are available that can take these measurements at the cluster level. These tools can also identify both bottlenecks and levels of component over- or under-use.

Best Practice No. 9: Manage jobs better

A CSP’s data center is typically used simultaneously by many customers. That pretty much means using a job-management scheduler tool.

One tricky issue is over-demand. That is, what do you do if you lack enough resources to satisfy all requests for compute, storage or networking? A job scheduler can help here, too.
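One way to picture over-demand handling is a priority queue that holds jobs until enough nodes free up. This is a toy sketch, not any particular scheduler's API:

```python
import heapq

class Scheduler:
    """Toy job scheduler: queue by priority, run when nodes are free."""
    def __init__(self, total_nodes: int):
        self.free = total_nodes
        self.pending = []  # heap of (priority, job_id, nodes_needed)
        self.running = []

    def submit(self, priority: int, job_id: str, nodes: int) -> None:
        heapq.heappush(self.pending, (priority, job_id, nodes))
        self._dispatch()

    def finish(self, job_id: str) -> None:
        for job in self.running:
            if job[1] == job_id:
                self.running.remove(job)
                self.free += job[2]
                break
        self._dispatch()

    def _dispatch(self) -> None:
        # Launch queued jobs in priority order while capacity remains.
        while self.pending and self.pending[0][2] <= self.free:
            job = heapq.heappop(self.pending)
            self.free -= job[2]
            self.running.append(job)

sched = Scheduler(total_nodes=16)
sched.submit(1, "train-llm", 12)  # runs immediately (12 <= 16)
sched.submit(2, "inference", 8)   # waits: only 4 nodes free
sched.finish("train-llm")         # frees 12 nodes; "inference" starts
print([job[1] for job in sched.running])  # ['inference']
```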

Best Practice No. 10: Simplify your supply chain

Sure, competition across the industry is a good thing, driving higher innovation and lower prices. But within a single data center, standardizing on just a single supplier could be the winning ticket.

This approach simplifies ordering, installation and support. And if something should go wrong, then you’ll have only the proverbial “one throat to choke.”

Can you still use third-party hardware as appropriate? Sure. And with a single main supplier, that integration should be simpler, too.

Data-center service providers: ready for transformation?

An IDC researcher argues that providers of data-center hosting services face new customer demands that require them to create new infrastructure stacks. Key elements will include rack-scale integration, accelerators and new CPU cores. 


If your organization provides data-center hosting services, brace yourself. Due to changing customer demands, you’re about to need an entirely new infrastructure stack.

So argues Chris Drake, a senior research director at market watcher IDC, in a recently published white paper sponsored by Supermicro and AMD, The Power of Now: Accelerate the Datacenter.

In his white paper, Drake asserts that this new data center infrastructure stack will include new CPU cores, accelerated computing, rack-scale integration, a software-defined architecture, and the use of a micro-services application environment.

Key drivers

That’s a challenging list. So what’s driving the need for this new infrastructure stack? According to Drake, changing customer requirements.

More specifically, a growing demand for hosted IT services. For reasons related to cost, security and performance, many IT shops are choosing to retain proprietary workloads on premises and in private-cloud environments.

While some of these IT customers have sufficient capacity in their data centers to host these workloads on prem, many don’t. They’ll rely instead on service providers for a range of hosted IT requirements. To meet this demand, Drake says, service providers will need to modernize.

Another driver: growing customer demand for raw compute power, a direct result of their adoption of new, advanced computing tools. These include analytics, media streaming, and of course the various flavors of artificial intelligence, including machine learning, deep learning and generative AI.

IDC predicts that spending on servers ranging in price from $10K to $250K will rise from a global total of $50.9 billion in 2022 to $97.4 billion in 2027. That would mark a 5-year compound annual growth rate of nearly 14%.
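The quoted growth rate checks out, as a few lines of arithmetic confirm:

```python
# IDC forecast quoted above: $50.9B (2022) growing to $97.4B (2027).
start, end, years = 50.9, 97.4, 5
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 13.9%, i.e. "nearly 14%"
```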

Under the hood

What will building this new infrastructure stack entail? Drake points to 5 key elements:

  • Higher-performing CPU cores: These include chiplet-based CPU architectures. Along with distributed and composable hardware designs, they can enable more efficient use of shared resources and more scalable compute performance.
  • Accelerated computing: Core CPU processing will increasingly be supplemented by hardware accelerators, including those for AI. They’ll be needed to support today’s—and tomorrow’s—increasingly diverse range of high-performance and data-intensive workloads.
  • Rack-scale integration: Pre-tested racks can facilitate faster deployment, integration and expansion. They can also enable a converged-infrastructure approach to building and scaling a data center.
  • Software-defined data center technology: In this approach, virtualization concepts such as abstraction and pooling are extended to a data center’s compute, storage, networking and other resources. The benefits include increased efficiency, better management and more flexibility.
  • A microservices application architecture: This approach divides large applications into smaller, independently functional units. In so doing, it enables a highly modular and agile way for applications to be developed, maintained and upgraded.

Plan for change

Rome wasn’t built in a day. Modernizing a data center will take time, too.

To help service providers implement a successful modernization, Drake of IDC offers this 6-point action plan:

1. Develop a transformation road map: Aim to strike a balance between harnessing new technology opportunities on the one hand and being realistic about your time frames, costs and priorities on the other.

2. Work with a full-stack portfolio vendor: You want a solution that’s tailored for your needs, not just an off-the-rack package. “Full stack” here means a complete offering of servers, hardware accelerators, storage and networking equipment—as well as support services for all of the above.

3. Match accelerators to your workloads: You don’t need a Formula 1 race car to take the kids to school. Same with your accelerators. Sure, you may have workloads that require super-low latency and equally high throughput. But you’re also likely to be supporting workloads that can take advantage of more affordable CPU-GPU combos. Work with your vendors to match their hardware with your workloads.

4. Seek suppliers with the right experience: Work with tech vendors that know what you need. Look for those with proven track records of helping service providers to transform and scale their infrastructures.

5. Select providers with supply-chain ownership: Ideally, your tech vendors will fully own their supply chains for boards, systems and rack designs such as liquid-cooling systems. That includes managing the vertical integration needed to combine these elements. The right supplier could help you save costs and get to market faster.

6. Create a long-term plan: Plan for the short term, but also look ahead into the future. Technology isn’t sitting still, and neither should you. Plan for technology refreshes. Ask your vendors for their road maps, and review them. Decide what you can support in-house versus what you’ll probably need to hand off to partners.

At MWC, Supermicro intros edge server, AMD demos tech advances

Learn what Supermicro and AMD showed at the big mobile world conference in Barcelona. 


This year’s MWC Barcelona, held Feb. 26 - 29, was a really big show. Over 101,000 people attended from 205 countries and territories. More than 2,700 organizations either exhibited, partnered or sponsored. And over 1,100 subject-matter experts made presentations.

Among those many exhibitors were Supermicro and AMD.

Supermicro showed off the company’s new AS -1115SV, a cost-optimized, single-AMD-processor server for the edge data center.

And AMD offered demos on AI engines, cryptography for quantum computing and more.

Supermicro AS -1115SV

Okay, Supermicro’s full SKU for this system is A+ Server AS -1115SV-WTNRT. That’s a mouthful, but the essence is simple: It’s a 1U short-depth server, powered by a single AMD processor, and designed for the edge data center.

The single CPU in question is an AMD EPYC 8004 Series processor with up to 64 cores. Memory maxes out at 576 GB of DDR5, and you also get 3 PCIe 5.0 x16 slots and up to 10 hot-swappable 2.5-inch drive bays.

The server’s intended applications include virtualization, firewall, edge computing, cloud services, and database/storage. Supermicro says the server’s high efficiency and low power envelope make it ideal for both telco and edge applications.

AMD’s MWC demos

AMD gave a slew of demos from its MWC booth. Here are three:

  • 5G advanced & AI integrated on the same device: Both 5G advanced and future 6G wireless communication systems require that intensive signal processing and novel AI algorithms run on the same device and AI engine. AMD demo’d its AI Engines, power-efficient, general-purpose processors that can be programmed to address both signal-processing and AI requirements in future wireless systems.
  • High-performance quantum-safe cryptography: Quantum computing threatens the security of existing asymmetric or public-key cryptographic algorithms. This demo showed some powerful alternatives on AMD devices: Kyber, Dilithium and PQShield.
  • GreenRAN 5G on EPYC 8004 Series processors: GreenRAN is an open RAN (radio access network) solution from Parallel Wireless. It’s designed to operate seamlessly across various general-purpose CPUs—including, as this demo showed, the AMD EPYC 8004 family.

Supermicro Adds AI-Focused Systems to H13 JumpStart Program

Supermicro is now letting you validate, test and benchmark AI workloads on its AMD-based H13 systems right from your browser. 


Supermicro has added new AI-workload-optimized GPU systems to its popular H13 JumpStart program. This means you and your customers can validate, test and benchmark AI workloads on a Supermicro H13 system right from your PC’s browser.

The JumpStart program offers remote sessions to fully configured Supermicro systems with SSH, VNC, and web IPMI. These systems feature the latest AMD EPYC 9004 Series Processors with up to 128 ‘Zen 4c’ cores per socket, DDR5 memory, PCIe 5.0, and CXL 1.1 peripherals support.

In addition to previously available models, Supermicro has added the H13 4U GPU System with dual AMD EPYC 9334 processors and Nvidia L40S AI-focused universal GPUs. This H13 configuration is designed for heavy AI workloads, including applications that leverage machine learning (ML) and deep learning (DL).

3 simple steps

The engineers at Supermicro know the value of your customer’s time. So, they made it easy to initiate a session and get down to business. The process is as simple as 1, 2, 3:

  • Select a system: Go to the main H13 JumpStart page, then scroll down and click one of the red “Get Access” buttons to browse available systems. Then click “Select Access” to pick a date and time slot. On the next page, select the configuration and press “Schedule” and then “Confirm.”
  • Sign in: Log in with a Supermicro SSO account to access the JumpStart program. If you or your customers don’t already have an account, creating one is both free and easy.
  • Initiate secure access: When the scheduled time arrives, begin the session by visiting the JumpStart page. Each server will include documentation and instructions to help you get started quickly.

So very secure

Security is built into the program. For instance, the server is not on a public IP address. Nor is it directly addressable to the Internet. Supermicro sets up the jump server as a proxy, and this provides access to only the server you or your customer are authorized to test.
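Supermicro supplies the actual connection details with each session, but a jump-server setup like this is typically reached with an SSH ProxyJump entry along these lines (the hostnames and addresses below are hypothetical, for illustration only):

```
# Hypothetical ~/.ssh/config entry for a proxied demo session.
Host h13-demo
    HostName 10.0.0.5          # private address of the demo server
    User demo
    ProxyJump demo@jumpstart-proxy.example.com
```

With that in place, `ssh h13-demo` tunnels through the proxy to reach only the server you’re authorized to test.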

And there’s more. After your JumpStart session ends, the server is manually secure-erased, the BIOS and firmware are re-flashed, and the OS is reinstalled with new credentials. That way, you can be sure any data you’ve sent to the H13 system will disappear once the session ends.

Supermicro is serious about its security policies. However, the company still warns users to keep sensitive data to themselves. The JumpStart program is meant for benchmarking, testing and validation only. In their words, “processing sensitive data on the demo server is expressly prohibited.”

Keep up with the times

Supermicro’s expertly designed H13 systems are at the core of the JumpStart program, with new models added regularly to address typical workloads.

In addition to the latest GPU systems, the program also features hardware focused on evolving data center roles. This includes the Supermicro H13 CloudDC system, an all-in-one rackmount platform for cloud data centers. Supermicro CloudDC systems include single AMD EPYC 9004 series processors and up to 10 hot-swap NVMe/SATA/SAS drives.

You can also initiate JumpStart sessions on Supermicro Hyper Servers. These multi-use machines are optimized for tasks including cloud, 5G core, edge, telecom and hyperconverged storage.

Supermicro Hyper Servers included in the company’s JumpStart program offer single or dual processor configurations featuring AMD EPYC 9004 processors and up to 8TB of DDR5 memory in a 1U or 2U form factor.

Helping your customers test and validate a Supermicro H13 system for AI is now easy. Just get a JumpStart.

AMD CTO: ‘AI across our entire portfolio’

In a presentation for industry analysts, AMD chief technology officer Mark Papermaster laid out the company’s vision for artificial intelligence everywhere — from PC and edge endpoints to the largest hypervisor servers.


The current buildout of the artificial intelligence infrastructure is an event as big as the original launch of the internet.

AI, now mainly an expense, will soon be monetized. Thousands of AI applications are coming.

And AMD plans to embed AI across its entire product portfolio. That will include components and software on everything from PCs and edge sensors to the largest servers used by the big cloud hypervisors.

These were among the comments of Mark Papermaster, AMD’s executive VP and CTO, during a recent fireside chat hosted by stock research firm Arete Research. During the hour-long virtual presentation, Papermaster answered questions from moderator Brett Simpson of Arete and attending stock analysts. Here are the highlights.

The overall AI market

AMD has said it believes the total addressable market (TAM) for AI through 2027 is $400 billion. “That surprised a lot of people,” Papermaster said, but AMD believes a huge AI infrastructure is needed.

That will begin with the major hyperscalers. AWS, Google Cloud and Microsoft Azure are among those looking at massive AI buildouts.

But there’s more. AI is not only in the domain of these massive clusters. Individual businesses will be looking for AI applications that can drive productivity and enhance the customer experience.

The models for these kinds of AI systems are typically smaller. They can be run on smaller clusters, too, whether on-premises or in the cloud.

AI will also make its way into endpoint devices. They’ll include PCs, embedded devices, and edge sensors.

Also, AI is more than just compute. AI systems also require robust memory, storage and networking.

“We’re thrilled to bring AI across our entire product portfolio,” Papermaster said.

Looking at the overall AI market, AMD expects to see a compound annual growth rate of 70%. “I know that seems huge,” Papermaster said. “But we are investing to capture that growth.”

AI pricing

Pricing considerations need to take into account more than just the price of a GPU, Papermaster argued. You really have to look at the total cost of ownership (TCO).

The market is operating with an underlying premise: Demand for AI compute is insatiable. That will drive more and more compute into a smaller area, delivering better performance per watt as measured in FLOPS, the most common gauge of AI compute performance.

Right now, the AI compute model is dominated by a single player. But AMD is now bringing the competition. That includes the recently announced MI300 accelerator. But as Papermaster pointed out, there’s more, too. “We have the right technology for the right purpose,” he said.

That includes using not only GPUs, but also (where appropriate) CPUs. These workloads can include AI inference, edge computing, and PCs. In this way, user organizations can better manage their overall CapEx spend.

As moderator Simpson reminded him, Papermaster is fond of saying that customers buy road maps. So naturally he was asked about AMD’s plans for the AI future. Papermaster mainly deferred, saying more details will be forthcoming. But he also reminded attendees that AMD’s investments in AI go back several years and include its ROCm software enablement stack.

Training vs. inference

Training and inference are currently the two biggest AI workloads. Papermaster believes we’ll see the AI market bifurcate along their two lines.

Training depends on raw computational power in a vast cluster. For example, the popular ChatGPT generative AI tool uses a model with over a trillion parameters. That’s where AMD’s MI300 comes into play, Papermaster said, “because it scales up.”

This trend will continue, because for large language models (LLMs), the issue is latency. How quickly can you get a response? That requires not only fast processors, but also equally fast memory.

More specific inferencing applications, typically run after training is completed, are a different story, Papermaster said, adding: “Essentially, it’s ‘I’ve trained my model; now I want to organize it.’” These workloads are more concise and less demanding of both power and compute, meaning they can run on more affordable GPU-CPU combinations.

Power needs for AI

User organizations face a challenge: While running an AI system requires a lot of power, many data centers are what Papermaster called “power-gated.” In other words, they’re unable to drive up compute capacity to AI levels using current technology.

AMD is on the case. In 2020, the company committed itself to driving a 30x improvement in power efficiency for its products by 2025. Papermaster said the company is still on track to deliver that.

To do so, he added, AMD is thinking in terms of “holistic design.” That means not just hardware, but all the way through an application to include the entire stack.

One promising area involves AI workloads that can use AI approximation. These are applications that, unlike HPC workloads, do not need incredible levels of accuracy. As a result, performance is better for lower-precision arithmetic than it is for high-precision. “Not all AI models are created equally,” Papermaster said. “You’ll need smaller models, too.”
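A tiny, self-contained illustration of that trade-off (made-up weights, not an AI benchmark): quantizing values to 8-bit precision barely changes the result of a dot product while using a quarter of the memory of 32-bit floats:

```python
# Compare a dot product at full precision vs. 8-bit quantized weights.
weights = [0.812, -0.237, 0.454, -0.991]
inputs = [1.5, 2.0, -0.5, 1.0]

def quantize(w: float, scale: int = 127) -> float:
    # Map a [-1, 1] float to a signed 8-bit integer and back.
    return round(w * scale) / scale

full = sum(w * x for w, x in zip(weights, inputs))
quant = sum(quantize(w) * x for w, x in zip(weights, inputs))
print(full, quant)  # nearly identical results
```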

AMD is among those who have been surprised by the speed of AI adoption. In response, AMD has increased its projection of AI sales this year from $2 billion to $3.5 billion, what Papermaster called the fastest ramp AMD has ever seen.

For Ansys engineering simulations, check out Supermicro's AMD-powered SuperBlade

The Supermicro SuperBlade powered by AMD EPYC processors provides exceptional memory bandwidth, floating-point performance, scalability and density for technical computing workloads. It's valuable to your customers who use Ansys software to create complex simulations that help solve real-world problems.
 

If you have engineering customers, take note. Supermicro and AMD have partnered with Ansys Inc. to create an advanced HPC platform for engineering simulation software.

The Supermicro SuperBlade, powered by AMD EPYC processors, provides exceptional memory bandwidth, floating-point performance, scalability and density for technical computing workloads.

This makes the Supermicro system especially valuable to your customers who use Ansys software to create complex simulations that help solve real-world problems.

The power of simulation

As you may know, engineers design the objects that make up our daily lives—everything from iPhones to airplane wings. Simulation software from Ansys enables them to do it faster, more efficiently and less expensively, resulting in highly optimized products.

Product development requires careful consideration of physics and material properties. Improperly simulating the impact of natural physics on a theoretical structure could have dramatic, even life-threatening consequences.

How bad could it get? Picture the wheels coming off a new car on the highway.

That’s why it’s so important for engineers to have access to the best simulation software operating on the best-designed hardware.

And that’s what makes the partnership of Supermicro, AMD and Ansys so valuable. The result of this partnership is a software/hardware platform that can run complex structural simulations without sacrificing either quality or efficiency.

Wanted: right tool for the job

Product simulations can lead to vital developments, whether artificial heart valves that save lives or green architectures that battle climate change.

Yet complex simulation software is extremely resource-intensive. Running a simulation on under-equipped hardware can be a frustrating and costly exercise in futility.

Even with modern, well-equipped systems, users of simulation software can encounter a myriad of roadblocks. These are often due to inadequate processor frequency and core density, insufficient memory capacity and bandwidth, and poorly optimized I/O.

Best-of-breed simulation software like Ansys Fluent, Mechanical, CFX, and LS-DYNA demands a cutting-edge turnkey hardware solution that can keep up, no matter what.

That’s one super blade

In the case of Supermicro’s SuperBlade, that solution leverages some of the world’s most advanced computing tech to ensure stability and efficiency.

The SuperBlade’s 8U enclosure can be equipped with up to 20 compute blades. Each blade may contain up to 2TB of DDR4 memory, two hot-swap drives, AMD Instinct accelerators and 3rd gen AMD EPYC 7003 processors.

The AMD processors include up to 64 cores and 768 MB of L3 cache. All told, the SuperBlade enclosure can contain a total of 1,280 CPU cores.
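The quoted enclosure total works out to one 64-core socket per blade, as the arithmetic shows:

```python
# 20 blades x one 64-core AMD EPYC 7003 CPU per blade.
blades, cores_per_cpu = 20, 64
total_cores = blades * cores_per_cpu
print(total_cores)  # 1280
```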

Optimized I/O comes in the form of 1G, 10G, 25G or 100G Ethernet or 200G InfiniBand. And each node can house up to 2 additional low-profile PCIe 4.0 x16 expansion cards.

The modular design of SuperBlade enables Ansys users to run simultaneous jobs on multiple nodes in parallel. The system is so flexible, users can assign any number of jobs to any set of nodes.

As an added benefit, different blades can be used in the same chassis. This allows workloads to be assigned to wherever the maximum performance can be achieved.

For instance, a user could launch a four-node parallel job on four nodes and simultaneously two 8-node parallel jobs on the remaining 16 nodes. Alternatively, an engineer could run five 4-node parallel jobs on 20 nodes or ten 2-node parallel jobs on 20 nodes.
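Each of those example job mixes fills the 20 available nodes exactly, which a quick check confirms:

```python
# Example job mixes from above; each must fit within 20 SuperBlade nodes.
mixes = [
    [4, 8, 8],   # one 4-node job plus two 8-node jobs
    [4] * 5,     # five 4-node parallel jobs
    [2] * 10,    # ten 2-node parallel jobs
]
for mix in mixes:
    assert sum(mix) == 20
print("all example mixes use exactly 20 nodes")
```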

The bottom line

Modern business leaders must act as both engineers and accountants, balancing the limitless possibilities of design against the limited cash flow at their discretion.

The Supermicro SuperBlade helps make that job a little easier. Supermicro, AMD and Ansys have devised a way to give your engineering customers the tools they need, yet still optimize data-center footprint, power requirements and cooling systems.

The result is a lower total cost of ownership (TCO) with no compromise in quality.

Do more:

Featured videos


Events


Find AMD & Supermicro Elsewhere

Related Content

Get a better Google on-prem cloud with Supermicro SuperBlade


Supermicro SuperBlade servers powered by AMD EPYC processors are ideal for managing cloud-native workloads, and for connecting to the wealth of services the Google Cloud Platform provides.


Everyone’s moved to the public cloud, right? No, not quite.

Sure, many organizations have moved to the cloud for application development and a place to run applications. And why not, since the benefits can include faster time to market, greater efficiencies, increased scalability and lower costs.

Yet many organizations have too many IT systems and processes to “lift and shift” them to the cloud all at once. Instead, their journey to the cloud will likely take months or even years.

In the meantime, some are adopting on-premises clouds. This approach gives them dedicated, bare metal servers, or servers that can be set up with cloud services and capabilities.

One popular approach to an on-premises cloud is Google GDC Virtual. Formerly known as Google Anthos on-prem and bare metal, this solution extends Google’s cloud capabilities and services to an organization’s on-prem data center.

Your customers can use Google GDC Virtual to run new, modernized applications, bring in AI and machine learning workloads, and modernize on-premises applications.

All this should be especially interesting to your customers if they already use the Google Distributed Cloud (GDC). This portfolio of products now includes GDC Virtual, extending Google’s cloud infrastructure and services to the edge and corporate data centers.

More help is here now from Supermicro SuperBlade servers powered by AMD EPYC processors. They're ideal for managing cloud-native workloads and for connecting to the wealth of services the Google Cloud Platform provides.

These servers include a bare metal option that delivers many cloud benefits to self-managed Supermicro SuperBlade servers. This offers your customers Bare Metal as a Service (BMaaS) for workloads that include AI inferencing, visual computing, big data and high-performance computing (HPC).

Why on-prem cloud?

With the public cloud such a popular, common solution, why might your customers prefer to run an on-prem cloud? The reasons include:

  • Data security, compliance and sovereignty requirements. For example, privacy regulations may prohibit your customer from running an application in the public cloud.
  • Monolithic application design. Some legacy application architectures don’t align with cloud pricing models.
  • Demand for networking with very low latency. Highly transactional systems, such as those used by banks, benefit from being physically close to their users, data and next-hop processors in the application flow.
  • Legacy investment protection. Your customer may have already spent a small fortune on on-prem servers, networking gear and storage devices. For them, shifting from CapEx to OpEx (normally one of the big benefits of moving to the cloud) may not be an option.

Using GDC Virtual, your customers can deploy both traditional and cloud-native apps. A single GDC Virtual cluster can support deployments across multiple cloud platforms, including not only Google Cloud, but also AWS and Microsoft Azure.

Super benes

If all this sounds like a good option for your customers, you should also consider Supermicro servers. They’re ideal for managing cloud-native workloads when used as control plane nodes and worker nodes to create a GDC Virtual hybrid cluster.

Here are some of the main benefits your customers can enjoy by using Supermicro SuperBlade servers powered by AMD EPYC processors:

  • Hardware-agnostic: Your customers can leverage existing on-prem SuperBlade servers to drive data-center efficiency.
  • No hypervisor layer overhead: Deploying GDC Virtual on SuperBlade reduces complexity.
  • Rapid deployment: GDC Virtual enables rapid cloud-native application development and delivery. So both developers and dev-ops teams can benefit from increased productivity.
  • Easy manageability: SuperBlade manageability, coupled with GDC Virtual management, enables increased operational efficiency. A dashboard lets you monitor what’s going on.

Under the hood

Supermicro SuperBlade servers are powered by AMD EPYC 7003 Series processors with AMD 3D V-Cache tech. These CPUs, built around AMD’s “Zen 3” core, contain up to 64 cores per socket.

Supermicro offers three AMD-powered SuperBlade models: SAS, SATA and GPU-accelerated. These can be mixed in a single 8U enclosure, a feature Supermicro calls “private cloud in a box.” Each enclosure supports up to 40 single-width GPUs or 20 double-width GPUs.

Each enclosure also contains at least one Chassis Management Module (CMM). This lets sys admins remotely manage and monitor server blades, power supplies, cooling fans and networking switches.

Another Supermicro SuperBlade feature is SuperCloud Composer (SCC). It provides a unified dashboard for administering software-defined data centers.

Have customers who want the benefits of the cloud, but without moving to the cloud? Suggest that they adopt an on-premises cloud. And tell them how they can do that by running Google GDC on Supermicro SuperBlade servers powered by AMD EPYC processors.


Related Content

Looking to accelerate AI? Start with the right mix of storage


That’s right, storage might be the solution to speeding up your AI systems.

Why? Because today’s AI and HPC workloads demand a delicate storage balance. On the one hand, they need flash storage for high performance. On the other, they also need object storage for data that, though large, is used less frequently.

Supermicro and AMD are here to help with a reference architecture that’s been tested and validated at customer sites.

Called the Scale-Out Storage Reference Architecture, it offers a way to deliver massive amounts of data at high bandwidth and low latency to data-intensive applications. The architecture also defines how to manage data life-cycle concerns, including migration and cold-storage retention.

At a high level, Supermicro’s reference architecture addresses three important demands of AI and HPC storage:

  • Data lake: It needs to be large enough for all current and historical data.
  • All-flash storage tier: Caches input for application servers and delivers high bandwidth to meet demand.
  • Specialized application servers: Offering support that ranges from AMD EPYC high-core-count CPUs to GPU-dense systems.

Tiers for less tears

At this point, you might be wondering how one storage system can provide both high performance and vast data stores. The answer: Supermicro’s solution organizes storage into three tiers:

  • All flash: Stores active data that needs the highest speeds of storage and access. This typically accounts for just 10% to 20% of an organization’s data. For the highest bandwidth networking, clusters are connected with either 400 GbE or 400 Gbps InfiniBand. This tier is supported by the Weka data platform, a distributed parallel file system that connects to the object tier.
  • Object: Long-term, capacity-optimized storage that serves as the backing repository behind the all-flash tier. These systems offer high-density drives with relatively low bandwidth, and networking typically in the 100 GbE range. This tier is managed by Quantum ActiveScale Object Storage Software, a scalable, always-on, long-term data repository.
  • Application: This is where your data-intensive workloads, such as machine-learning training, reside. This tier uses 400 Gbps InfiniBand networking to access data in the all-flash tier.

What’s more, the entire architecture is modular: you can adjust the capacity of each tier to match customer needs, and you can deploy different kinds of products, such as open-source or commercial software.
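One way to picture the tiering policy: the hottest data claims the flash tier until its capacity budget is spent, and everything else lands in object storage. Here's a toy Python sketch of that idea; the dataset names, threshold logic and 15% flash budget (within the 10% to 20% active-data range cited below) are illustrative assumptions, not the actual placement policy of Weka or Quantum software:

```python
def place(datasets, flash_fraction=0.15):
    """Assign datasets to storage tiers by access frequency.

    datasets: list of (name, size_bytes, accesses_per_day) tuples.
    flash_fraction: share of total capacity reserved for flash.
    Returns a dict mapping tier name to a list of dataset names.
    """
    total = sum(size for _, size, _ in datasets)
    budget = total * flash_fraction
    tiers = {"flash": [], "object": []}
    used = 0
    # Hottest datasets claim flash first, until the budget runs out.
    for name, size, freq in sorted(datasets, key=lambda d: -d[2]):
        if used + size <= budget:
            tiers["flash"].append(name)
            used += size
        else:
            tiers["object"].append(name)
    return tiers

# Hypothetical workload mix (sizes in TB, accesses per day):
example = [("training-set", 80, 500), ("archive-2019", 700, 1),
           ("checkpoints", 120, 200), ("raw-logs", 100, 5)]
result = place(example)
```

In practice the parallel file system migrates data between tiers automatically; the sketch just shows why a small flash tier can serve most of the active I/O.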

To give you an idea of what’s possible, here’s a real-life example. One of the world’s largest semiconductor makers has deployed the Supermicro reference architecture. Its goal: use AI to automate the detection of chip-wafer defects. Using the reference architecture, the company was able to populate a new installation with 25 PB of data in just three weeks, according to Supermicro.
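To put that figure in perspective, ingesting 25 PB over three weeks implies a sustained write rate of roughly 13 to 14 GB/s. This is a back-of-the-envelope estimate, since the exact duration isn't given:

```python
# Back-of-the-envelope: sustained bandwidth to ingest 25 PB in 3 weeks.
petabyte = 10**15            # bytes (decimal PB, as storage vendors count)
data = 25 * petabyte
seconds = 3 * 7 * 24 * 3600  # three weeks
rate_gbps = data / seconds / 10**9
print(f"{rate_gbps:.1f} GB/s sustained")  # prints "13.8 GB/s sustained"
```

That kind of sustained throughput is exactly what the 400 GbE / 400 Gbps InfiniBand fabric in the all-flash tier is sized to carry.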

Storage galore

Supermicro offers more than just the reference architecture. The company also offers storage servers powered by the latest AMD EPYC processors. These servers can deliver flash storage that is ideal for active data. And they can handle high-capacity storage on physical disks.

That includes the Supermicro Storage A+ Server ASG-2115S-NE332R. It’s a 2U rackmount device powered by an AMD EPYC 9004 series processor with 3D V-Cache technology.

This storage server has 32 bays for E3.S hot-swap NVMe drives. (E3.S is a form factor designed to optimize the flash density of SSDs.) The server’s total storage capacity comes to an impressive 480 TB, and it offers native PCIe 5.0 performance.

Of course, every organization has unique workloads and requirements. Supermicro can help you here, too. Its engineering team stands ready to help you size, design and implement a storage system optimized to meet your customers’ performance and capacity demands.
