Tech Explainer: Why the Rack is Now the Unit


Today’s rack scale solutions can include just about any standard data center component. They can also save your customers money, time and manpower.


Are your data center customers still installing single servers and storage devices instead of full-rack solutions? If so, they need to step up their game. Today, IT infrastructure management is shifting toward rack scale integrations. Increasingly, the rack is the unit.

A rack scale solution can include just about any standard data center component. A typical build combines servers, storage devices, network switches and other rack products like power-management and cooling systems. Some racks are loaded with the same type of servers, making optimization and maintenance easier.

With many organizations developing and deploying resource-intensive AI-enabled applications, opting for fully integrated turnkey solutions that help them become more productive faster makes sense. Supermicro is at the vanguard of this movement.

The Supermicro team is ready and well-equipped to design, assemble, test, configure and deploy rack scale solutions. These solutions are ideal for modern data center workloads, including AI, deep learning, big data and vSAN.

Why rack scale?

Rack scale solutions let your customers bypass the design, construction and testing of individual servers. Instead of spending precious time and money building, integrating and troubleshooting IT infrastructure, they receive rack scale and cluster-level solutions that arrive preconfigured and ready to run.

Supermicro advertises plug-and-play designs. That means your customers need only plug in and connect to their networks, power and optional liquid cooling. After that, it’s all about getting more productivity faster.

Deploying rack scale solutions could enable your customers to reduce or redeploy IT staff, help them optimize their multicloud deployments, and lower their environmental impact and operating costs.

Supermicro + AMD processors = lower costs

Every organization wants to save time and money. Your customers may also need to adhere to stringent environmental, social and governance (ESG) policies to reduce power consumption and battle climate change.

Opting for AMD silicon helps increase efficiency and lower costs. Supermicro’s rack scale solutions feature 4th generation AMD EPYC server processors. These CPUs are designed to shrink rack space and reduce power consumption in your customers’ data centers.

AMD says its EPYC-series processors can:

  • Run resource-intensive workloads with fewer servers
  • Reduce operational and energy costs
  • Free up precious data center space and power, then re-allocate this capacity for new workloads and services

Combined with a liquid-cooling system, Supermicro’s AMD-powered rack scale solutions can help reduce your customer’s IT operating expenses by more than 40%.
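How might those savings pencil out? Here’s a back-of-envelope sketch with purely hypothetical numbers (not Supermicro or AMD figures): consolidating onto fewer, denser servers while liquid cooling cuts facility overhead.

```python
# Back-of-envelope OpEx comparison. All numbers are hypothetical,
# chosen only to illustrate how consolidation plus liquid cooling
# can push savings past the 40% mark.
KWH_PRICE = 0.12  # assumed USD per kWh

def annual_opex(servers, watts_each, pue, maint_each):
    """Yearly power + maintenance cost for a server fleet."""
    kwh = servers * watts_each * 24 * 365 / 1000
    return kwh * pue * KWH_PRICE + servers * maint_each

# Legacy air-cooled fleet vs. a denser, liquid-cooled replacement.
before = annual_opex(servers=100, watts_each=500, pue=1.6, maint_each=800)
after = annual_opex(servers=40, watts_each=700, pue=1.15, maint_each=800)
print(f"${before:,.0f} -> ${after:,.0f}: saved {1 - after / before:.0%}")
```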

More than just the hardware

The right rack scale solution is about more than just hardware. Your customers also need a well-designed, fully integrated solution that has been tested and certified before it leaves the factory.

Supermicro provides value-added services beyond individual components to create a rack scale solution greater than the sum of its parts.

You and your customers can collaborate with Supermicro product managers to determine the best platform and components. That includes selecting optimum power supplies and assessing network topology architecture and switches.

From there, Supermicro will optimize server, storage and switch placement at rack scale. Experienced hardware and software engineers will design, build and test the system. They’ll also install mission-critical software benchmarked to your customer’s requirements.

Finally, Supermicro performs rigorous burn-in tests and delivers thoroughly tested, fully integrated L12 clusters to your customer’s chosen site. It’s a one-stop shop that empowers your customers to maximize productivity from day one.


Supermicro, Vast collaborate to deliver turnkey AI storage at rack scale


Supermicro and Vast Data are jointly offering an AMD-based turnkey solution that promises to simplify and accelerate AI and data pipelines.


Supermicro and Vast Data are collaborating to deliver a turnkey, full-stack solution for creating and expanding AI deployments.

This joint solution is aimed at hyperscalers, cloud service providers (CSPs) and large, data-centric enterprises in fintech, adtech, media and entertainment, chip design and high-performance computing (HPC).

Applications that can benefit from the new joint offering include enterprise NAS and object storage; high-performance data ingestion; supercomputer data access; scalable data analysis; and scalable data processing.

Vast, founded in 2016, offers a software data platform that enterprises and CSPs use for data-intensive computing. The platform is based on a distributed systems architecture, called DASE, that allows a system to run read and write operations at any scale. Vast’s customers include Pixar, Verizon and Zoom.

By collaborating with Supermicro, Vast hopes to extend its market. Currently, Vast sells to infrastructure providers at a variety of scales. Some of its largest customers have built 400 petabyte storage systems, and a few are even discussing systems that would store up to 2 exabytes, according to John Mao, Vast’s VP of technology alliances.

Supermicro and Vast have engaged with many of the same CSPs separately, supporting various parts of the solution. By formalizing this collaboration, they hope to extend their reach to new customers while increasing their sell-through to current customers.

Vast is also looking to the Supermicro alliance to expand its global reach. While most of Vast’s customers today are U.S.-based, Supermicro operates in over 100 countries. It also has the infrastructure to integrate, test and ship 5,000 fully populated racks per month from its manufacturing plants in California, the Netherlands, Malaysia and Taiwan.

There’s also a big difference in size. Where privately held Vast has about 800 employees, publicly traded Supermicro has more than 5,100.

Rack solution

Now Vast and Supermicro have developed a new converged system using Supermicro’s Hyper A+ servers with AMD EPYC 9004 processors. The solution combines Vast’s two separate server roles (compute and data) in a single system.

This converged system is well suited to large service providers, where the typical Supermicro-powered Vast rack configuration will start at about 2PB, Mao adds.

Rack-scale configurations can cut costs by eliminating the need for single-box redundancy. This converged design makes the system more scalable and more cost-efficient.

Under the hood

One highlight of the joint project: It puts Vast’s DASE architecture on Supermicro’s industry-standard servers. Each server will have both the compute and storage functions of a Vast cluster.

At the same time, the architecture is disaggregated via a high-speed Ethernet NVMe fabric. This allows each node to access all drives in the cluster.

The Vast platform is built from units the company calls EBoxes. Each EBox contains 2 kinds of storage servers running in a container environment: the CNode (short for Compute Node) and the DNode (short for Data Node). In a typical EBox, one CNode interfaces with client applications and writes directly to two DNode containers.

In this configuration, Supermicro’s storage servers can act as a hardware building block to scale Vast to hundreds of petabytes. It supports Vast’s requirement for multiple tiers of solid-state storage media, an approach that’s unique in the industry.
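To make the EBox write path concrete, here’s a minimal sketch; the class names model the CNode/DNode roles described above and are illustrative only, not Vast’s actual software.

```python
# Illustrative model of one Vast EBox: a CNode fronting client I/O
# and writing each object to two DNode containers. Hypothetical names;
# not Vast's actual implementation.
class DNode:
    """Data node: owns a slice of the cluster's SSDs."""
    def __init__(self, name):
        self.name = name
        self.store = {}

    def write(self, key, data):
        self.store[key] = data

class CNode:
    """Compute node: interfaces with clients, writes to both DNodes."""
    def __init__(self, dnodes):
        self.dnodes = dnodes

    def put(self, key, data):
        for dnode in self.dnodes:  # one CNode, two DNodes
            dnode.write(key, data)

ebox = CNode([DNode("dnode-a"), DNode("dnode-b")])
ebox.put("frame-0001", b"raw pixels")
print([d.name for d in ebox.dnodes if "frame-0001" in d.store])
```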

CPU to GPU

At the NAB Show, held recently in Las Vegas, Supermicro’s demos included storage servers, each powered by a single-socket AMD EPYC 9004 Series processor.

With up to 128 PCIe Gen 5 lanes, the AMD processor lets the server connect more NVMe SSDs to a single CPU. The Supermicro storage server also supports Nvidia’s GPUDirect Storage protocol, moving data directly from storage to GPU memory via RDMA and essentially bypassing the cluster’s CPUs.
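The lane budget explains why a single socket suffices. Assuming the common x4 link per NVMe SSD and x16 per network card (assumptions; actual lane maps vary by board), the arithmetic looks like this:

```python
# Rough PCIe Gen 5 lane budget for a single-socket EPYC 9004 server.
# Assumes x4 per NVMe SSD and x16 per NIC; real boards vary.
TOTAL_LANES = 128
LANES_PER_SSD = 4
LANES_PER_NIC = 16

nics = 2  # assumed: two RDMA-capable NICs for the NVMe fabric
lanes_for_storage = TOTAL_LANES - nics * LANES_PER_NIC
print(f"Direct-attached NVMe SSDs: {lanes_for_storage // LANES_PER_SSD}")  # 24
```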

If you or your customers are interested in the new Vast solution, get in touch with your local Supermicro sales rep or channel partner. Under the terms of the new partnership, Supermicro is acting as a Vast integrator and OEM. It’s also Vast’s only rack-scale partner.


How CSPs can accelerate the data center


A new webinar, now available on demand, offers cloud service providers an overview of new IDC research, outlines roadblocks, and offers guidelines for future success.


Are you a cloud services provider—or a CSP wannabe—wondering how to expand your data center in ways that will both keep your customers happy and help you turn a profit?

If so, a recent webinar sponsored by Supermicro and AMD can help. Titled “Accelerate Your Cloud: Best Practices for CSPs,” it was moderated by Wendell Wenjen, director of storage market development at Supermicro. Best of all, you can now view this webinar on demand.

Here’s a taste of what you’ll see:

IDC research on CSP buying plans

The webinar’s first speaker is Ashish Nadkarni, group VP and GM of worldwide infrastructure research at IDC. He summarizes new IDC research on technology adoption trends and strategies among service providers.

Sales growth, IDC says, is coming mainly in 4 areas: Infrastructure as a Service (IaaS), hardware (both servers and storage), software and IT services. The good news, Nadkarni adds, is that all 4 can be offered by service providers.

Data centers remain important, Nadkarni says. Not everyone wants to use the public cloud, and not every workload belongs there.

IDC expects that 5 key technologies will be immune to budget cuts:

  • AI and automation
  • Security, risk and compliance
  • Optimization of IT infrastructure and IT operations
  • Back-office applications (HR, SCM and ERP)
  • Customer experience initiatives (for example, chatbots)

Generative AI dominates the conversation, Nadkarni says, and for good reason: IDC expects that this year, GenAI will double the productive use of unstructured data, helping workers discover new insights and knowledge.

Supply-chain issues remain a daunting challenge, IDC finds. Delays can hurt a CSP’s ability to deliver projects, increase the cost of delivering services, and even impair service quality. Owning the supply chain will remain vital.

Other tactics for change, Nadkarni says, include offering a transformation road map; working with a full-stack portfolio provider; and developing a long-term vision for why customers will want to do business with you.

10 steps to data-center scaling

Next up in the webinar is Sim Upadhyayula, VP of solutions enablement at Supermicro. He offers a list of 10 essential steps for scaling a CSP data center.

Topping his list: standardize and scale. There’s no way you can know exactly which workloads will dominate in the future. So be modular. That way, you can scale in smaller increments, keeping customers happy while controlling your costs.

Next on the list: optimize for applications. Unlike big enterprises, most CSPs cannot afford to build application silos. Instead, leading providers will develop an architecture that can cater to all. That means using standard hardware that can later be optimized for specific workloads.

Common challenges

Suresh Andani, AMD’s senior director of product management for server cloud, is up next. He discusses 3 key CSP challenges:

  • Market disruption: Caused by a changing ISV landscape, and by increasing power and cooling costs.
  • Aging infrastructure: Service providers with older systems find them costly to maintain, unable to keep pace with customers’ business demands, and vulnerable to increasingly dangerous security threats.
  • Expanding demands: Customers keep raising the bar on core workloads, including AI, cloud-native applications, digital transformation, the hybrid workforce and security enhancements.

During the webinar’s concluding roundtable discussion, Andani also emphasized the importance of marrying the right infrastructure with your workloads. That way, he said, CSPs can operate efficiently, making the most of their power and compute cycles.

“Work with your vendors to provide the best compute solutions,” Andani of AMD advised. “Later you can offer a targeted infrastructure for high performance compute, another for enterprise workloads, another for gaming, and another for rendering.”

Lean on your providers, he added, to provide the right solution, whether your target is performance or cost.


10 best practices for scaling the CSP data center — Part 1


Cloud service providers, here are best practices—courtesy of Supermicro—to help you design and deploy rack-scale data centers. 


Cloud service providers, here are the first 5 of 10 best practices—courtesy of Supermicro—that you can follow for designing and deploying rack-scale data centers. All are based on Supermicro’s real-world experience with customers around the world.

Best Practice No. 1: First standardize, then scale

First, select a configuration of compute, storage and networking. Then scale these configurations up and down into setups you designate as small, medium and large.

Later, you can deploy these standard configurations at various data centers with different numbers of users, workload sizes and growth estimates.
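In practice, the standard tiers can be as simple as a lookup table. Here’s a hedged sketch; the tier names and counts are invented for illustration.

```python
# Illustrative standard rack tiers (all numbers invented).
RACK_TIERS = {
    "small":  {"compute_nodes": 8,  "storage_nodes": 2, "switches": 2},
    "medium": {"compute_nodes": 16, "storage_nodes": 4, "switches": 2},
    "large":  {"compute_nodes": 32, "storage_nodes": 8, "switches": 4},
}

def pick_tier(expected_users):
    """Map a site's expected load onto a standard tier."""
    if expected_users < 1_000:
        return "small"
    if expected_users < 10_000:
        return "medium"
    return "large"

tier = pick_tier(4_000)
print(tier, RACK_TIERS[tier])  # medium {...}
```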

Best Practice No. 2: Optimize the configuration

Good as Best Practice No. 1 is, it may not work if you handle a very wide range of workloads. If that’s the case, then you may want to instead optimize the configuration.

Here’s how. First, run your software on the rack configuration to determine the best mix of CPU cores, memory, storage and I/O. Then consider setting up different sets of optimized configurations.

For example, you might send AI training workloads to GPU-optimized servers, while running a database application on a standard 2-socket CPU system.
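Expressed as code, that routing decision is just a lookup from workload type to optimized pool; a minimal sketch with hypothetical labels:

```python
# Route each workload to the hardware pool it fits best (illustrative).
POOLS = {
    "gpu-optimized": {"ai-training", "deep-learning"},
    "cpu-2socket": {"database", "web", "analytics"},
}

def route(workload):
    for pool, workloads in POOLS.items():
        if workload in workloads:
            return pool
    return "cpu-2socket"  # default for general-purpose jobs

print(route("ai-training"))  # gpu-optimized
print(route("database"))     # cpu-2socket
```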

Best Practice No. 3: Plan for tech refreshes 

When it comes to technology, the only constant is change. But that doesn’t mean you should just wait around for the latest, greatest upgrade. Instead, do some strategic planning.

That might mean talking with key suppliers about their road maps. What are their plans for transitions, costs, supply chains and more?

Also consider that leading suppliers now let you upgrade some server components without replacing the entire chassis. That reduces waste. It could also help you get more performance from your existing racks, servers and power budget.

Best Practice No. 4: Look for new architectures

New architectures can help you increase power at lower cost. For example, AMD and Supermicro offer data-center accelerators that let you run AI workloads on a mix of GPUs and CPUs, a less costly alternative to all-GPU setups.

To find out if you could benefit from new architectures, talk with your suppliers about running proof-of-concept (PoC) trials of their new technologies. In other words, try before you buy.

Best Practice No. 5: Create a support plan

Sure, you need to run 24x7, but that doesn’t mean you have to pay third parties for all of that. Instead, determine what level of support you can provide in-house. For what remains, you can either staff up or outsource.

When you do outsource, make sure your supplier has tested your software stack before. You want to be sure that, should you have a problem, the supplier will be able to respond quickly and correctly.


10 best practices for scaling the CSP data center — Part 2


Cloud service providers, here are more best practices—courtesy of Supermicro—that you can follow for designing and deploying rack-scale data centers. 


Cloud service providers, here are 5 more best practices—courtesy of Supermicro—that you can follow for designing and deploying rack-scale data centers. All are based on Supermicro’s real-world experience with customers around the world.

Best Practice No. 6: Design at the data-center level

Consider your entire data center as a single unit, complete with its range of both strengths and weaknesses. This will help you tackle such macro-level issues as the separation of hot and cold aisles, forced air cooling, and the size of chillers and fans.

If you’re planning an entirely new data center, remember to include a discussion of cooling tech. Why? Because the physical infrastructure needed for an air-cooled center is quite different than that needed for liquid cooling.

Best Practice No. 7: Understand & consider liquid cooling

We’re approaching the limits of air cooling. A new approach, one based on liquid cooling, promises to keep processors and accelerators running within their design limits.

Liquid cooling can also reduce a data center’s Power Usage Effectiveness (PUE), the ratio of total facility energy to the energy actually consumed by computing equipment. This cooling tech can also minimize the need for HVAC cooling power.
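PUE is total facility energy divided by the energy delivered to IT equipment, so a perfect score is 1.0; everything above that is cooling and other overhead. A quick sketch:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: facility power / IT equipment power.
    1.0 is ideal; the excess is cooling, lighting and other overhead."""
    return total_facility_kw / it_equipment_kw

print(pue(1600, 1000))  # air-cooled example: 1.6
print(pue(1150, 1000))  # liquid-cooled example (assumed): 1.15
```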

Best Practice No. 8: Measure what matters

You can’t improve what you don’t measure. So make sure you are measuring such important factors as your data center’s CPU, storage and network utilization.

Good tools are available that can take these measurements at the cluster level. These tools can also identify both bottlenecks and levels of component over- or under-use.
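Cluster tools aggregate the same signals from every node. As a minimal per-node sketch using the open-source psutil library:

```python
# Per-node utilization snapshot; cluster tools gather the same signals
# fleet-wide. Requires: pip install psutil
import psutil

def snapshot():
    net = psutil.net_io_counters()
    return {
        "cpu_pct": psutil.cpu_percent(interval=1),
        "mem_pct": psutil.virtual_memory().percent,
        "disk_pct": psutil.disk_usage("/").percent,
        "net_sent_mb": net.bytes_sent / 1e6,
        "net_recv_mb": net.bytes_recv / 1e6,
    }

print(snapshot())
```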

Best Practice No. 9: Manage jobs better

A CSP’s data center is typically used simultaneously by many customers. That pretty much means using a job-management scheduler tool.

One tricky issue is over-demand. That is, what do you do if you lack enough resources to satisfy all requests for compute, storage or networking? A job scheduler can help here, too.
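In miniature, handling over-demand is an admission-control problem: start the jobs that fit, queue the rest, and drain the queue as capacity frees up. A simplified sketch:

```python
# Minimal admission-control scheduler: run what fits, queue the rest.
from collections import deque

class Scheduler:
    def __init__(self, total_nodes):
        self.free = total_nodes
        self.waiting = deque()

    def submit(self, job, nodes):
        if nodes <= self.free:
            self.free -= nodes
            print(f"{job}: running on {nodes} nodes ({self.free} free)")
        else:
            self.waiting.append((job, nodes))
            print(f"{job}: queued (needs {nodes}, only {self.free} free)")

    def finish(self, nodes):
        self.free += nodes
        while self.waiting and self.waiting[0][1] <= self.free:
            self.submit(*self.waiting.popleft())

sched = Scheduler(total_nodes=20)
sched.submit("training-run", 16)
sched.submit("nightly-etl", 8)  # over-demand: deferred
sched.finish(16)                # the queued job starts automatically
```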

Best Practice No. 10: Simplify your supply chain

Sure, competition across the industry is a good thing, driving higher innovation and lower prices. But within a single data center, standardizing on just a single supplier could be the winning ticket.

This approach simplifies ordering, installation and support. And if something should go wrong, then you’ll have only the proverbial “one throat to choke.”

Can you still use third-party hardware as appropriate? Sure. And with a single main supplier, that integration should be simpler, too.


Data-center service providers: ready for transformation?


An IDC researcher argues that providers of data-center hosting services face new customer demands that require them to create new infrastructure stacks. Key elements will include rack-scale integration, accelerators and new CPU cores. 


If your organization provides data-center hosting services, brace yourself. Due to changing customer demands, you’re about to need an entirely new infrastructure stack.

So argues Chris Drake, a senior research director at market watcher IDC, in a recently published white paper sponsored by Supermicro and AMD, “The Power of Now: Accelerate the Datacenter.”

In his white paper, Drake asserts that this new data center infrastructure stack will include new CPU cores, accelerated computing, rack-scale integration, a software-defined architecture, and the use of a microservices application environment.

Key drivers

That’s a challenging list. So what’s driving the need for this new infrastructure stack? According to Drake, changing customer requirements.

More specifically, a growing need for hosted IT. For reasons related to cost, security and performance, many IT shops are choosing to retain proprietary workloads on premises and in private-cloud environments.

While some of these IT customers have sufficient capacity in their data centers to host these workloads on prem, many don’t. They’ll rely instead on service providers for a range of hosted IT requirements. To meet this demand, Drake says, service providers will need to modernize.

Another driver: growing customer demand for raw compute power, a direct result of their adoption of new, advanced computing tools. These include analytics, media streaming, and of course the various flavors of artificial intelligence, including machine learning, deep learning and generative AI.

IDC predicts that spending on servers ranging in price from $10K to $250K will rise from a global total of $50.9 billion in 2022 to $97.4 billion in 2027. That would mark a 5-year compound annual growth rate of nearly 14%.
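That rate is easy to verify: the compound annual growth rate is the fifth root of the end-to-start ratio, minus one.

```python
# Check IDC's implied growth rate: $50.9B (2022) to $97.4B (2027).
cagr = (97.4 / 50.9) ** (1 / 5) - 1
print(f"{cagr:.1%}")  # 13.9% -- "nearly 14%"
```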

Under the hood

What will building this new infrastructure stack entail? Drake points to 5 key elements:

  • Higher-performing CPU cores: These include chiplet-based CPU architectures that enable composable hardware designs. Such distributed, composable architectures allow more efficient use of shared resources and more scalable compute performance.
  • Accelerated computing: Core CPU processing will increasingly be supplemented by hardware accelerators, including those for AI. They’ll be needed to support today’s—and tomorrow’s—increasingly diverse range of high-performance and data-intensive workloads.
  • Rack-scale integration: Pre-tested racks can facilitate faster deployment, integration and expansion. They can also enable a converged-infrastructure approach to building and scaling a data center.
  • Software-defined data center technology: In this approach, virtualization concepts such as abstraction and pooling are extended to a data center’s compute, storage, networking and other resources. The benefits include increased efficiency, better management and more flexibility.
  • A microservices application architecture: This approach divides large applications into smaller, independently functional units. In so doing, it enables a highly modular and agile way for applications to be developed, maintained and upgraded.

Plan for change

Rome wasn’t built in a day. Modernizing a data center will take time, too.

To help service providers implement a successful modernization, Drake of IDC offers this 6-point action plan:

1. Develop a transformation road map: Aim to strike a balance between harnessing new technology opportunities on the one hand and being realistic about your time frames, costs and priorities on the other.

2. Work with a full-stack portfolio vendor: You want a solution that’s tailored for your needs, not just an off-the-rack package. “Full stack” here means a complete offering of servers, hardware accelerators, storage and networking equipment—as well as support services for all of the above.

3. Match accelerators to your workloads: You don’t need a Formula 1 race car to take the kids to school. The same goes for your accelerators. Sure, you may have workloads that require super-low latency and equally high throughput. But you’re also likely to be supporting workloads that can take advantage of more affordable CPU-GPU combos. Work with your vendors to match their hardware with your workloads.

4. Seek suppliers with the right experience: Work with tech vendors that know what you need. Look for those with proven track records of helping service providers to transform and scale their infrastructures.

5. Select providers with supply-chain ownership: Ideally, your tech vendors will fully own their supply chains for boards, systems and rack designs such as liquid-cooling systems. That includes managing the vertical integration needed to combine these elements. The right supplier could help you save costs and get to market faster.

6. Create a long-term plan: Plan for the short term, but also look further ahead. Technology isn’t sitting still, and neither should you. Plan for technology refreshes. Ask your vendors for their road maps, and review them. Decide what you can support in-house versus what you’ll need to hand off to partners.


At MWC, Supermicro intros edge server, AMD demos tech advances


Learn what Supermicro and AMD showed at the big mobile world conference in Barcelona. 


This year’s MWC Barcelona, held Feb. 27-29, was a really big show. Over 101,000 people attended from 205 countries and territories. More than 2,700 organizations exhibited, partnered or sponsored. And over 1,100 subject-matter experts made presentations.

Among those many exhibitors were Supermicro and AMD.

Supermicro showed off the company’s new AS-1115SV, a cost-optimized, single-AMD-processor server for the edge data center.

And AMD offered demos of its AI engines, quantum-safe cryptography and more.

Supermicro AS-1115SV

Okay, Supermicro’s full SKU for this system is A+ Server AS-1115SV-WTNRT. That’s a mouthful, but the essence is simple: It’s a 1U short-depth server, powered by a single AMD processor and designed for the edge data center.

The single CPU in question is an AMD EPYC 8004 Series processor with up to 64 cores. Memory maxes out at 576 GB of DDR5, and you also get 3 PCIe 5.0 x16 slots and up to 10 hot-swappable 2.5-inch drive bays.

The server’s intended applications include virtualization, firewall, edge computing, cloud services, and database/storage. Supermicro says the server’s high efficiency and low power envelope make it ideal for both telco and edge applications.

AMD’s MWC demos

AMD gave a slew of demos from its MWC booth. Here are three:

  • 5G Advanced & AI integrated on the same device: Both 5G Advanced and future 6G wireless communication systems require that intensive signal processing and novel AI algorithms run on the same device. AMD demo’d its AI Engines: power-efficient, general-purpose processors that can be programmed to address both the signal-processing and AI requirements of future wireless systems.
  • High-performance quantum-safe cryptography: Quantum computing threatens the security of existing asymmetric (public-key) cryptographic algorithms. This demo showed some powerful alternatives on AMD devices: Kyber, Dilithium and PQShield.
  • GreenRAN 5G on EPYC 8004 Series processors: GreenRAN is an open RAN (radio access network) solution from Parallel Wireless. It’s designed to operate seamlessly across various general-purpose CPUs—including, as this demo showed, the AMD EPYC 8004 family.


Supermicro Adds AI-Focused Systems to H13 JumpStart Program


Supermicro is now letting you validate, test and benchmark AI workloads on its AMD-based H13 systems right from your browser. 


Supermicro has added new AI-workload-optimized GPU systems to its popular H13 JumpStart program. This means you and your customers can validate, test and benchmark AI workloads on a Supermicro H13 system right from your PC’s browser.

The JumpStart program offers remote sessions on fully configured Supermicro systems via SSH, VNC and web-based IPMI. These systems feature the latest AMD EPYC 9004 Series processors with up to 128 ‘Zen 4c’ cores per socket, DDR5 memory, PCIe 5.0 and support for CXL 1.1 peripherals.

In addition to previously available models, Supermicro has added the H13 4U GPU System with dual AMD EPYC 9334 processors and Nvidia L40S AI-focused universal GPUs. This H13 configuration is designed for heavy AI workloads, including applications that leverage machine learning (ML) and deep learning (DL).

3 simple steps

The engineers at Supermicro know the value of your customer’s time. So, they made it easy to initiate a session and get down to business. The process is as simple as 1, 2, 3:

  1. Select a system: Go to the main H13 JumpStart page, then scroll down and click one of the red “Get Access” buttons to browse available systems. Click “Select Access” to pick a date and time slot. On the next page, select the configuration, then press “Schedule” and “Confirm.”
  2. Sign in: Log in with a Supermicro SSO account to access the JumpStart program. If you or your customers don’t already have an account, creating one is both free and easy.
  3. Initiate secure access: When the scheduled time arrives, begin the session by visiting the JumpStart page. Each server includes documentation and instructions to help you get started quickly.

So very secure

Security is built into the program. For instance, the server is not on a public IP address, nor is it directly addressable from the internet. Supermicro sets up a jump server as a proxy, which provides access to only the server you or your customer are authorized to test.

And there’s more. After your JumpStart session ends, the server is manually secure-erased, the BIOS and firmware are re-flashed, and the OS is reinstalled with new credentials. That way, you can be sure any data you’ve sent to the H13 system will disappear once the session ends.

Supermicro is serious about its security policies. However, the company still warns users to keep sensitive data to themselves. The JumpStart program is meant for benchmarking, testing and validation only. In their words, “processing sensitive data on the demo server is expressly prohibited.”

Keep up with the times

Supermicro’s expertly designed H13 systems are at the core of the JumpStart program, with new models added regularly to address typical workloads.

In addition to the latest GPU systems, the program also features hardware focused on evolving data center roles. This includes the Supermicro H13 CloudDC system, an all-in-one rackmount platform for cloud data centers. Supermicro CloudDC systems include single AMD EPYC 9004 series processors and up to 10 hot-swap NVMe/SATA/SAS drives.

You can also initiate JumpStart sessions on Supermicro Hyper Servers. These multi-use machines are optimized for tasks including cloud, 5G core, edge, telecom and hyperconverged storage.

Supermicro Hyper Servers included in the company’s JumpStart program offer single or dual processor configurations featuring AMD EPYC 9004 processors and up to 8TB of DDR5 memory in a 1U or 2U form factor.

Helping your customers test and validate a Supermicro H13 system for AI is now easy. Just get a JumpStart.


For Ansys engineering simulations, check out Supermicro's AMD-powered SuperBlade


The Supermicro SuperBlade powered by AMD EPYC processors provides exceptional memory bandwidth, floating-point performance, scalability and density for technical computing workloads. That’s valuable to your customers who use Ansys software to create complex simulations that help solve real-world problems.

If you have engineering customers, take note. Supermicro and AMD have partnered with Ansys Inc. to create an advanced HPC platform for engineering simulation software.

The Supermicro SuperBlade, powered by AMD EPYC processors, provides exceptional memory bandwidth, floating-point performance, scalability and density for technical computing workloads.

This makes the Supermicro system especially valuable to your customers who use Ansys software to create complex simulations that help solve real-world problems.

The power of simulation

As you may know, engineers design the objects that make up our daily lives—everything from iPhones to airplane wings. Simulation software from Ansys enables them to do it faster, more efficiently and less expensively, resulting in highly optimized products.

Product development requires careful consideration of physics and material properties. Improperly simulating the impact of natural physics on a theoretical structure could have dramatic, even life-threatening consequences.

How bad could it get? Picture the wheels coming off a new car on the highway.

That’s why it’s so important for engineers to have access to the best simulation software operating on the best-designed hardware.

And that’s what makes the partnership of Supermicro, AMD and Ansys so valuable. The result of this partnership is a software/hardware platform that can run complex structural simulations without sacrificing either quality or efficiency.

Wanted: right tool for the job

Product simulations can lead to vital developments, whether artificial heart valves that save lives or green architectures that battle climate change.

Yet complex simulation software is extremely resource-intensive. Running a simulation on under-equipped hardware can be a frustrating and costly exercise in futility.

Even with modern, well-equipped systems, users of simulation software can encounter a myriad of roadblocks. These are often due to inadequate processor frequency and core density, insufficient memory capacity and bandwidth, and poorly optimized I/O.

Best-of-breed simulation software like Ansys Fluent, Mechanical, CFX and LS-DYNA demands a cutting-edge turnkey hardware solution that can keep up, no matter what.

That’s one super blade

In the case of Supermicro’s SuperBlade, that solution leverages some of the world’s most advanced computing tech to ensure stability and efficiency.

The SuperBlade’s 8U enclosure can be equipped with up to 20 compute blades. Each blade may contain up to 2TB of DDR4 memory, two hot-swap drives, AMD Instinct accelerators and 3rd gen AMD EPYC 7003 processors.

The AMD processors include up to 64 cores and 768 MB of L3 cache. All told, the SuperBlade enclosure can contain a total of 1,280 CPU cores.

Optimized I/O comes in the form of 1G, 10G, 25G or 100G Ethernet or 200G InfiniBand. And each node can house up to 2 additional low-profile PCIe 4.0 x16 expansion cards.

The modular design of SuperBlade enables Ansys users to run simultaneous jobs on multiple nodes in parallel. The system is so flexible, users can assign any number of jobs to any set of nodes.

As an added benefit, different blades can be used in the same chassis. This allows workloads to be assigned to wherever the maximum performance can be achieved.

For instance, a user could launch one 4-node parallel job on four nodes and, simultaneously, two 8-node parallel jobs on the remaining 16 nodes. Alternatively, an engineer could run five 4-node parallel jobs on 20 nodes, or ten 2-node parallel jobs on those same 20 nodes.
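A quick sketch makes that flexibility concrete: any mix of parallel jobs is valid as long as the node counts fit within the 20 available blades (a simplification that ignores topology and per-blade differences).

```python
# Does a mix of parallel jobs fit a 20-blade SuperBlade enclosure?
# Simplified: ignores topology and per-blade hardware differences.
TOTAL_NODES = 20

def fits(job_mix):
    """job_mix is a list of node counts, one per parallel job."""
    return sum(job_mix) <= TOTAL_NODES

print(fits([4, 8, 8]))   # True: one 4-node job plus two 8-node jobs
print(fits([4] * 5))     # True: five 4-node jobs
print(fits([2] * 10))    # True: ten 2-node jobs
print(fits([16, 8]))     # False: would need 24 nodes
```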

The bottom line

Modern business leaders must act as both engineers and accountants. With a foot planted firmly on each side, they balance the limitless possibilities of design against the limited cash flow at their discretion.

The Supermicro SuperBlade helps make that job a little easier. Supermicro, AMD and Ansys have devised a way to give your engineering customers the tools they need, yet still optimize data-center footprint, power requirements and cooling systems.

The result is a lower total cost of ownership (TCO), with absolutely no compromise in quality.


Get a better Google on-prem cloud with Supermicro SuperBlade


Supermicro SuperBlade servers powered by AMD EPYC processors are ideal for managing cloud-native workloads and for connecting to the wealth of services the Google Cloud Platform provides.


Everyone’s moved to the public cloud, right? No, not quite.

Sure, many organizations have moved to the cloud for application development and a place to run applications. And why not, since the benefits can include faster time to market, greater efficiencies, increased scalability and lower costs.

Yet many organizations have too many IT systems and processes to “lift and shift” them to the cloud all at once. Instead, their journey to the cloud will likely take months or even years.

In the meantime, some are adopting on-premises clouds. This approach gives them dedicated, bare metal servers, or servers that can be set up with cloud services and capabilities.

One popular approach to an on-premises cloud is Google GDC Virtual. Formerly known as Google Anthos on-prem and bare metal, this solution extends Google’s cloud capabilities and services to an organization’s on-prem data center.

Your customers can use Google GDC Virtual to run new, modernized applications, bring in AI and machine learning workloads, and modernize on-premises applications.

All this should be especially interesting to your customers if they already use the Google Distributed Cloud (GDC). This portfolio of products now includes GDC Virtual, extending Google’s cloud infrastructure and services to the edge and corporate data centers.

More help is here now from Supermicro SuperBlade servers powered by AMD EPYC processors. They’re ideal for managing cloud-native workloads. And for connecting to the wealth of services the Google Cloud Platform provides.

These servers include a bare metal option that delivers many cloud benefits to self-managed Supermicro SuperBlade servers. This offers your customers Bare Metal as a Service (BMaaS) for workloads that include AI inferencing, visual computing, big data and high-performance computing (HPC).

Why on-prem cloud?

With the public cloud such a popular, common solution, why might your customers prefer to run an on-prem cloud? The reasons include:

  • Data security, compliance and sovereignty requirements. For example, privacy regulations may prohibit your customer from running an application in the public cloud.
  • Monolithic application design. Some legacy application architectures don’t align with cloud pricing models.
  • Demand for networking with very low latency. Highly transactional systems, such as those used by banks, benefit from being physically close to their users, data and next-hop processors in the application flow.
  • Protection of legacy investments. Your customer may have already spent a small fortune on on-prem servers, networking gear and storage devices. For them, shifting from CapEx to OpEx—normally one of the big benefits of moving to the cloud—may not be an option.

Using GDC Virtual, your customers can deploy both traditional and cloud-native apps. A single GDC Virtual cluster can support deployments across multiple cloud platforms, including not only Google Cloud, but also AWS and Microsoft Azure.

Super benes

If all this sounds like a good option for your customers, you should also consider Supermicro servers. They’re ideal for managing cloud-native workloads when used as control plane nodes and worker nodes to create a GDC Virtual hybrid cluster.

Here are some of the main benefits your customers can enjoy by using Supermicro SuperBlade servers powered by AMD EPYC processors:

  • Hardware-agnostic: Your customers can leverage existing on-prem SuperBlade servers to drive data-center efficiency.
  • No hypervisor layer overhead: Deploying GDC Virtual on SuperBlade reduces complexity.
  • Rapid deployment: GDC Virtual enables rapid cloud-native application development and delivery. So both developers and dev-ops teams can benefit from increased productivity.
  • Easy manageability: SuperBlade manageability, coupled with GDC Virtual management, enables increased operational efficiency. A dashboard lets you monitor what’s going on.

Under the hood

Supermicro SuperBlade servers are powered by AMD EPYC 7003 Series processors with AMD 3D V-Cache tech. These CPUs, built around AMD’s “Zen 3” core, contain up to 64 cores per socket.

Supermicro offers three AMD-powered SuperBlade models: SAS, SATA and GPU-accelerated. These can be mixed in a single 8U enclosure, a feature Supermicro calls “private cloud in a box.” Each enclosure supports up to 40 single-width GPUs or 20 double-width GPUs.

Each enclosure also contains at least one Chassis Management Module (CMM). This lets sys admins remotely manage and monitor server blades, power supplies, cooling fans and networking switches.

Another Supermicro SuperBlade feature is SuperCloud Composer (SCC). It provides a unified dashboard for administering software-defined data centers.

Have customers who want the benefits of the cloud without moving to the cloud? Suggest that they adopt an on-premises cloud. And tell them how they can do that by running Google GDC Virtual on Supermicro SuperBlade servers powered by AMD EPYC processors.
