How AMD and Supermicro are working together to help you deliver AI

AMD and Supermicro are jointly offering high-performance AI alternatives with superior price and performance.


When it comes to building AI systems for your customers, a certain GPU provider with a trillion-dollar valuation isn’t the only game in town. You should also consider the dynamic duo of AMD and Supermicro, which are jointly offering high-performance AI alternatives with superior price and performance.

Supermicro’s Universal GPU systems are designed specifically for large-scale AI and high-performance computing (HPC) applications. Some of these modular designs come equipped with AMD’s Instinct MI250 Accelerator and have the option of being powered by dual AMD EPYC processors.

AMD, with a newly formed AI group led by Victor Peng, is working hard to enable AI across many environments. The company has developed an open software stack for AI, and it has also expanded its partnerships with AI software and framework suppliers that now include the PyTorch Foundation and Hugging Face.

AI accelerators

In addition, AMD’s Instinct MI300A data-center accelerator is due to ship in this year’s fourth quarter. It’s the successor to AMD’s MI200 series, the company’s first multi-die GPU design, which is based on the CDNA 2 architecture and powers some of today’s fastest supercomputers.

The forthcoming Instinct MI300A is based on AMD’s CDNA 3 architecture for AI and HPC workloads, which uses 5nm and 6nm process tech and advanced chiplet packaging. Under the MI300A’s hood, you’ll find 24 processor cores with Zen 4 tech, as well as 128GB of HBM3 memory that’s shared by the CPU and GPU. And it supports AMD ROCm 5, a production-ready, open source HPC and AI software stack.

Earlier this month, AMD introduced another member of the series, the AMD Instinct MI300X. It replaces three Zen 4 CPU chiplets with two CDNA 3 chiplets to create a GPU-only system. Announced at AMD’s recent Data Center and AI Technology Premiere event, the MI300X is optimized for large language models (LLMs) and other forms of AI.

To accommodate the demanding memory needs of generative AI workloads, the new AMD Instinct MI300X also adds 64GB of HBM3 memory, for a new total of 192GB. This means the system can run large models directly in memory, reducing the number of GPUs needed, speeding performance, and reducing the user’s total cost of ownership (TCO).
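
To put that 192GB in context, here’s a rough, back-of-the-envelope sketch of how many model parameters fit in GPU memory as raw weights. The 2-bytes-per-parameter figure assumes fp16/bf16 weights and ignores activations, KV cache and optimizer state, which add real overhead in practice; these assumptions are ours, not AMD’s.

    # Rough capacity check: model weights only, fp16/bf16 (2 bytes per parameter).
    # Ignores activations, KV cache and optimizer state (assumptions, not AMD figures).
    BYTES_PER_PARAM = 2
    HBM_GB = {"MI300X (192GB)": 192, "128GB part": 128}

    for name, gb in HBM_GB.items():
        max_params_b = gb * 1e9 / BYTES_PER_PARAM / 1e9   # billions of parameters
        print(f"{name}: ~{max_params_b:.0f}B parameters as raw fp16 weights")

The more of a model that fits on a single accelerator, the fewer GPUs it must be sharded across, which is the TCO point the article is making.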

AMD also recently introduced the AMD Instinct Platform, which puts eight MI300X systems and 1.5TB of memory in a standard Open Compute Project (OCP) infrastructure. It’s designed to drop into an end user’s current IT infrastructure with only minimal changes.

All this is coming soon. The AMD MI300A started sampling with select customers earlier this quarter. The MI300X and Instinct Platform are both set to begin sampling in the third quarter. Production of the hardware products is expected to ramp in the fourth quarter.

KT’s cloud

All that may sound good in theory, but how does the AMD + Supermicro combination work in the real world of AI?

Just ask KT Cloud, a South Korea-based provider of cloud services that include infrastructure, platform and software as a service (IaaS, PaaS, SaaS). With the rise of customer interest in AI, KT Cloud set out to develop new XaaS customer offerings around AI, while also developing its own in-house AI models.

However, as KT embarked on this AI journey, the company quickly encountered three major challenges:

  • The high cost of AI GPU accelerators: KT Cloud would need hundreds of thousands of new GPU servers.
  • Inefficient use of GPU resources in the cloud: Few cloud providers offer GPU virtualization due to overhead. As a result, most cloud-based GPUs are visible to only 1 virtual machine, meaning they cannot be shared by multiple users.
  • Difficulty using large GPU clusters: KT is training Korean-language models using literally billions of parameters, requiring more than 1,000 GPUs. But this is complex: Users would need to manually apply parallelization strategies and optimization techniques.

The solution: KT worked with Moreh Inc., a South Korean developer of AI software, and AMD to design a novel platform architecture powered by AMD’s Instinct MI250 Accelerators and Moreh’s software.

The entire AI software stack, from the PyTorch and TensorFlow APIs down to GPU-accelerated primitive operations, was developed by Moreh. This lets the platform overcome the limitations of conventional GPU cloud services and large-scale AI model training.

To use the resulting MoAI platform, users do not need to insert or modify even a single line of existing source code, nor change how they run their PyTorch or TensorFlow programs.
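
To illustrate what "no code changes" means, here is the kind of ordinary PyTorch training step that, per the article, runs as-is. This is a generic sketch with dummy data; it contains nothing Moreh- or MoAI-specific, which is exactly the point.

    # A plain PyTorch training step. Per the article, code like this needs no
    # modification, and no change in how it is launched, to run on the MoAI platform.
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"   # standard device selection
    model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(32, 512, device=device)            # dummy batch
    y = torch.randint(0, 10, (32,), device=device)     # dummy labels

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"loss: {loss.item():.4f}")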

Did it work?

In a word, yes. To test the setup, KT developed a Korean language model with 11 billion parameters. Training was then done on two machines: one using Nvidia GPUs, the other being the AMD/Moreh cluster equipped with AMD Instinct MI250 accelerators, Supermicro Universal GPU systems, and the Moreh AI platform software.

Compared with the Nvidia system, the Moreh solution with AMD Instinct accelerators delivered 116% of the baseline throughput (measured in tokens trained per second) and 2.05x higher cost-effectiveness (measured as throughput per dollar).
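
Taken together, those two figures also imply a relative cost. A quick back-of-the-envelope check, treating "cost" as whatever dollar basis KT Cloud used (the article doesn’t specify):

    # What the cited ratios imply about relative cost (baseline = Nvidia system).
    rel_throughput = 1.16           # 116% of baseline tokens/sec
    rel_cost_effectiveness = 2.05   # 2.05x baseline throughput per dollar

    # cost-effectiveness = throughput / cost  =>  cost = throughput / cost-effectiveness
    rel_cost = rel_throughput / rel_cost_effectiveness
    print(f"Implied relative cost: ~{rel_cost:.2f}x the baseline")   # ~0.57x

In other words, the AMD/Moreh cluster delivered more throughput at a bit over half the baseline cost, on whatever cost basis KT Cloud measured.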

Other gains are expected, too. “With cost-effective AMD Instinct accelerators and a pay-as-you-go pricing model, KT Cloud expects to be able to reduce the effective price of its GPU cloud service by 70%,” says JooSung Kim, VP of KT Cloud.

Based on this test, KT built a larger AMD/Moreh cluster of 300 nodes—with a total of 1,200 AMD MI250 GPUs—to train the next version of the Korean language model with 200 billion parameters.

It delivers a theoretical peak performance of 434.5 petaflops for fp16/bf16 (a native 16-bit format for mixed-precision training) matrix operations. That should make it one of the top-tier GPU supercomputers in the world.
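
That 434.5-petaflop figure is consistent with straightforward multiplication, assuming the commonly published peak FP16/BF16 matrix rate of roughly 362 TFLOPS per MI250; the per-GPU rate is our assumption, not something stated in the article.

    # Sanity check on theoretical peak, assuming ~362.1 TFLOPS of FP16/BF16
    # matrix throughput per AMD Instinct MI250 (published peak spec).
    gpus = 1200                    # 300 nodes x 4 MI250s per node
    tflops_per_gpu = 362.1         # assumed per-GPU peak

    peak_pflops = gpus * tflops_per_gpu / 1000   # TFLOPS -> PFLOPS
    print(f"Theoretical peak: {peak_pflops:.1f} PFLOPS")   # ~434.5 PFLOPS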


Tech Explainer: Green Computing, Part 2 — Holistic strategies

Holistic green computing strategies can help both corporate and individual users make changes for the better.


Green computing allows us to align the technology that powers our lives with the sustainability goals necessary to battle the climate crisis.

In Part 1 of our Tech Explainer on green computing, we looked at data-center architecture best practices and component-level green engineering. Now we’ll investigate holistic green computing strategies that can help both corporate and individual users change for the better.

Green manufacturing and supply chain

The manufacturing process can account for up to 70% of the natural resources used in the lifecycle of a PC, server or other digital device. And an estimated 76% of all global trade passes through a supply chain. So it’s more important than ever to reform processes that could harm the environment.

AMD’s efforts to advance environmental sustainability in partnership with its suppliers are a step in the right direction. AMD’s supply chain is currently on track to meet two important goals: that 80% of its suppliers source renewable energy, and that 100% make public their emissions-reduction goals, both by 2025.

To reduce the environmental impact of IT manufacturing, tech providers are replacing the toxic chemicals used in computer manufacturing with alternatives that are more environmentally friendly.

Materials such as the brominated flame retardants found in plastic casings are giving way to eco-friendly, non-toxic silicone compounds. Traditional non-recyclable plastic parts are being replaced by parts made from both bamboo and recyclable plastics, such as polycarbonate resins. And green manufacturers are working to eliminate other toxic chemicals, including lead in solder and cadmium and selenium in circuit boards.

Innovation in green manufacturing can identify and improve hundreds, if not thousands, of industry-standard practices. However small a single improvement may seem, applied across the manufacture of millions of devices it can make a big difference.

Green enterprise

Today’s enterprise data-center managers are working to maximize server performance while also minimizing their environmental impact. Leading-edge green methodologies include two important moves: reducing power usage at the server level and extending hardware lifecycles to create less waste.

Supermicro, an authority on energy-efficient data center design, is empowering this movement by creating new servers engineered for green computing.

One such server is Supermicro’s 4-node BigTwin. The BigTwin features disaggregated server architecture that reduces e-waste by enabling subsystem upgrades.

As technology improves, IT managers can replace components like the CPU, GPU and memory. This extends the life of the chassis, power supplies and cooling systems that might otherwise end up in a landfill.

Twin and Blade server architectures are more efficient because they share power supplies and fans. This can significantly lower their power usage, making them a better choice for green data centers.

The upgraded components that go into these servers now include high-efficiency processors like the AMD EPYC 9654. The infographic below, courtesy of AMD, shows how 4th Gen AMD EPYC processors can power 2,000 virtual machines using up to 35% fewer servers than the competition:

[Infographic, courtesy of AMD: 4th Gen AMD EPYC server consolidation and energy savings]

As shown, the potential result is up to 29% less energy consumed annually. That kind of efficiency can save an estimated 35 tons of carbon dioxide—the equivalent of 38 acres of U.S. forest carbon sequestration every year.

Green data centers also employ advanced cooling systems. For instance, Supermicro’s servers include optional liquid cooling. Using fluid to carry heat away from critical components allows IT managers to lower fan speeds inside each server and reduce HVAC usage in data centers.

Deploying efficient cooling systems like these lowers a data center’s Power Usage Effectiveness (PUE), thus reducing carbon emissions from power generation.
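
PUE is simply the ratio of total facility energy to the energy consumed by the IT equipment itself, so anything that shrinks cooling overhead pushes it toward the ideal of 1.0. A small sketch with purely illustrative numbers (not measurements from any specific facility):

    # Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        return total_facility_kwh / it_equipment_kwh

    # Illustrative only: the same 1,000 kWh of IT load under two cooling regimes.
    print(pue(1600, 1000))   # 1.6 -- heavy HVAC overhead, typical of older air-cooled rooms
    print(pue(1100, 1000))   # 1.1 -- liquid cooling shoulders more of the heat removal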

Changing for the better, together

No single person, corporation or government can stave off the worst effects of the climate crisis. If we are to win this battle, we must work together.

Engineers, industrial designers and data scientists have their work cut out for them. By fueling the evolution of green computing, they—and their corporate managers—can provide us with the tools we need to go green and safeguard our environment for generations to come.


Tech Explainer: Green Computing, Part 1 - What does the data center demand?

The ultimate goal of Green Computing is net-zero emissions. To get there, organizations can and must innovate, conducting an ongoing campaign to increase efficiency and reduce waste.


The Green Computing movement has begun in earnest and not a moment too soon. As humanity faces the existential threat of climate crisis, technology needs to be part of the solution. Green computing is a big step in the right direction.

The ultimate goal of Green Computing is net-zero emissions. It’s a symbiotic relationship between technology and nature in which both SMBs and enterprises can offset carbon emissions, drastically reduce pollution, and reuse/recycle the materials that make up their products and services.

To get there, the tech industry will need to first take a long, hard look at the energy it uses and the waste it produces. Using that information, individual organizations can and must innovate, conducting an ongoing campaign to increase efficiency and reduce waste.

It’s a lofty goal, sure. But after all the self-inflicted damage we’ve done since the dawn of the Industrial Revolution, we simply have no choice.

The data-center conundrum

All digital technology requires electricity to operate. But data centers use more than their share.

Here’s a startling fact: Each year, the world’s data centers gobble up at least 200 terawatt-hours of electricity. That’s roughly 2% of all the electricity used on this planet annually.

What’s more, that figure is likely to increase as new, power-hungry systems are brought online and new data centers are opened. And the number of global data centers could grow from 700 in 2021 to as many as 1,200 by 2026, predicts Supermicro.

At that rate, data-center energy consumption could account for up to 8% of global energy usage by 2030. That’s why tech leaders including AMD and Supermicro are rewriting the book on green computing best practices.

A Supermicro white paper, Green Computing: Top 10 Best Practices For A Green Data Center, suggests specific actions you and your customers can take now to reduce the environmental impact of your data centers:

  • Right-size systems to match workload requirements
  • Share common scalable infrastructure
  • Operate at higher ambient temperature
  • Capture heat at the source via aisle containment and liquid cooling
  • Optimize key components (e.g., CPU, GPU, SSD) for workload performance per watt
  • Optimize hardware refresh cycle to maintain efficiency
  • Optimize power delivery
  • Utilize virtualization and power management
  • Source renewable energy and green manufacturing
  • Consider climate impact when making site selection

Green components

Rethinking data-center architectures is an excellent way to leverage green computing from a macro perspective. But to truly make a difference, the industry needs to consider green computing at the component level.

This is one area where AMD is leading the charge. Its mission: increase the energy efficiency of its CPUs and hardware accelerators. The rest of the industry should follow suit.

In 2021 AMD announced its goal to deliver a 30x increase in energy efficiency for both AMD EPYC CPUs and AMD Instinct accelerators for AI and HPC applications running on accelerated compute nodes—and to do so by 2025.

Taming AI energy usage

The golden age of AI has begun. New machine learning algorithms will give life to a population of hyper-intelligent robots that will forever alter the nature of humanity. If AI’s most beneficent promises come to fruition, it could help us live, eat, travel, learn and heal far better than ever before.

But the news isn’t all good. AI has a dark side, too. Part of that dark side is its potential impact on our climate crisis.

Researchers at the University of Massachusetts, Amherst, illustrated this point by performing a life-cycle assessment for training several large AI models. Their findings, cited by Supermicro, concluded that training a single AI model can emit more than 626,000 pounds of carbon dioxide. That’s approximately 5 times the lifetime emissions of the average American car.

A comparison like that helps put AMD’s environmental sustainability goals in perspective. Effecting a 30x increase in the energy efficiency of the components that power AI could bring some much-needed light to AI’s dark side.

In fact, if the whole technology sector produces practical innovations similar to those from AMD and Supermicro, we might have a fighting chance in the battle against the climate crisis.

Continued…

Part 2 of this 3-part series will take a closer look at the technology behind green computing—and the world-saving innovations we could see soon.

 


Bergamo: a deeper dive into AMD’s new EPYC processor for cloud-native workloads

Bergamo is AMD’s first-ever server processor designed specifically for cloud-native workloads. Learn how it works.  

 


Bergamo is the former codename for AMD’s new 4th gen EPYC 97X4 processors optimized for cloud-native workloads, which the company introduced earlier this month.

AMD is responding to the increasingly specialized nature of data center workloads by optimizing its server processors for specific workloads. This month AMD introduced two examples: Bergamo (97X4) for cloud and Genoa-X (9XX4X) for technical computing.

The AMD EPYC 97X4 processors are AMD’s first-ever designed specifically for cloud-native workloads. And they’re shipping now in volume to AMD’s hyperscale customers that include Facebook parent company Meta and partners including Supermicro.

Speaking of Supermicro, that company this week announced that the new AMD EPYC 97X4 processors can now be included in its entire line of Supermicro H13 AMD-based systems.

Zen mastery

The main difference between the AMD EPYC 97X4 and AMD’s general-purpose Genoa series processors comes down to the core chiplet. The 97X4 CPUs use a new design called “Zen 4c.” It’s an update on the AMD Zen 4 core used in the company’s Genoa processors.

Where AMD’s original Zen 4 was designed for the highest performance per core, the new Zen 4c has been designed for a sweet spot of both density and power efficiency.

As AMD CEO Lisa Su explained during the company’s recent Data Center and AI Technology Premiere event, AMD achieved this by starting with the same RTL design as Zen 4. AMD engineers then optimized this physical layout for power and area. They also redesigned the L3 cache hierarchy for greater throughput.

The result: a design that takes up about 35% less area yet offers substantially better performance per watt.

Because they start from the Zen 4 design, the new 97X4 processors are both software- and platform-compatible with Genoa. The idea is that end users can mix and match 97X4- and Genoa-based servers, depending on their specific workloads and computing needs.

Basic math

Another difference is that where Genoa processors offer up to 96 cores per socket, the new 97X4 processors offer up to 128.

Here’s how it’s done: Each AMD 97X4 system-on-chip (SoC) contains 8 core complex dies (CCDs). In turn, each CCD contains 16 Zen 4c cores. So 8 CCDs x 16 cores = a total of 128 cores.

AMD offers the new EPYC 97X4 series in three SKUs.

For security, all 3 SKUs support AMD Infinity Guard, a suite of hardware-level security features, and AMD Infinity Architecture, which lets system builders and cloud architects get maximum power while still ensuring security.

Are your customers looking for servers to handle their cloud-native applications? Tell them to look into the new AMD EPYC 97X4 processors.


AMD intros CPUs, cache, AI accelerators for cloud, enterprise data centers

AMD strengthens its commitment to the cloud and enterprise data centers with new "Bergamo" CPUs, "Genoa-X" cache, Instinct accelerators.


This week AMD strengthened its already strong commitment to the cloud and enterprise markets. The company announced several new products and partnerships at its Data Center and AI Technology Premiere event, which was held in San Francisco and simultaneously broadcast online.

“We’re focused on pushing the envelope in high-performance and adaptive computing,” AMD CEO Lisa Su told the audience, “creating solutions to the world’s most important challenges.”

Here’s what’s new:

Bergamo: That’s the former codename for the new 4th gen AMD EPYC 97X4 processors. AMD’s first processor designed specifically for cloud-native workloads, it packs up to 128 cores per socket using AMD’s new Zen 4c design to deliver strong performance per watt. Each socket contains 8 chiplets, each with up to 16 Zen 4c cores; that’s twice as many cores per chiplet as AMD’s earlier Genoa processors (yet the two lines are compatible). The entire lineup is available now.

Genoa-X: Another codename, this one for AMD’s new generation of 3D V-Cache technology. Designed specifically for technical computing such as engineering simulation, it supports more than 1GB of L3 cache on a 96-core CPU. That cache is paired with the 4th gen AMD EPYC processor and its high-performing Zen 4 cores to deliver high performance per core.

“A larger cache feeds the CPU faster with complex data sets, and enables a new dimension of processor and workload optimization,” said Dan McNamara, an AMD senior VP and GM of its server business.

In all, there are 4 new Genoa-X SKUs, ranging from 16 to 96 cores, and all socket-compatible with AMD’s Genoa processors.

Genoa: Technically not new, as this family of data-center CPUs was introduced last November. What is new is AMD’s focus for these processors on AI, data-center consolidation and energy efficiency.

AMD Instinct: Though AMD had already introduced its Instinct MI300 Series accelerator family, the company is now revealing more details.

This includes the introduction of the AMD Instinct MI300X, an advanced accelerator for generative AI based on AMD’s CDNA 3 accelerator architecture. It will support up to 192GB of HBM3 memory to provide the compute and memory efficiency needed for large language model (LLM) training and inference for generative AI workloads.

AMD also introduced the AMD Instinct Platform, which brings together eight MI300X accelerators into an industry-standard design for the ultimate solution for AI inference and training. The MI300X is sampling to key customers starting in Q3.

Finally, AMD also announced that the AMD Instinct MI300A, an APU accelerator for HPC and AI workloads, is now sampling to customers.

Partner news: Mark your calendar for June 20. That’s when Supermicro plans to explore key features and use cases for its Supermicro H13 systems based on AMD EPYC 9004 series processors. These Supermicro systems will feature AMD’s new Zen 4c architecture and 3D V-Cache tech.

This week Supermicro announced that its entire line of H13 AMD-based systems is now available with support for the 4th gen AMD EPYC processors with Zen 4c architecture and V-Cache technology.

That includes Supermicro’s new 1U and 2U Hyper-U servers designed for cloud-native workloads. Both are equipped with a single AMD EPYC processor with up to 128 cores.


Absolute Hosting finds the sweet spot with AMD-powered Supermicro servers

Absolute Hosting, a South African provider of hosting services to small and midsize businesses, sought to upgrade its hardware, improve its performance, and lower its costs. The company achieved all three goals with AMD-powered Supermicro servers.


Some brands are so strong, customers ask for them by name. They ask for a Coke when thirsty, click on Amazon.com when shopping online, visit a Tesla showroom when thinking of buying an electric car.

For Absolute Hosting Ltd., a South Africa-based provider of hosting and other digital services for small and midsize businesses (SMBs), it’s not one brand, but two: Supermicro and AMD. More specifically, the combination of Supermicro servers powered by AMD EPYC processors.

“Clients who have switched over to us have been amazed by the performance of our AMD EPYC-powered servers,” says Jade Benson, the founder of Absolute Hosting and now its managing director.

Benson and his colleagues find the Supermicro-AMD combination so powerful that they sell it by name: check out Absolute Hosting's website, and you’ll see both brands called out explicitly.

SMB specialists

It wasn’t always the case. Back in 2011, when Benson founded Absolute Hosting, the company served local South African tech resellers. Five years later, in 2016, the company shifted its focus to offering hosting and virtual server services to local SMBs.

One of those services is virtual private server (VPS) hosting. VPS hosting provides dedicated resources to each customer’s website, allowing for more control, customization and scalability than shared hosting. That makes it ideal for businesses that require lots of resources, handle high traffic, or need a great deal of control over their hosting environment.

Today Absolute Hosting owns about 100 physical servers and manages roughly 300 VPS servers for clients. The company also supplies its 5,000 clients with other hosting services, including Linux web, WordPress and email.

‘We kept seeing AMD’

Absolute Hosting’s shift to AMD-powered Supermicro servers was driven by its own efforts to refresh and upgrade its hardware, improve its performance and lower its own costs. Initially, the company rented dedicated servers from a provider that relied exclusively on Supermicro hardware.

“So when we decided to purchase our own hardware, we made it a requirement to use Supermicro,” Benson says. “And we kept seeing AMD as the recommended option.”

The new servers were a quick success. Absolute Hosting tested them with key benchmarks, including Cinebench, a cross-platform test suite, and Passmark, which compares the performance of CPUs. And it found them leading in every test.

Absolute Hosting advertised the new offering on social media and quickly had enough business for 100 VPS servers. The company ran a public beta for customers and allowed the local IT community to conduct their own stress tests.

“The feedback we received was phenomenal,” Benson says. “Everyone was blown away.”

Packing a punch

Absolute Hosting’s solution is based on Supermicro’s AS-2115GT-HNTF GrandTwin server. It packs four hot-pluggable nodes into a 2U rackmount form factor.

Each node includes an AMD EPYC CPU; 12 memory slots for up to 3TB of DDR5 memory; flexible bays for storage or I/O; and up to four hot-swap 2.5-inch NVMe/SATA storage drives.

Absolute Hosting currently uses the AMD EPYC 7003 Series processors. But the Supermicro server now supports the 4th gen AMD EPYC 9004 Series processors, and Benson plans to move to them soon.

Benson considers the AMD-powered Supermicro servers a serious competitive advantage. “There are only a few people we don’t tell about AMD,” he says. “That’s our competitors.”


How Ahrefs speeds SEO services with huge compute, memory & storage

Ahrefs, a supplier of search engine optimization tools, needed more robust tech to serve its tens of thousands of customers and crawl billions of web pages daily. The solution: More than 600 Supermicro Hyper servers powered by AMD processors and loaded with huge memory and storage.


Wondering how to satisfy customers who need big—really big—compute and storage? Take a tip from Ahrefs Ltd.

This company, based in Singapore, is a 10-year-old provider of search engine optimization (SEO) tools.

Ahrefs has a web crawler that processes up to 8 billion pages a day. That makes Ahrefs one of the world’s biggest web crawlers, up there with Google and Bing, according to internet hub Cloudflare Radar.

What’s more, Ahrefs’ business has been booming. The company now has tens of thousands of users.

That’s good news. But it also meant that to serve these customers, Ahrefs needed more compute power and storage capacity. And not just a little more. A lot.

Ahrefs also realized that its current generation of servers and CPUs couldn’t meet this rising demand. Instead, the company needed something new and more powerful.

Gearing up

For Ahrefs, that something new is its recent order of more than 600 Supermicro servers. Each system is equipped with dual 4th Gen AMD EPYC 9004 Series processors, a whopping 1.5TB of DDR5 memory, and a massive 120+ TB of storage.

More specifically, Ahrefs selected Supermicro’s AS-2125HS-TNR servers. They’re powered by dual AMD EPYC 9554 processors, each with 64 cores and 128 threads, running at a base clock speed of 3.1 GHz and an all-core boost speed of 3.75 GHz.

For Ahrefs’ configuration, each Supermicro server also contains eight NVMe 15.3 TB SSD storage devices, for a storage total of 122 TB. Also, each server communicates with the Ahrefs data network via two 100 Gbps ports.
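
Multiplied out, the per-server and fleet-level numbers look roughly like this; the 600-server figure is an assumption based on the article’s "more than 600," so treat the fleet totals as a floor.

    # Per-server and fleet totals, assuming exactly 600 servers (article: "more than 600").
    servers = 600
    cores_per_server = 2 * 64          # dual AMD EPYC 9554, 64 cores each
    threads_per_server = 2 * 128       # 128 threads per CPU
    storage_tb_per_server = 8 * 15.3   # eight 15.3 TB NVMe SSDs ~= 122.4 TB
    memory_tb_per_server = 1.5

    print(f"Per server: {cores_per_server} cores / {threads_per_server} threads, "
          f"{storage_tb_per_server:.1f} TB NVMe, {memory_tb_per_server} TB DDR5")
    print(f"Fleet (600): {servers * cores_per_server:,} cores, "
          f"{servers * storage_tb_per_server / 1000:.1f} PB NVMe, "
          f"{servers * memory_tb_per_server:,.0f} TB DDR5")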

Did it work?

Yes. Ahrefs’ response times got faster, even as its volume increased. The company can now offer more services to more customers. And that means more revenue.

Ahrefs’ founder and CEO, Dmitry Gerasimenko, puts it this way: “Supermicro’s AMD-based servers were an ideal fit for our business.”

How about you? Have customers who need really big compute and storage? Tell them about Ahrefs.

 


Try before you buy with Supermicro’s H13 JumpStart remote access program

The Supermicro H13 JumpStart Remote Access program lets you and your customers test data-center workloads on Supermicro systems based on 4th Gen AMD EPYC 9004 Series processors. Even better, the program is free.


You and your customers can now try out systems based on 4th Gen AMD EPYC 9004 Series processors at no cost with the Supermicro remote access program.

Called H13 JumpStart, the free program offers remote access to Supermicro’s top-end H13 systems.

Supermicro’s H13 systems are designed for today’s advanced data-center workloads. They feature 4th Gen AMD EPYC 9004 Series processors with up to 96 Zen 4 cores per socket, DDR5 memory, PCIe 5.0, and support for Compute Express Link (CXL) 1.1+ peripherals.

The H13 JumpStart program lets you and your customers validate, test and benchmark workloads on either of two Supermicro systems:

●      Hyper AS-2025HS-TNR: Features dual AMD EPYC processors, 24 DIMMs, up to 3 accelerator cards, an AIOM network adapter, and 12 hot-swap NVMe/SAS/SATA drive bays.

●      CloudDC AS-2015CS-TNR: Features a single AMD processor, 12 DIMMs, 4 accelerator cards, dual AIOM network adapters, and a 240GB solid state drive.

Simple startup

Getting started with Supermicro’s H13 JumpStart program is simple. Just sign up with your name, email and a brief description of what you plan to do with the system.

Next, Supermicro will verify your information and your request. Assuming you qualify, you’ll receive a welcome email from Supermicro, and you’ll be scheduled to gain access to the JumpStart server.

Next, you’ll be given a unique username, password and URL to access your JumpStart account.

Run your test. Once you’re done, Supermicro will also ask you to complete a quick survey for your feedback on the program.

Other details

The JumpStart program does have a few limitations. One is the number of sessions you can have open at once. Currently, it’s limited to 1 VNC (virtual network computing), 1 SSH (secure shell), and 1 IPMI (intelligent platform management interface) session per user.

Also, the JumpStart test server is not directly addressable from the internet. However, the servers can reach out to the internet to retrieve files.

You should test with JumpStart using anonymized data only. That’s because the Supermicro server’s security policies may differ from those of your organization.

But rest assured, once you’re done with your JumpStart demo, the server storage is manually erased, the BIOS and firmware are reflashed, and the OS is re-installed with new credentials. So your data and personal information are completely removed.

Get started

Ready to get a jump-start with Supermicro’s H13 JumpStart Remote Access program? Apply now to secure access.

Want to learn more about Supermicro’s H13 system portfolio? Check out a 5-part video series featuring Linus Sebastian of Linus Tech Tips. He takes a deep dive into how these Supermicro systems run faster and greener. 

 


Research roundup: PICaaS rising, IT spending stays strong, new data-center components emerge

Do you know how the latest IT market research could help you and your business?


It’s time to consider performance intensive computing as a service. Get ready for a modest spending surge. And be on the lookout for new data-center components.

Those are takeaways from the latest in IT market research and analysis. And here’s your tech partner’s roundup.

Performance intensive computing: now as a service

If you don’t offer cloud-based performance intensive computing as a service, you might want to consider doing so. The market, already big, is growing fast.

Sales of performance intensive computing as a service (PICaaS) will rise from $22.3 billion worldwide in 2021 to $103 billion by 2027, predicts market watcher IDC. That’s a compound annual growth rate (CAGR) of nearly 28%.
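
For reference, that growth rate follows from the standard CAGR formula. A quick check, assuming 2021 and 2027 as the endpoints (IDC’s exact endpoints and rounding may differ slightly):

    # Compound annual growth rate: CAGR = (end / start) ** (1 / years) - 1
    start_billion, end_billion, years = 22.3, 103.0, 6   # 2021 -> 2027
    cagr = (end_billion / start_billion) ** (1 / years) - 1
    print(f"CAGR: {cagr:.1%}")   # roughly 29%, in the same ballpark as IDC's cited figure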

With PICaaS, customers use public cloud services to run the mathematically intensive computations needed for AI, HPC, big data analytics, and engineering and technical applications.

Driving the market are two factors, IDC says. One, performance intensive computing is going mainstream and is increasingly mission critical. And two, a growing number of businesses define themselves as digital.

What can you do to get ready for this market? Among other tactics, IDC recommends that suppliers formulate an end-to-end bundled PICaaS offering, demonstrate a secure cloud infrastructure, and become trusted advisors of hybrid development models.

Strong IT spending — this year and next

What kind of year will 2023 shape up to be? If your customers are like most, pretty good. Overall IT spending will rise this year by 5.5%, reaching a grand total of $4.6 trillion, predicts analyst firm Gartner, and some segments will rise by much more.

But what about sales dips, tech layoffs and other financial issues? “Macroeconomic headwinds are not slowing digital transformation,” insists Gartner analyst John-David Lovelock. “IT spending will remain strong.”

On the hardware front, Gartner expects data center systems sales worldwide this year to rise by less than 4%. Next year looks better with a projected rise of about 6%.

IT services are in demand. Sales will rise by just over 9% this year, Gartner forecasts, and by about 10% next year.

Devices such as PCs and smartphones are a weak point, with sales projected to drop by nearly 5% this year after tumbling nearly 11% last year. Next year, sales should pick up, Gartner expects, rising an impressive 11%.

New components coming to customer data centers

Have you and your data-center customers spoken yet about three components—SmartNICs, data processing units (DPUs) and infrastructure processing units (IPUs)?

If not, you probably will soon, according to ABI Research. Demand for these components is being driven by two factors: specialized workloads such as AI, IoT and 5G; and the rise of cloud hyperscalers such as AWS, Azure and Google Cloud.

“Organizations are exploring the feasibility of running specific applications that require high processing power on public-cloud data centers to ensure business continuity,” says ABI analyst Yih-Khai Wong.

Big opportunities include networks, cloud platforms and security. For example, AMD’s Xilinx Alveo line of adaptable accelerator cards includes the industry’s first software-defined, hardware-accelerated SmartNIC.

To be sure, the shift is still in its early stages. But Wong says servers equipped by default with SmartNICs, DPUs or IPUs are coming “sooner rather than later.”

 


How rackscale integration can help your customers get productive faster

Supermicro’s rack integration and deployment service can help your customers get productive sooner.

 


How would your key data-center customers like to improve their server performance, speed their rate of innovation, and lower their organization’s environmental impact—all while getting productive sooner?

Those are among the key benefits of Supermicro’s rack integration and deployment service. It’s essentially a one-stop shop: a defined process, guided by Supermicro experts, for designing and building an effective, efficient cloud or enterprise hardware solution.

Supermicro’s dedicated team can provide everything from early design to onsite integration. That includes design, assembly, configuration, testing and delivery.

Hardware covered by Supermicro’s rack integration service includes servers, storage, switches and rack products. That includes systems based on the latest 4th Generation AMD EPYC server processors. Supermicro’s experts can also work closely with your customer to design a test plan that includes application loading, performance tuning and testing.

All this can support a wide range of optimized solutions, including AI and deep learning, big data and Hadoop refreshes, and vSAN.

Customers of Supermicro’s rackscale systems can also opt for liquid cooling, which can reduce their operating expenses by more than 40%. And by lowering fan speeds, liquid cooling further reduces power needs, delivering a PUE (power usage effectiveness) of close to 1.0. All that typically provides an ROI in just one year, according to Supermicro.
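
Here’s a rough illustration of how a lower PUE flows through to annual energy costs. All inputs are hypothetical and for arithmetic only; they are not Supermicro figures, and the real-world savings behind the 40% claim also include server-level fan power and other factors.

    # Hypothetical annual energy cost for a fixed IT load under two PUE values.
    it_load_kw = 500          # constant IT equipment draw (assumed)
    hours_per_year = 8760
    usd_per_kwh = 0.12        # assumed electricity price

    def annual_cost(pue: float) -> float:
        return it_load_kw * pue * hours_per_year * usd_per_kwh

    air = annual_cost(1.6)     # assumed air-cooled PUE
    liquid = annual_cost(1.1)  # assumed liquid-cooled PUE
    print(f"Air-cooled:    ${air:,.0f}/yr")
    print(f"Liquid-cooled: ${liquid:,.0f}/yr")
    print(f"Savings:       ${air - liquid:,.0f}/yr ({1 - liquid / air:.0%})")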

Five-phase integration

When your customers work with Supermicro on rack integration, they’ll get support through 5 phases:

  • Design: Supermicro learns your customer’s business problems and requirements, develops a proof-of-concept to validate the solution, then selects the most suitable hardware and works with your customer on power requirements and budgets. Then it creates a bill of materials, followed by a detailed rack-level engineering diagram.
  • Assembly: Supermicro technicians familiar with the company’s servers assemble the system, either on your customer’s site or pre-shipment at a Supermicro facility. This includes all nodes, racks, cabling and third-party equipment.
  • Configuration: Each server’s BIOS is updated, optimized and tested. Firmware gets updated, too. OSes and custom images are pre-installed or deployed to specific nodes as needed.
  • Testing: This includes a performance analysis, a check for multi-vendor compatibility, and full rack burn-in testing for a standard 8 hours.
  • Logistics: Supermicro ships the complete system to your customer’s site, can install it, and provides ongoing customer service.

Big benes

For your customers, the benefits of working with Supermicro and AMD can include better performance per watt and per dollar, faster time to market with IT innovation, a reduced environmental impact, and lower costs.

Further, once the system is installed, Supermicro’s support can significantly reduce lead times to fix system issues. The company keeps the whole process from L6 to L12 in-house, and it maintains a vast inventory of spare parts on campus.

Wherever your customers are located, Supermicro likely has an office nearby. With a global footprint, Supermicro operates across the U.S., EMEA and Taiwan. Supermicro has invested heavily in rack-integration testing facilities, too. These centers are now being expanded to test rack-level air and liquid cooling.

For your customers with cloud-based systems, there are additional benefits. These include optimizing the IT environment for their clouds, and meeting co-location requirements.

There’s business for channel partners, too. You can add specific software to the rack system. And you can work with your customer on training and more.
