AMD intros CPUs, cache, AI accelerators for cloud, enterprise data centers

AMD strengthens its commitment to the cloud and enterprise data centers with new "Bergamo" CPUs, "Genoa-X" cache technology and Instinct accelerators.


This week AMD strengthened its already strong commitment to the cloud and enterprise markets. The company announced several new products and partnerships at its Data Center and AI Technology Premiere event, held in San Francisco and simultaneously broadcast online.

“We’re focused on pushing the envelope in high-performance and adaptive computing,” AMD CEO Lisa Su told the audience, “creating solutions to the world’s most important challenges.”

Here’s what’s new:

Bergamo: That’s the former codename for the new 4th gen AMD EPYC 97X4 processors. AMD’s first processor designed specifically for cloud-native workloads, it packs up to 128 cores per socket using AMD’s new Zen 4c design to deliver strong performance per watt. Each socket contains 8 chiplets, each with up to 16 Zen 4c cores; that’s twice as many cores as AMD’s earlier Genoa processors, yet the two lines are socket-compatible. The entire lineup is available now.

Genoa-X: Another codename, this one for AMD’s new generation of 3D V-Cache technology. Designed specifically for technical computing such as engineering simulation, it supports more than 1GB of L3 cache on a 96-core CPU. It’s paired with the 4th gen AMD EPYC processor and its high-performing Zen 4 core to deliver high performance per core.

“A larger cache feeds the CPU faster with complex data sets, and enables a new dimension of processor and workload optimization,” said Dan McNamara, an AMD senior VP and GM of its server business.

In all, there are 4 new Genoa-X SKUs, ranging from 16 to 96 cores, and all socket-compatible with AMD’s Genoa processors.

Genoa: Technically not new, as this family of data-center CPUs was introduced last November. What is new is AMD’s sharpened focus for these processors on AI, data-center consolidation and energy efficiency.

AMD Instinct: Though AMD had already introduced its Instinct MI300 Series accelerator family, the company is now revealing more details.

This includes the introduction of the AMD Instinct MI300X, an advanced accelerator for generative AI based on AMD’s CDNA 3 accelerator architecture. It will support up to 192GB of HBM3 memory to provide the compute and memory efficiency needed for large language model (LLM) training and inference for generative AI workloads.

AMD also introduced the AMD Instinct Platform, which brings together eight MI300X accelerators in an industry-standard design to create a single platform for AI inference and training. The MI300X will begin sampling to key customers in Q3.

Finally, AMD also announced that the AMD Instinct MI300A, an APU accelerator for HPC and AI workloads, is now sampling to customers.

Partner news: Mark your calendar for June 20. That’s when Supermicro plans to explore key features and use cases for its Supermicro H13 systems based on AMD EPYC 9004 Series processors. These Supermicro systems will feature AMD’s new Zen 4c architecture and 3D V-Cache technology.

This week Supermicro announced that its entire line of AMD-based H13 systems is now available with support for 4th gen AMD EPYC processors featuring the Zen 4c architecture and 3D V-Cache technology.

That includes Supermicro’s new 1U and 2U Hyper-U servers designed for cloud-native workloads. Both are equipped with a single AMD EPYC processor with up to 128 cores.

Absolute Hosting finds the sweet spot with AMD-powered Supermicro servers

Absolute Hosting, a South African provider of hosting services to small and midsize businesses, sought to upgrade its hardware, improve its performance, and lower its costs. The company achieved all three goals with AMD-powered Supermicro servers.


Some brands are so strong, customers ask for them by name. They ask for a Coke when thirsty, click on Amazon.com when shopping online, visit a Tesla showroom when thinking of buying an electric car.

For Absolute Hosting Ltd., a South Africa-based provider of hosting and other digital services for small and midsize businesses (SMBs), it’s not one brand, but two: Supermicro and AMD. More specifically, the combination of Supermicro servers powered by AMD EPYC processors.

“Clients who have switched over to us have been amazed by the performance of our AMD EPYC-powered servers,” says Jade Benson, the founder of Absolute Hosting and now its managing director.

Benson and his colleagues find the Supermicro-AMD combination so powerful, they sell it by name. Check out Absolute Hosting's website, and you’ll see both the AMD and Supermicro brands called out explicitly.

SMB specialists

It wasn’t always the case. Back in 2011, when Benson founded Absolute Hosting, the company served local South African tech resellers. Five years later, in 2016, the company shifted its focus to offering hosting and virtual server services to local SMBs.

One of its hosting services is virtual private servers. VPS hosting provides dedicated resources for each customer’s website, allowing more control, customization and scalability than shared hosting. That makes VPS hosting ideal for businesses that require lots of resources, handle high traffic or need a great deal of control over their hosting environment.

Today Absolute Hosting owns about 100 physical servers and manages roughly 300 VPS servers for clients. The company also supplies its 5,000 clients with other hosting services, including Linux web, WordPress and email.

‘We kept seeing AMD’

Absolute Hosting’s shift to AMD-powered Supermicro servers was driven by its own efforts to refresh and upgrade its hardware, improve its performance and lower its own costs. Initially, the company rented dedicated servers from a provider that relied exclusively on Supermicro hardware.

“So when we decided to purchase our own hardware, we made it a requirement to use Supermicro,” Benson says. “And we kept seeing AMD as the recommended option.”

The new servers were a quick success. Absolute Hosting tested them with key benchmarks, including Cinebench, a cross-platform test suite, and PassMark, which compares the performance of CPUs. The AMD-powered systems led in every test.

Absolute Hosting advertised the new offering on social media and quickly had enough business for 100 VPS servers. The company ran a public beta for customers and allowed the local IT community to conduct their own stress tests.

“The feedback we received was phenomenal,” Benson says. “Everyone was blown away.”

Packing a punch

Absolute Hosting’s solution is based on Supermicro’s AS-2115GT-HNTF GrandTwin server. It packs four hot-pluggable nodes into a 2U rackmount form factor.

Each node includes an AMD EPYC CPU; 12 memory slots for up to 3TB of DDR5 memory; flexible bays for storage or I/O; and up to four hot-swap 2.5-inch NVMe/SATA storage drives.

Absolute Hosting currently uses the AMD EPYC 7003 Series processors. But the Supermicro server now supports the 4th gen AMD EPYC 9004 Series processors, and Benson plans to move to them soon.

Benson considers the AMD-powered Supermicro servers a serious competitive advantage. “There are only a few people we don’t tell about AMD,” he says. “That’s our competitors.”

Research roundup: AI edition

AI is busting out all over. AI is getting prioritized over all other digital investments. The AI market is forecast to grow by over 20% a year through 2030. AI worries Americans about the potential impact on hiring. And AI needs to be safeguarded against the risk of misuse.


That’s some of the latest AI research from leading market watchers. And here’s your research roundup.

The AI priority

Nearly three-quarters (73%) of companies are prioritizing AI over all other digital investments, finds a new report from consultants Accenture. For these AI projects, the No. 1 focus area is improving operational resilience; it was cited by 90% of respondents.

Respondents to the Accenture survey also say the business benefits of AI are real. While only 9% of companies have achieved maturity across all 6 areas of AI operations, they averaged 1.4x higher operating margins than others. (Those 6 areas, by the way, are AI, data, processes, talent, collaboration and stakeholder experiences.)

Compared with less-mature AI operations, these companies also drove 42% faster innovation, 34% better sustainability and 30% higher satisfaction scores.

Accenture’s report is based on its recent survey of 1,700 executives in 12 countries and 15 industries. About 7 in 10 respondents held C-suite-level job titles.

The AI market

It’s no surprise that the AI market is big and growing rapidly. But just how big and how rapidly might surprise you.

How big? The global market for all AI products and services, worth some $428 billion last year, is on track to top $515 billion this year, predicts market watcher Fortune Business Insights.

How fast? Looking ahead to 2030, Fortune Business Insights expects the global AI market that year to hit $2.03 trillion. If so, that would mark a compound annual growth rate (CAGR) of nearly 22%.
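To put that growth rate in perspective, here’s a minimal Python sketch of the standard CAGR calculation using the dollar figures cited above (the function name and rounding are illustrative, not part of Fortune Business Insights’ methodology):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

# Figures cited above: roughly $515 billion in 2023, $2.03 trillion in 2030.
growth = cagr(515e9, 2.03e12, years=2030 - 2023)
print(f"Implied CAGR: {growth:.1%}")  # prints roughly 21.7%, in line with the ~22% cited
```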

What’s driving this big, rapid growth? Several factors, says Fortune, including the surge in the number of applications, increased partnering and collaboration, a rise in small-scale providers, and demand for hyper-personalized services.

The AI impact

What, me worry? About six in 10 Americans (62%) believe AI will have a major impact on workers in general. But only 28% believe AI will have a major effect on them personally.

So finds a recent poll by Pew Research of more than 11,000 U.S. adults.

Digging a bit deeper, Pew found that nearly a third of respondents (32%) believe AI will hurt workers more than help; the same percentage believe AI will help and hurt in equal measure; 13% believe AI will help more than hurt; and roughly 1 in 5 (22%) aren’t sure.

Respondents also widely oppose the use of AI to augment regular management duties. Nearly three-quarters of Pew’s respondents (71%) oppose the use of AI for making a final hiring decision. Six in 10 (61%) oppose the use of AI for tracking workers’ movements while they work. And nearly as many (56%) oppose the use of AI for monitoring workers at their desks.

Facial-recognition technology fared poorly in the survey, too. Fully 7 in 10 respondents were opposed to using the technology to analyze employees’ facial expressions. And over half (52%) were opposed to using facial recognition to track how often workers take breaks. However, a plurality (45%) favored the use of facial recognition to track worker attendance; about a third (35%) were opposed and one in five (20%) were unsure.

The AI risk

Probably the hottest form of AI right now is generative AI, as exemplified by the ChatGPT chatbot. But given the technology’s risks around security, privacy, bias and misinformation, some experts have called for a pause or even a halt on its use.

Because that’s unlikely to happen, one industry watcher is calling for new safeguards. “Organizations need to act now to formulate an enterprisewide strategy for AI trust, risk and security management,” says Avivah Litan, a VP and analyst at Gartner.

What should you do? Two main things, Litan says.

First, monitor out-of-the-box usage of ChatGPT. Use your existing security controls and dashboards to catch policy violations. Also, use your firewalls to block unauthorized use, your event-management systems to monitor logs for violations, and your secure web gateways to monitor disallowed API calls.

Second, for prompt engineering usage—which uses tools to create, tune and evaluate prompt inputs and outputs—take steps to protect the sensitive data used to engineer prompts. A good start, Litan says, would be to store all engineered prompts as immutable assets.
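Litan doesn’t prescribe a specific mechanism for treating engineered prompts as immutable assets, but here’s one minimal sketch of the idea: each prompt is hashed and appended to a write-once log, so any later revision shows up as a new record rather than a silent edit. The file path, field names and function are hypothetical, for illustration only.

```python
import hashlib
import json
import time
from pathlib import Path

# Hypothetical append-only store for engineered prompts.
PROMPT_LOG = Path("prompt_log.jsonl")

def record_prompt(prompt_text: str, author: str) -> str:
    """Append an engineered prompt to a write-once log, keyed by its content hash."""
    digest = hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()
    entry = {
        "sha256": digest,
        "author": author,
        "recorded_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "prompt": prompt_text,
    }
    # Append-only: existing records are never rewritten, so revising a prompt
    # always produces a new entry with a new hash.
    with PROMPT_LOG.open("a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return digest

# Example: store a prompt and keep its hash as the immutable reference.
ref = record_prompt("Summarize this quarter's incident reports in five bullets.", "sec-ops")
print(f"Stored prompt {ref[:12]}")
```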

How Ahrefs speeds SEO services with huge compute, memory & storage

Ahrefs, a supplier of search engine optimization tools, needed more robust tech to serve its tens of thousands of customers and crawl billions of web pages daily. The solution: More than 600 Supermicro Hyper servers powered by AMD processors and loaded with huge memory and storage.


Wondering how to satisfy customers who need big—really big—compute and storage? Take a tip from Ahrefs Ltd.

This company, based in Singapore, is a 10-year-old provider of search engine optimization (SEO) tools.

Ahrefs has a web crawler that processes up to 8 billion pages a day. That makes Ahrefs one of the world’s biggest web crawlers, up there with Google and Bing, according to internet hub Cloudflare Radar.

What’s more, Ahrefs’ business has been booming. The company now has tens of thousands of users.

That’s good news. But it also meant that to serve these customers, Ahrefs needed more compute power and storage capacity. And not just a little more. A lot.

Ahrefs also realized that its current generation of servers and CPUs couldn’t meet this rising demand. Instead, the company needed something new and more powerful.

Gearing up

For Ahrefs, that something new is its recent order of more than 600 Supermicro servers. Each system is equipped with dual 4th Gen AMD EPYC 9004 Series processors, a whopping 1.5 TB of DDR5 memory, and a massive 120+ TB of storage.

More specifically, Ahrefs selected Supermicro’s AS-2125HS-TNR servers. They’re powered by dual AMD EPYC 9554 processors, each with 64 cores and 128 threads, running at a base clock speed of 3.1 GHz and an all-core boost speed of 3.75 GHz.

For Ahrefs’ configuration, each Supermicro server also contains eight NVMe 15.3 TB SSD storage devices, for a storage total of 122 TB. Also, each server communicates with the Ahrefs data network via two 100 Gbps ports.

Did it work?

Yes. Ahrefs’ response times got faster, even as its volume increased. The company can now offer more services to more customers. And that means more revenue.

Ahrefs’ founder and CEO, Dimitry Gerasimenko, puts it this way: “Supermicro’s AMD-based servers were an ideal fit for our business.”

How about you? Have customers who need really big compute and storage? Tell them about Ahrefs, and point them to these resources:

 

How to help your customers invest in AI infrastructure

The right AI infrastructure can help your customers turn data into actionable information. But building and scaling that infrastructure can be challenging. Find out why—and how you can make it easier. 


Get smarter about helping your customers create an infrastructure for AI systems that turn their data into actionable information.

A new Supermicro white paper, Investing in AI Infrastructure, shows you how.

As the paper points out, creating an AI infrastructure is far from easy.

For one, there’s the risk of underinvesting. Market watcher IDC estimates that AI will soon represent 10% to 15% of the typical organization’s total IT infrastructure. Organizations that fall short here could also fall short on delivering critical information to the business.

Sure, your customers could use cloud-based AI to test and ramp up. But cloud costs can rise fast. As The Wall Street Journal recently reported, some CIOs have even established internal teams to oversee and control their cloud spending. That makes an on-prem AI data center a viable option.

“Every time you run a job on the cloud, you’re paying for it,” says Ashish Nadkarni, general manager of infrastructure systems, platforms and technologies at IDC. “Whereas on-premises, once you buy the infrastructure components, you can run applications multiple times.”

Some of those cloud costs come from data-transfer fees. First, data needs to be moved into a cloud-based AI system, a process known as ingress. And once the AI’s work is done, you’ll want to transfer the new data somewhere else for storage or additional processing, a process known as egress.

Cloud providers typically charge 5 to 20 cents per gigabyte of egress. For casual users, that may be no big deal. But for an enterprise using massive amounts of AI data, it can add up quickly.
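To see how quickly those fees compound, here’s a small Python sketch using the 5-to-20-cents-per-gigabyte range quoted above; the monthly data volume is a hypothetical figure chosen purely for illustration.

```python
def monthly_egress_cost(gigabytes_moved: float, price_per_gb: float) -> float:
    """Simple egress bill: data moved out of the cloud times the per-GB rate."""
    return gigabytes_moved * price_per_gb

# Hypothetical enterprise AI workload: 500 TB of results moved out each month.
gb_per_month = 500 * 1_000  # 500 TB expressed in GB

low = monthly_egress_cost(gb_per_month, 0.05)   # 5 cents per GB
high = monthly_egress_cost(gb_per_month, 0.20)  # 20 cents per GB
print(f"Monthly egress: ${low:,.0f} to ${high:,.0f}")  # $25,000 to $100,000
```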

4 questions to get started

But before your customer can build an on-prem infrastructure, they’ll need to first determine their AI needs. You can help by gathering all stakeholders and asking 4 big questions:

  • What are the business challenges we’re trying to solve?
  • Which AI capabilities and capacities can deliver the solutions we’ll need?
  • What type of AI training will we need to deliver the right insights from our data?
  • What software will we need?

Keep your customer’s context in mind, too. That might include their industry; after all, a retailer has different needs than a manufacturer. It could also include their current technology: a company with extensive edge computing has different data needs than one without edge devices.

“It’s a matter of finding the right configuration that delivers optimal performance for the workloads,” says Michael McNerney, VP of marketing and network security at Supermicro.

Help often needed

One example of an application-optimized system for AI training is the Supermicro AS-8125GS-TNHR, which is powered by dual AMD EPYC 9004 Series processors. Another option is the Supermicro Universal GPU line of systems, which supports AMD’s Instinct MI250 accelerators.

These systems’ modular architecture helps standardize AI infrastructure design for scalability and power efficiency, even across the complex workloads and workflows enterprises run, such as AI, data analytics, visualization, simulation and digital twins.

Accelerators work with traditional CPUs to enable greater computing power, yet without slowing the system. They can also shave milliseconds off AI computations. While that may not sound like much, over time those milliseconds “add up to seconds, minutes, hours and days,” says Matt Kimball, a senior analyst at Moor Insights & Strategy.

Roll with partner power

To scale AI across an enterprise, you and your customers will likely need partners. Scaling workloads for critical tasks isn’t easy.

For one, there’s the challenge of getting the right memory, storage and networking capabilities to meet the new high-performance demands. For another, there’s the challenge of finding enough physical space, then providing the necessary electric power and cooling.

Tech suppliers including Supermicro are standing by to offer you agile, customizable and scalable AI architectures.

Learn more from the new Supermicro white paper: Investing in AI Infrastructure.

 

Do you know why 64 cores really matters?

In a recent test, Supermicro workstations and servers powered by 3rd gen AMD Ryzen Threadripper PRO processors ran engineering simulations nearly as fast as a dual-processor system, but needed only two-thirds as much power.


More cores per CPU sounds good, but what does it actually mean for your customers?

In the case of certain Supermicro workstations and servers powered by 3rd gen AMD Ryzen Threadripper PRO processors, it means running engineering simulations with dual-processor performance from a single-socket system, along with further cost savings from two-thirds lower power consumption.

That’s according to tests recently conducted by MVConcept, a consulting firm that provides hardware and software optimizations. The firm tested two Supermicro systems, the AS-5014A-TT SuperWorkstation and AS-2114GT-DPNR server.

A solution brief based on MVConcept’s testing is now available from Supermicro.

Test setup

For these tests, the Supermicro server and workstation were both tested in two AMD configurations:

  • One with the AMD Ryzen Threadripper PRO 5995WX processor
  • The other with an older, 2nd gen AMD Ryzen Threadripper PRO 3995WX processor

In the tests, both AMD processors were used to run 32-core as well as 64-core operations.

The Supermicro systems were tested running Ansys Fluent, fluid simulation software from Ansys Inc. Fluent models fluid flow, heat, mass transfer and chemical reactions. Benchmarks for the testing included aircraft wing, oil rig and pump.

The results

Among the results: The Supermicro systems delivered nearly dual-CPU performance with a single processor, while also consuming less electricity.

What’s more, the 3rd generation AMD 5995WX CPU delivered significantly better performance than the 2nd generation AMD 3995WX.

Systems with larger caches saw the biggest performance gains. A system with 256MB of L3 cache outperformed one with just 128MB.

BIOS settings proved especially important for getting optimal performance from the AMD Ryzen Threadripper PRO when running the tested applications. Specifically, Supermicro recommends using NPS=4 and SMT=OFF when running Ansys Fluent with AMD Ryzen Threadripper PRO. (NPS refers to the number of NUMA (non-uniform memory access) nodes per socket; SMT is simultaneous multithreading.)

Another cool factor involves taking advantage of the Supermicro AS-2114GT-DPNR server’s two hot-pluggable nodes. One node can be used to pre-process the data; the other can then be used to run Ansys Fluent.

Put it all together, and you get a powerful takeaway for your customers: These AMD-powered Supermicro systems deliver data-center-class power both on the desktop and in the server rack, making them ideal for SMBs and enterprises alike.

Try before you buy with Supermicro’s H13 JumpStart remote access program

The Supermicro H13 JumpStart Remote Access program lets you and your customers test data-center workloads on Supermicro systems based on 4th Gen AMD EPYC 9004 Series processors. Even better, the program is free.


You and your customers can now try out systems based on 4th Gen AMD EPYC 9004 Series processors at no cost with the Supermicro remote access program.

Called H13 JumpStart, the free program offers remote access to Supermicro’s top-end H13 systems.

Supermicro’s H13 systems are designed for today’s advanced data-center workloads. They feature 4th Gen AMD EPYC 9004 Series processors with up to 96 Zen 4 cores per socket, DDR5 memory, PCIe 5.0, and support for Compute Express Link (CXL) 1.1+ peripherals.

The H13 JumpStart program lets you and your customers validate, test and benchmark workloads on either of two Supermicro systems:

  • Hyper AS-2025HS-TNR: Features dual AMD EPYC processors, 24 DIMMs, up to 3 accelerator cards, an AIOM network adapter, and 12 hot-swap NVMe/SAS/SATA drive bays.

  • CloudDC AS-2015CS-TNR: Features a single AMD EPYC processor, 12 DIMMs, 4 accelerator cards, dual AIOM network adapters, and a 240GB solid state drive.

Simple startup

Getting started with Supermicro’s H13 JumpStart program is simple. Just sign up with your name, email and a brief description of what you plan to do with the system.

Next, Supermicro will verify your information and your request. Assuming you qualify, you’ll receive a welcome email from Supermicro, and you’ll be scheduled to gain access to the JumpStart server.

Next, you’ll be given a unique username, password and URL to access your JumpStart account.

Run your test. Once you’re done, Supermicro will also ask you to complete a quick survey for your feedback on the program.

Other details

The JumpStart program does have a few limitations. One is the number of sessions you can have open at once. Currently, it’s limited to 1 VNC (virtual network computing), 1 SSH (secure shell), and 1 IPMI (intelligent platform management interface) session per user.

Also, the JumpStart test server is not directly addressable from the internet. However, the server can reach out to the internet to retrieve files.

You should test with JumpStart using anonymized data only. That’s because the Supermicro server’s security policies may differ from those of your organization.

But rest assured, once you’re done with your JumpStart demo, the server storage is manually erased, the BIOS and firmware are reflashed, and the OS is re-installed with new credentials. So your data and personal information are completely removed.

Get started

Ready to get a jump-start with Supermicro’s H13 JumpStart Remote Access program? Apply now to secure access.

Want to learn more about Supermicro’s H13 system portfolio? Check out a 5-part video series featuring Linus Sebastian of Linus Tech Tips. He takes a deep dive into how these Supermicro systems run faster and greener. 

 

Research roundup: PICaaS rising, IT spending stays strong, new data-center components emerge

Do you know how the latest IT market research could help you and your business?


It’s time to consider performance intensive computing as a service. Get ready for a modest spending surge. And be on the lookout for new data-center components.

Those are takeaways from the latest in IT market research and analysis. And here’s your tech partner’s roundup.

Performance intensive computing: now as a service

If you don’t offer cloud-based performance intensive computing as a service, you might want to consider doing so. The market, already big, is growing fast.

Sales of performance intensive computing as a service (PICaaS) will rise from $22.3 billion worldwide in 2021 to $103 billion by 2027, predicts market watcher IDC. That’s a compound annual growth rate (CAGR) of nearly 28%.

With PICaaS, customers use public cloud services to run the mathematically intensive computations needed for AI, HPC, big data analytics, and engineering and technical applications.

Driving the market are two factors, IDC says. One, performance intensive computing is going mainstream and is increasingly mission critical. And two, a growing number of businesses define themselves as digital.

What can you do to get ready for this market? Among other tactics, IDC recommends that suppliers formulate an end-to-end bundled PICaaS offering, demonstrate a secure cloud infrastructure, and become trusted advisors of hybrid development models.

Strong IT spending — this year and next

What kind of year will 2023 shape up to be? If your customers are like most, pretty good. Overall IT spending will rise this year by 5.5%, reaching a grand total of $4.6 trillion, predicts analyst firm Gartner, and some segments will rise by much more.

But what about sales dips, tech layoffs and other financial issues? “Macroeconomic headwinds are not slowing digital transformation,” insists Gartner analyst John-David Lovelock. “IT spending will remain strong.”

On the hardware front, Gartner expects data center systems sales worldwide this year to rise by less than 4%. Next year looks better with a projected rise of about 6%.

IT services are in demand. Sales will rise by just over 9% this year, Gartner forecasts, and by about 10% next year.

Devices such as PCs and smartphones are a weak point, with sales projected to drop by nearly 5% this year after tumbling nearly 11% last year. Next year, sales should pick up, Gartner expects, rising an impressive 11%.

New components coming to customer data centers

Have you and your data-center customers spoken yet about three components—SmartNICs, data processing units (DPUs) and infrastructure processing units (IPUs)?

If not, you probably will soon, according to ABI Research. Demand for these components is being driven by two factors: specialized workloads such as AI, IoT and 5G; and the rise of cloud hyperscalers such as AWS, Azure and Google Cloud.

“Organizations are exploring the feasibility of running specific applications that require high processing power on public-cloud data centers to ensure business continuity,” says ABI analyst Yih-Khai Wong.

Big opportunities include networks, cloud platforms and security. For example, AMD’s Xilinx Alveo line of adaptable accelerator cards includes the industry’s first software-defined, hardware-accelerated SmartNIC.

To be sure, the shift is still in its early stages. But Wong says servers equipped by default with SmartNICs, DPUs or IPUs are coming “sooner rather than later.”

 

AMD-based servers support enterprise applications — and break OLTP records

AMD EPYC server processors are designed to help your data-center customers get their workloads done faster and with fewer computing resources.

 

AMD EPYC server processors offer a consistent feature set across a range of choices from 8 to 96 cores. This balanced set of resources lets your customers right-size server configurations to fit their workloads.

What’s more, these AMD CPUs include models that offer high per-core performance optimized for frequency-sensitive and single-threaded workloads. This can help reduce the TCO for core-based software licenses.

AMD introduced the 4th Generation AMD EPYC processors in late 2022. The first of this generation are the AMD EPYC 9004 series CPUs. They’ve been designed to support performance and efficiency, help keep data secure, and use the latest industry features and architectures.

AMD continues to ship and support the previous 2nd and 3rd Generation AMD EPYC 7002 and 7003 Series processors. These processors power servers now available from a long list of leading hardware suppliers, including Supermicro.

Record-breaking

Good as all that may sound, you and your customers still need hard evidence that AMD processors can truly speed up their enterprise applications. Well, a new independent test of AMD-based Supermicro servers has provided just that.

The test was performed by the Telecommunications Technology Association (TTA), an IT standardization association based in Seongnam, South Korea. The TTA tested several Supermicro database and web servers powered by 3rd Gen AMD EPYC 7343 processors.

The results: The Supermicro servers set a world record for a non-cluster system, reaching 507,802 transactions per minute (tpmC).

That test was conducted using the TPC Benchmark C (TPC-C), which measures a server’s online transaction processing (OLTP) performance. The tpmC metric measures how many new-order transactions a system can generate in a minute while executing business transactions under specific response-time requirements.

What’s more, when compared with servers based on the previous 2nd Gen AMD EPYC processors, the newer Supermicro servers were 33% faster.

[Chart omitted. Data: Telecommunications Technology Association]

All that leads the TTA to conclude that Supermicro servers powered by the latest AMD processors “empower organizations to create deployments that deliver data insights faster than ever before.”

Note:

1. https://www.tpc.org/1809

 

For Greener Data Centers, Look to Energy-Efficient Components

Energy-efficient systems can help your customers lower their data-center costs while supporting a cleaner environment. 


Creating a more energy-efficient data center isn’t only good for the environment, but also a great way for your customers to lower their total cost of ownership (TCO).

In many organizations, the IT department is the single biggest consumer of power. Data centers are filled with power-hungry components, including servers, storage devices, air conditioning and cooling systems.

The average data center uses anywhere from 2 to 4 terawatt-hours (TWh) of electricity per year. Collectively, data centers account for nearly 3% of total global energy use, according to Supermicro, and that share is forecast to reach as high as 8% by 2030.

One important measure of data-center efficiency is Power Usage Effectiveness (PUE). It’s calculated by dividing the total electricity a data center draws by the electricity used by the center’s IT components. The difference is how much electricity is being used for cooling, lighting and other non-IT systems.

The lower a data center’s PUE, the better. The most energy-efficient data centers have a PUE approaching the ideal of 1.0. The average PUE worldwide last year was 1.55, says the Uptime Institute, a benchmarking organization. That marked a slight improvement over 2021, when the average was 1.57.
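Here’s the same PUE calculation expressed as a small Python sketch; the kilowatt figures are made-up examples, not measurements from any particular facility.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical data center: 1,550 kW total draw, 1,000 kW of it going to IT gear.
example = pue(total_facility_kw=1550, it_equipment_kw=1000)
print(f"PUE: {example:.2f}")  # 1.55, matching last year's worldwide average
```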

Costly power

All that power is expensive, too. Among the short list of ways your customers can lower that cost, moving to energy-efficient server CPUs is especially effective.

For example, AMD says that 11 servers based on its 4th gen AMD EPYC processors can use up to 29% less power a year than the 17 servers based on competitive CPUs required to handle the same volume of work. That, AMD adds, can help reduce an organization’s capital expenditures by up to 46%.

As that example shows, CPUs with more cores can also reduce power needs by handling the same workloads with fewer physical servers.

Yes, a high-core CPU typically consumes more power than one with fewer cores, especially when run at the same frequency. But by handling more workload volume, a high-core CPU lets your customer do the same or more work with fewer racks. That can also reduce the real estate footprint and lower the need for cooling.
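AMD’s 11-versus-17-server comparison boils down to simple consolidation math. The sketch below illustrates it with hypothetical per-server wattages; these are not AMD’s published test figures, just assumptions chosen to show how fewer, denser servers can cut annual energy use by roughly the amount claimed.

```python
def annual_energy_kwh(server_count: int, avg_watts_per_server: float) -> float:
    """Annual energy for a fleet of servers running around the clock."""
    hours_per_year = 24 * 365
    return server_count * avg_watts_per_server * hours_per_year / 1000

# Hypothetical wattages, chosen only to illustrate the consolidation effect.
legacy_fleet = annual_energy_kwh(server_count=17, avg_watts_per_server=500)
epyc_fleet = annual_energy_kwh(server_count=11, avg_watts_per_server=550)

savings = 1 - epyc_fleet / legacy_fleet
print(f"Legacy fleet: {legacy_fleet:,.0f} kWh/yr; consolidated fleet: {epyc_fleet:,.0f} kWh/yr")
print(f"Energy savings: {savings:.0%}")  # about 29% with these assumed wattages
```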

Greener tactics

Other tactics can contribute to a greener data center, too.

One approach involves what Supermicro calls a “disaggregated” server architecture. Essentially, this means a server’s subsystems—including its CPU, memory and storage—can be upgraded without replacing the entire chassis. For a double benefit, this lowers TCO while reducing e-waste.

Another approach involves designing servers that can share certain resources, such as power supplies and fans. This can lower power needs by up to 10%, Supermicro says.

Yet another approach is designing servers for maximum airflow, another Supermicro feature. This allows the CPU to operate at higher temperatures, reducing the need for air cooling.

It can also lower the load on a server’s fans. That’s a big deal, because a server’s fans can consume up to 15% of its total power.

Supermicro is also designing systems for liquid cooling. This allows a server’s fans to run at lower speeds, reducing their power draw. Liquid cooling can also lower the need for air conditioning, which in turn lowers PUE.

Liquid cooling functions much like a car’s radiator system. It’s basically a closed loop involving an external “chiller” that cools the liquid and a series of pipes. The liquid is pumped through one or more pipes that pass over a server’s CPU and GPU. The heat from those components warms the liquid, and the now-hot liquid is returned to the chiller to be cooled and recirculated.

Green vendors

Leading suppliers can help you help your customers go green.

AMD, for one, has pledged to deliver a 30x increase in the energy efficiency of its processors and accelerators by 2025. That should translate into a 97% reduction in energy use per computation.

Similarly, Supermicro is working hard to help customers create green data centers. The company participates in industry consortia focused on new cooling alternatives and is a leader in the Liquid Cooling Standing Working Group of The Green Grid, a membership organization that fosters energy-efficient data centers.

Supermicro also offers products using its disaggregated rack-scale design approach to offer higher efficiency and lower costs.
