HBM: Your memory solution for AI & HPC

High-bandwidth memory shortens the information commute to keep pace with today’s powerful GPUs.


As AI powered by GPUs transforms computing, conventional DDR memory can’t keep up.

The solution? High-bandwidth memory (HBM).

HBM is a memory chip technology that essentially shortens the information commute. It does this using ultra-wide communication lanes.

An HBM device contains vertically stacked memory chips. They’re interconnected by microscopic wires known as through-silicon vias, or TSVs for short.

HBM also provides more bandwidth per watt. And with its smaller footprint, the technology can save valuable data-center space.

Here’s how: A single HBM stack can contain up to eight DRAM modules, with each module connected by two channels. This makes an HBM implementation of just four chips roughly equivalent to 30 DDR modules, and in a fraction of the space.

All this makes HBM ideal for workloads such as AI and machine learning, HPC, advanced graphics and data analytics.

Latest & Greatest

The latest iteration, HBM3, was introduced in 2022, and it’s now finding wide application in market-ready systems.

Compared with the previous version, HBM3 adds several enhancements (a quick check of the bandwidth figure follows this list):

  • Higher bandwidth: Up to 819 GB/sec., up from HBM2’s max of 460 GB/sec.
  • More memory capacity: 24GB per stack, up from HBM2’s 8GB
  • Improved power efficiency: Delivering more data throughput per watt
  • Reduced form factor: Thanks to a more compact design
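
Here’s that quick check of the headline bandwidth number. It’s a back-of-the-envelope sketch that assumes HBM3’s standard 1,024-bit interface and a 6.4 Gb/s per-pin data rate (figures not stated above), so treat it as illustrative arithmetic rather than a spec sheet:

```python
# Rough sanity check of the ~819 GB/sec. per-stack HBM3 bandwidth cited above.
# Assumptions (not from the article): 1,024-bit interface, 6.4 Gb/s per pin.
BUS_WIDTH_BITS = 1024        # interface width of one HBM3 stack
PIN_RATE_GBPS = 6.4          # data rate per pin, in gigabits per second

bandwidth_gbytes = BUS_WIDTH_BITS * PIN_RATE_GBPS / 8   # convert bits to bytes
print(f"Per-stack bandwidth: {bandwidth_gbytes:.1f} GB/s")   # ~819.2 GB/s
```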

However, it’s not all sunshine and rainbows. For one, HBM-equipped systems are more expensive than those fitted out with traditional memory solutions.

Also, HBM stacks generate considerable heat. Advanced cooling systems are often needed, adding further complexity and cost.

Compatibility is yet another challenge. Systems must be designed or adapted to HBM3’s unique interface and form factor.

In the Market

As mentioned above, HBM3 is showing up in new products. That includes both the AMD Instinct MI300A and MI300X series accelerators.

The AMD Instinct MI300A accelerator combines a CPU and GPU for running HPC/AI workloads. It offers HBM3 as the dedicated memory with a unified capacity of up to 128GB.

Similarly, the AMD Instinct MI300X is a GPU-only accelerator designed for low-latency AI processing. It contains HBM3 as the dedicated memory, but with a higher capacity of up to 192GB.

For both of these AMD Instinct MI300 accelerators, the peak theoretical memory bandwidth is a speedy 5.3TB/sec.

The AMD Instinct MI300X is also the main processor in Supermicro’s AS -8125GS-TNMR2, an H13 8U 8-GPU system. This system offers a huge 1.5TB of HBM3 memory in a single server, and a full 6.144TB at rack scale.

Are your customers running AI with fast GPUs, only to have their systems held back by conventional memory? Tell them to check out HBM.

At Computex, AMD & Supermicro CEOs describe AI advances you’ll be adopting soon

At Computex Taiwan, Lisa Su of AMD and Charles Liang of Supermicro delivered keynotes that focused on AI, liquid cooling and energy efficiency.


The chief executives of both AMD and Supermicro used their Computex keynote addresses to describe their companies’ AI products and, in the case of AMD, pre-announce important forthcoming products.

Computex 2024 was held this past week in Taipei, Taiwan, with the conference theme of “connecting AI.” The show featured some 1,500 exhibiting companies from around the world, and keynotes were delivered by some of the IT industry’s top executives.

That included Lisa Su, chairman and CEO of AMD, and Charles Liang, founder and CEO of Supermicro. Here's some of what they previewed at Computex 2024.

Lisa Su, AMD: Top priority is AI

Su of AMD presented one of the conference’s first keynotes. Anyone who thought she might discuss topics other than AI was quickly set straight.

“AI is our number one priority,” Su told the crowd. “We’re at the beginning of an incredibly exciting time for the industry as AI transforms virtually every business, improves our quality of life, and reshapes every part of the computing market.”

AMD intends to lead in AI solutions by focusing on three priorities, she added: delivering a broad portfolio of high-performance, energy-efficient compute engines (including CPUs, GPUs and NPUs); enabling an open and developer-friendly ecosystem; and co-innovating with partners.

The latter point was supported during Su’s keynote by brief visits from several partner leaders. They included Pavan Davuluri, corporate VP of Windows and devices at Microsoft; Christian Laforte, CTO of Stability AI; and (via a video link) Microsoft CEO Satya Nadella.

Fairly late in Su’s hour-plus keynote, she held up AMD’s forthcoming 5th gen EPYC server processor, codenamed Turin. It’s scheduled to ship by year’s end.

As Su explained, Turin will feature up to 192 cores and 384 threads, up from the current generation’s max of 128 cores and 256 threads. Turin will contain 13 chiplets built on a mix of 3nm and 6nm process technology. Yet it will be available as a drop-in replacement for existing EPYC platforms, Su said.

Turin processors will use AMD’s new ‘Zen 5’ cores, which Su also announced at Computex. She described AMD’s ‘Zen 5’ as “the highest performance and most energy-efficient core we’ve ever built.”

Su also discussed AMD’s MI3xx family of accelerators. The MI300, introduced this past December, has become the fastest ramping product in AMD’s history, she said. Microsoft’s Nadella, during his short presentation, bragged that his company’s cloud was the first to deliver general availability of virtual machines using the AMD MI300X accelerator.

Looking ahead, Su discussed three forthcoming Instinct accelerators on AMD’s road map: The MI325, MI350 and MI400 series.

The AMD Instinct MI325, set to launch later this year, will feature more memory (up to 288GB) and higher memory bandwidth (6TB/sec.) than the MI300. But the new component will still use the same infrastructure as the MI300, making it easy for customers to upgrade.

The next series, MI350, is set for launch next year, Su said. It will then use AMD’s new CDNA4 architecture, which Su said “will deliver the biggest generational AI leap in our history.” The MI350 will be built on 3nm process technology, but will still offer a drop-in upgrade from both the MI300 and MI325.

The last of the three, the MI400 series, is set to start shipping in 2026. That’s also when AMD will deliver a new generation of CDNA, according to Su.

Both the MI325 and MI350 series will leverage the same industry standard universal baseboard OCP server design used by MI300. Su added: “What that means is, our customers can adopt this new technology very quickly.”

Charles Liang, Supermicro: Liquid cooling is the AI future

Liang dedicated his Computex keynote to the topics of liquid cooling and “green” computing.

“Together with our partners,” he said, “we are on a mission to build the most sustainable data centers.”

Liang predicted a big change from the present, where direct liquid cooling (DLC) has a less-than-1% share of the data center market. Supermicro is targeting 15% of new data center deployments in the next year, and Liang hopes that will hit 30% in the next two years.

Driving this shift, he added, are several trends. One, of course, is the huge uptake of AI, which requires high-capacity computing.

Another is the improvement of DLC technology itself. Where DLC system installations used to take 4 to 12 months, Supermicro is now doing them in just 2 to 4 weeks, Liang said. Where liquid cooling used to be quite expensive, now—when TCO and energy savings are factored in—“DLC can be free, with a big bonus,” he said. And where DLC systems used to be unreliable, now they are high performing with excellent uptime.

Supermicro now has capacity to ship 1,000 rack scale solutions with liquid cooling per month, Liang said. In fact, the company is shipping over 50 liquid-cooled racks per day, with installations typically completed within just 2 weeks.

“DLC,” Liang said, “is the wave of the future.”

Research Roundup: AI edition

Catch up on the latest research and analysis around artificial intelligence.


Generative AI is the No. 1 AI solution being deployed. Three in 4 knowledge workers are already using AI. The supply of workers with AI skills can’t meet the demand. And supply chains can be helped by AI, too.

Here’s your roundup of the latest in AI research and analysis.

GenAI is No. 1

Generative AI isn’t just a good idea, it’s now the No. 1 type of AI solution being deployed.

In a survey recently conducted by research and analysis firm Gartner, more than a quarter of respondents (29%) said they’ve deployed and are now using GenAI.

That was a higher percentage than any other type of AI in the survey, including natural language processing, machine learning and rule-based systems.

The most common way of using GenAI, the survey found, is embedding it in existing applications, such as Microsoft Copilot for Microsoft 365. This approach was cited by about 1 in 3 respondents (34%).

Other approaches mentioned by respondents included prompt engineering (cited by 25%), fine-tuning (21%) and using standalone tools such as ChatGPT (19%).

Yet respondents said only about half of their AI projects (48%) make it into production. Even when that happens, it’s slow. Moving an AI project from prototype to production took respondents an average of 8 months.

Other challenges loom, too. Nearly half the respondents (49%) said it’s difficult to estimate and demonstrate an AI project’s value. They also cited a lack of talent and skills (42%), lack of confidence in AI technology (40%) and lack of data (39%).

Gartner conducted the survey in last year’s fourth quarter and released the results earlier this month. In all, valid responses were culled from 644 executives working for organizations in the United States, the UK and Germany.

AI ‘gets real’ at work

Three in 4 knowledge workers (75%) now use AI at work, according to the 2024 Work Trend Index, a joint project of Microsoft and LinkedIn.

Among these users, nearly 8 in 10 (78%) are bringing their own AI tools to work. That’s inspired a new acronym: BYOAI, short for Bring Your Own AI.

“2024 is the year AI at work gets real,” the Work Trend report says.

2024 is also a year of real challenges. Like the Gartner survey, the Work Trend report finds that demonstrating AI’s value can be tough.

In the Microsoft/LinkedIn survey, nearly 8 in 10 leaders agreed that adopting AI is critical to staying competitive. Yet nearly 6 in 10 said they worry about quantifying the technology’s productivity gains. About the same percentage also said their organization lacks an AI vision and plan.

The Work Trend report also highlights the mismatch between AI skills demand and supply. Over half the leaders surveyed (55%) say they’re concerned about having enough AI talent. And nearly two-thirds (65%) say they wouldn’t hire someone who lacked AI skills.

Yet fewer than 4 in 10 users (39%) have received AI training from their company. And only 1 in 4 companies plan to offer AI training this year.

The Work Trend report is based on a mix of sources: a survey of 31,000 people in 31 countries; labor and hiring trends on the LinkedIn site; Microsoft 365 productivity signals; and research with Fortune 500 customers.

AI skills: supply-demand mismatch

The mismatch between AI skills supply and demand was also examined recently by market watcher IDC. It expects that by 2026, 9 of every 10 organizations will be hurt by an overall IT skills shortage. This will lead to delays, quality issues and revenue loss that IDC predicts will collectively cost these organizations $5.5 trillion.

To be sure, AI skills are currently the most in-demand skill for most organizations. The good news, IDC finds, is that more than half of organizations are now using or piloting training for GenAI.

“Getting the right people with the right skills into the right roles has never been more difficult,” says IDC researcher Gina Smith. Her prescription for success: Develop a “culture of learning.”

AI helps supply chains, too

Did you know AI is being used to solve supply-chain problems?

It’s a big issue. Over 8 in 10 global businesses (84%) said they’ve experienced supply-chain disruptions in the last year, finds a survey commissioned by Blue Yonder, a vendor of supply-chain solutions.

In response, supply-chain executives are making strategic investments in AI and sustainability, Blue Yonder finds. Nearly 8 in 10 organizations (79%) said they’ve increased their investments in supply-chain operations. Their 2 top areas of investment were sustainability (cited by 48%) and AI (41%).

The survey also identified the top supply-chain areas for AI investment. They are planning (cited by 56% of those investing in AI), transportation (53%) and order management (50%).

In addition, 8 in 10 respondents to the survey said they’ve implemented GenAI in their supply chains at some level. And more than 90% said GenAI has been effective in optimizing their supply chains and related decisions.

The survey, conducted by an independent research firm with sponsorship by Blue Yonder, was fielded in March, with the results released earlier this month. The survey received responses from more than 600 C-suite and senior executives, all of them employed by businesses or government agencies in the United States, UK and Europe.

Supermicro, Vast collaborate to deliver turnkey AI storage at rack scale

Supermicro and Vast Data are jointly offering an AMD-based turnkey solution that promises to simplify and accelerate AI and data pipelines.


Supermicro and Vast Data are collaborating to deliver a turnkey, full-stack solution for creating and expanding AI deployments.

This joint solution is aimed at hyperscalers, cloud service providers (CSPs) and large, data-centric enterprises in fintech, adtech, media and entertainment, chip design and high-performance computing (HPC).

Applications that can benefit from the new joint offering include enterprise NAS and object storage; high-performance data ingestion; supercomputer data access; scalable data analysis; and scalable data processing.

Vast, founded in 2016, offers a software data platform that enterprises and CSPs use for data-intensive computing. The platform is based on a distributed systems architecture, called DASE, that allows a system to run read and write operations at any scale. Vast’s customers include Pixar, Verizon and Zoom.

By collaborating with Supermicro, Vast hopes to extend its market. Currently, Vast sells to infrastructure providers at a variety of scales. Some of its largest customers have built 400 petabyte storage systems, and a few are even discussing systems that would store up to 2 exabytes, according to John Mao, Vast’s VP of technology alliances.

Supermicro and Vast have engaged with many of the same CSPs separately, supporting various parts of the solution. By formalizing this collaboration, they hope to extend their reach to new customers while increasing their sell-through to current customers.

Vast is also looking to the Supermicro alliance to expand its global reach. While most of Vast’s customers today are U.S.-based, Supermicro operates in over 100 countries worldwide. Supermicro also has the infrastructure to integrate, test and ship 5,000 fully populated racks per month from its manufacturing plants in California, the Netherlands, Malaysia and Taiwan.

There’s also a big difference in size. Where privately held Vast has about 800 employees, publicly traded Supermicro has more than 5,100.

Rack solution

Now Vast and Supermicro have developed a new converged system using Supermicro’s Hyper A+ servers with AMD EPYC 9004 processors. The solution combines what were previously two separate Vast server types in a single system.

This converged system is well suited to large service providers, where the typical Supermicro-powered Vast rack configuration will start at about 2PB, Mao adds.

Rack-scale configurations can cut costs by eliminating the need for single-box redundancy. This converged design makes the system more scalable and more cost-efficient.

Under the hood

One highlight of the joint project: It puts Vast’s DASE architecture on Supermicro’s industry-standard servers. Each server will have both the compute and storage functions of a Vast cluster.

At the same time, the architecture is disaggregated via a high-speed Ethernet NVMe fabric. This allows each node to access all drives in the cluster.

The Vast platform architecture is built from units the company calls EBoxes. Each EBox contains 2 kinds of storage servers in a container environment: the CNode (short for Compute Node) and the DNode (short for Data Node). In a typical EBox, one CNode interfaces with client applications and writes directly to two DNode containers.
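
To picture the topology, here’s a minimal, purely illustrative sketch of the EBox layout described above. The class and field names are our own shorthand, not Vast software or an official API:

```python
# Illustrative model of one EBox: a CNode fronting two DNode containers.
# Hypothetical names; not Vast's implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DNode:                 # data node: owns a set of NVMe drives
    drives: List[str]

@dataclass
class CNode:                 # compute node: handles client I/O
    name: str

@dataclass
class EBox:                  # one CNode writes directly to two DNodes
    cnode: CNode
    dnodes: List[DNode] = field(default_factory=list)

ebox = EBox(cnode=CNode("cnode-1"),
            dnodes=[DNode(["nvme0", "nvme1"]), DNode(["nvme2", "nvme3"])])
print(f"{ebox.cnode.name} -> {len(ebox.dnodes)} DNodes")
```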

In this configuration, Supermicro’s storage servers can act as a hardware building block to scale Vast to hundreds of petabytes. It supports Vast’s requirement for multiple tiers of solid-state storage media, an approach that’s unique in the industry.

CPU to GPU

At the NAB Show, held recently in Las Vegas, Supermicro’s demos included storage servers, each powered by a single-socket AMD EPYC 9004 Series processor.

With up to 128 PCIe Gen 5 lanes, the AMD processor lets the server connect more SSDs via NVMe with a single CPU. The Supermicro storage server also lets users move data directly from storage to GPU memory, supporting Nvidia’s GPUDirect Storage protocol and essentially bypassing the GPU cluster’s CPU using RDMA.
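
As a rough, hypothetical illustration of why that lane count matters: NVMe SSDs typically use a x4 link, so the arithmetic below (our own assumptions, not a Supermicro specification) shows how many drives a 128-lane CPU could attach after reserving some lanes for networking:

```python
# Back-of-the-envelope PCIe lane budget for a single-socket storage server.
# Illustrative assumptions only; real designs allocate lanes differently.
TOTAL_LANES = 128          # PCIe Gen 5 lanes on the EPYC 9004 (per the article)
LANES_PER_NVME_SSD = 4     # typical x4 link per NVMe drive (assumption)
LANES_FOR_NETWORK = 32     # hypothetical reservation for NICs / GPU fabric

usable = TOTAL_LANES - LANES_FOR_NETWORK
print("Directly attached NVMe SSDs:", usable // LANES_PER_NVME_SSD)   # 24
```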

If you or your customers are interested in the new Vast solution, get in touch with your local Supermicro sales rep or channel partner. Under the terms of the new partnership, Supermicro is acting as a Vast integrator and OEM. It’s also Vast’s only rack-scale partner.

Tech Explainer: What’s the difference between AI training and AI inference?

AI training and inference may be two sides of the same coin, but their compute needs can be quite different. 


Artificial Intelligence (AI) training and inference are two sides of the same coin. Training is the process of teaching an AI model how to perform a given task. Inference is the AI model in action, drawing its own conclusions without human intervention.

Take a theoretical machine learning (ML) model designed to detect counterfeit one-dollar bills. During the training process, AI engineers would feed the model large data sets containing thousands, or even millions, of pictures, and tell the training application which bills are real and which are counterfeit.

Then inference could kick in. The AI model could be uploaded to retail locations, then run to detect bogus bills.

A deeper look at training

That’s the high level. Let’s dig in a bit deeper.

Continuing with our bogus-bill detecting workload, during training, the pictures fed to the AI model would include annotations telling the AI how to think about each piece of data.

For instance, the AI might see a picture of a dollar bill with an embedded annotation that essentially tells the model “this is legal tender.” The annotation could also identify characteristics of a genuine dollar, such as the minute details of the printed iconography and the correct number of characters in the bill’s serial number.

Engineers might also feed the AI model pictures of counterfeit bills. That way, the model could learn the tell-tale signs of a fake. These might include examples of incomplete printing, color discrepancies and missing watermarks.
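
To make the idea concrete, here’s a minimal, hypothetical sketch of that training step. It assumes a generic Python framework (scikit-learn) and invented feature data standing in for the annotated images described above; it is not the actual production workflow:

```python
# Toy training example for a counterfeit-bill detector.
# Labels: 1 = genuine, 0 = counterfeit. All data here is invented.
import numpy as np
import joblib
from sklearn.linear_model import LogisticRegression

# Each row is a simplified feature vector extracted from a bill image,
# e.g. [print_sharpness, color_consistency, watermark_score].
X_train = np.array([[0.95, 0.90, 0.88],    # genuine
                    [0.97, 0.92, 0.91],    # genuine
                    [0.40, 0.55, 0.05],    # counterfeit
                    [0.35, 0.60, 0.02]])   # counterfeit
y_train = np.array([1, 1, 0, 0])           # the engineers' annotations

model = LogisticRegression().fit(X_train, y_train)   # the training phase
joblib.dump(model, "bill_detector.joblib")           # package the trained model
```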

On to inference

Once the training is complete, inference can take over.

Still with our example of counterfeit detection, the AI model could now be uploaded to the cloud, then connected with thousands of point-of-sale (POS) devices in retail locations worldwide.

Retail workers would scan any bill they suspect might be fake. The machine learning model, in turn, would then assess the bill’s legitimacy.

This process of AI inference is autonomous. In other words, once the AI enters inference, it’s no longer getting help from engineers and app developers.

Using our example, during inference the AI system has reached the point where it can reliably discern both legal and counterfeit bills. And it can do so with a high enough success percentage to satisfy its human controllers.
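
Continuing the hypothetical sketch from the training section, inference is just the packaged model making predictions on new, unlabeled scans, with no engineers or annotations in the loop (this assumes the “bill_detector.joblib” file produced by the earlier sketch):

```python
# Toy inference example: load the model trained earlier and classify a scan.
import numpy as np
import joblib

model = joblib.load("bill_detector.joblib")     # deployed, already-trained model

scanned_bill = np.array([[0.42, 0.58, 0.04]])   # features from a suspect bill
prediction = model.predict(scanned_bill)[0]     # 1 = genuine, 0 = counterfeit

print("Genuine" if prediction == 1 else "Likely counterfeit")
```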

Different needs

AI training and inference also have different technology requirements. Basically, training is far more resource-intensive; the focus there is on brute-force computational throughput.

Training a large language model (LLM) chatbot like the popular ChatGPT often forces its underlying technology to contend with more than a trillion parameters. (An AI parameter is a variable learned by the LLM during training. These parameters include the settings and components that define the LLM’s behavior.)
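
To make “parameter” concrete, here’s a tiny, illustrative calculation (the layer sizes are invented) showing how quickly learned weights add up in even a single fully connected layer of a neural network:

```python
# Counting learned parameters in one dense (fully connected) layer.
# Sizes are invented; real LLM layers are far larger, and there are many of them.
inputs = 4096     # values feeding into the layer
outputs = 4096    # neurons in the layer

weights = inputs * outputs   # one learned weight per input/output pair
biases = outputs             # one learned bias per neuron
print(f"Parameters in this single layer: {weights + biases:,}")   # 16,781,312
```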

To meet these requirements, IT operations must deploy a system that can bring to bear raw computational power in a vast cluster.

By contrast, inference applications have different compute requirements. “Essentially, it’s, ‘I’ve trained my model, now I want to organize it,’” explained AMD executive VP and CTO Mark Papermaster in a recent virtual presentation.

AMD’s dual-processor solution

Inference workloads are smaller and less demanding than training workloads. Therefore, it makes sense to run them on more affordable GPU-CPU combination technology like the AMD Instinct MI300A.

The AMD Instinct MI300A is an accelerated processing unit (APU) that combines the capabilities of a standard AI accelerator with the efficiency of AMD EPYC processors. Both the CPU and GPU elements can share memory, dramatically enhancing efficiency, flexibility and programmability.

A single AMD MI300A APU packs 228 GPU compute units, 24 of AMD’s ‘Zen 4’ CPU cores, and 128GB of unified HBM3 memory. Compared with the previous-generation AMD MI250X accelerators, this translates to approximately 2.6x the workload performance per watt using FP32.

That’s a significant increase in performance. It’s likely to be repeated as AI infrastructure evolves along with the proliferation of AI applications that now power our world.

Research Roundup: Tech leaders’ time, GenAI for HR, network security in the cloud, dangerous dating sites

Catch up on the latest IT market research from MIT, Gartner, Dell’Oro Group and others. 


Tech leaders are spending more time with the channel. HR execs are getting serious about GenAI. Network security is moving to the cloud. And online dating sites can be dangerous.

That’s some of the latest and greatest from leading IT researchers. And here’s your Performance Intensive Computing roundup.

Tech leaders spend more time with the channel

C-level technology executives in 2022 spent 17% of their time working with external customers and channel partners, up from 10% of their time in 2007, according to a recent report from the MIT Center for Information Systems Research (CISR).

Good news, right? Well, conversely, these same tech leaders spent less time collaborating with their coworkers and working on their organizations’ technology stacks. Guess something had to give.

The report, published earlier this year, is the work of three MIT researchers. To compile the data, they reviewed surveys of CIOs, CTOs and CDOs conducted in 2007, 2016 and 2022.

Why the 7-percentage-point increase in time spent with external customers and channel partners? According to the MIT researchers, it’s the “growing number of digital touchpoints.”

HR execs using GenAI

Nearly 4 in 10 HR executives (38%) are now piloting, planning to implement, or have already implemented generative AI, finds research firm Gartner. That’s up sharply from just 19% of HR execs as recently as last June.

The results come from a quick Gartner survey. This past Jan. 31, the firm polled nearly 180 HR execs.

One of the survey’s key findings: “More organizations are moving from exploring how GenAI might be used…to implementing solutions,” says Dion Love, a VP in Gartner’s HR practice.

Gartner’s January survey also found 3 top use cases for GenAI in HR:

  • HR service delivery: Of those working with GenAI, over 4 in 10 (43%) are using the technology for employee-facing chatbots.
  • HR operations: Nearly as many (42%) are working with GenAI for administrative tasks, policies and generating documents.
  • Recruiting: About the same percentage (41%) are working with GenAI for job descriptions and skills data.

Yet all this work is not leading to many new GenAI-related job roles. Over two-thirds of the respondents (67%) said they do not plan to add any GenAI-related roles to the HR function over the next 12 months.

Network security moving to the cloud

Sales of SaaS-based and virtual network-security solutions surged last year by 26%, reaching a global total of $9.6 billion. By contrast, the overall network-security market shrank by 1%.

That’s according to a report from Dell’Oro Group. It calls the move to network-security solutions in the cloud a “pivotal shift.”

Dell’Oro senior director Mauricio Sanchez goes even further. He calls the industry’s gravitation toward SaaS and virtual solutions “nothing short of revolutionary.”

Also, nearly $5 billion of that $9.6 billion market was due to a 30% rise in spending on SSE networks, Dell’Oro says. SSE, short for Security Service Edge, incorporates various services—including network service brokering, identity service brokering, and security as a service—in a single package.

Looking for love online? Be careful

Nearly 7 out of 10 online daters have been scammed while using dating sites. Some of the victims lost money; others risked their personal security.

That’s according to a survey conducted for ID Shield. The survey was limited, reaching only about 270 people, but all respondents had used a dating app within the last 3 years.

The survey’s key findings:

  • Financial loss: Six in 10 scam victims on dating sites lost more than $10,000 to the crooks. And slightly more (64%) disclosed personal and financial information that was later used against them.
  • ID theft: Nearly 7 in 10 respondents were asked to verify their identity to someone on the dating app. And nearly two-thirds (65%) divulged their Social Security numbers. 
  • Repeat users: You might think the victims would learn. But 93% of users who were scammed once on a dating app say they continue to use the same app. Let’s hope they’re at least being careful.

 

AMD and Supermicro: Pioneering AI Solutions



Bringing AMD Instinct to the Forefront

In the constantly evolving landscape of AI and machine learning, the synergy between hardware and software is paramount. Enter AMD and Supermicro, two industry titans who have joined forces to empower organizations in the new world of AI with cutting-edge solutions. Their shared vision? To enable organizations to unlock the full potential of AI workloads, from training massive language models to accelerating complex simulations.

The AMD Instinct MI300 Series: Changing The AI Acceleration Paradigm

At the heart of this collaboration lies the AMD Instinct MI300 Series—a family of accelerators designed to redefine performance boundaries. These accelerators combine high-performance AMD EPYC™ 9004 series CPUs with the powerful AMD Instinct™ MI300X GPU accelerators and 192GB of HBM3 memory, creating a formidable force for AI, HPC, and technical computing.

Supermicro’s H13 Generation of GPU Servers

Supermicro’s H13 generation of GPU Servers serves as the canvas for this technological masterpiece. Optimized for leading-edge performance and efficiency, these servers integrate seamlessly with the AMD Instinct MI300 Series. Let’s explore the highlights:

8-GPU Systems for Large-Scale AI Training:

  • Supermicro’s 8-GPU servers, equipped with the AMD Instinct MI300X OAM accelerator, offer raw acceleration power. The AMD Infinity Fabric™ Links enable up to 896GB/s of peak theoretical P2P I/O bandwidth, while the 1.5TB of HBM3 GPU memory fuels large-scale AI models (a quick capacity check follows this list).
  • These servers are ideal for LLM inference and for training language models with trillions of parameters, minimizing training time and inference latency while lowering TCO and maximizing throughput.
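
Here’s that quick capacity check. The 1.5TB figure is simply the per-GPU HBM3 capacity cited earlier in this article multiplied across the eight accelerators:

```python
# Aggregate HBM3 capacity of an 8-GPU MI300X system, per the figures above.
GPUS_PER_SERVER = 8
HBM3_PER_GPU_GB = 192      # GB of HBM3 on each MI300X

total_gb = GPUS_PER_SERVER * HBM3_PER_GPU_GB
print(f"Aggregate HBM3: {total_gb} GB (about {total_gb / 1024:.1f} TB)")   # 1536 GB ≈ 1.5 TB
```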

Benchmarking Excellence

But what about real-world performance? Fear not! Supermicro’s ongoing testing and benchmarking efforts have yielded remarkable results. The continued engagement between the AMD and Supermicro performance teams enabled Supermicro to test pre-release ROCm versions with the latest performance optimizations, as well as publicly released optimizations like Flash Attention 2 and vLLM. The Supermicro AMD-based system AS -8125GS-TNMR2 showcases AI inference prowess, especially on models like Llama-2 70B, Llama-2 13B, and Bloom 176B. The performance? Equal to or better than AMD’s published results from the Dec. 6 Advancing AI event.


Charles Liang’s Vision

In the words of Charles Liang, President and CEO of Supermicro:

“We are very excited to expand our rack scale Total IT Solutions for AI training with the latest generation of AMD Instinct accelerators. Our proven architecture allows for fully integrated liquid cooling solutions, giving customers a competitive advantage.”

Conclusion

The AMD-Supermicro partnership isn’t just about hardware and software stacks; it’s about pushing boundaries, accelerating breakthroughs, and shaping the future of AI. So, as we raise our virtual glasses, let’s toast to innovation, collaboration, and the relentless pursuit of performance and excellence.

At MWC, Supermicro intros edge server, AMD demos tech advances

Learn what Supermicro and AMD showed at the big mobile world conference in Barcelona. 


This year’s MWC Barcelona, held Feb. 26 - 29, was a really big show. Over 101,000 people attended from 205 countries and territories. More than 2,700 organizations either exhibited, partnered or sponsored. And over 1,100 subject-matter experts made presentations.

Among those many exhibitors were Supermicro and AMD.

Supermicro showed off the company’s new AS -1115SV, a cost-optimized, single-AMD-processor server for the edge data center.

And AMD offered demos on AI engines, quantum-safe cryptography and more.

Supermicro AS -1115SV

Okay, Supermicro’s full SKU for this system is A+ Server AS -1115SV-WTNRT. That’s a mouthful, but the essence is simple: It’s a 1U short-depth server, powered by a single AMD processor, and designed for the edge data center.

The single CPU in question is an AMD EPYC 8004 Series processor with up to 64 cores. Memory maxes out at 576 GB of DDR5, and you also get 3 PCIe 5.0 x16 slots and up to 10 hot-swappable 2.5-inch drive bays.

The server’s intended applications include virtualization, firewall, edge computing, cloud services, and database/storage. Supermicro says the server’s high efficiency and low power envelope make it ideal for both telco and edge applications.

AMD’s MWC demos

AMD gave a slew of demos from its MWC booth. Here are three:

  • 5G advanced & AI integrated on the same device: Both 5G-advanced and future 6G wireless communication systems require intensive signal processing and novel AI algorithms to run on the same device and AI engine. AMD demo’d its AI Engines: power-efficient, general-purpose processors that can be programmed to address both signal-processing and AI requirements in future wireless systems.
  • High-performance quantum-safe cryptography: Quantum computing threatens the security of existing asymmetric (public-key) cryptographic algorithms. This demo showed some powerful alternatives running on AMD devices: Kyber, Dilithium and PQShield.
  • GreenRAN 5G on EPYC 8004 Series processors: GreenRAN is an open RAN (radio access network) solution from Parallel Wireless. It’s designed to operate seamlessly across various general-purpose CPUs—including, as this demo showed, the AMD EPYC 8004 family.

Supermicro Adds AI-Focused Systems to H13 JumpStart Program

Supermicro is now letting you validate, test and benchmark AI workloads on its AMD-based H13 systems right from your browser. 


Supermicro has added new AI-workload-optimized GPU systems to its popular H13 JumpStart program. This means you and your customers can validate, test and benchmark AI workloads on a Supermicro H13 system right from your PC’s browser.

The JumpStart program offers remote sessions to fully configured Supermicro systems with SSH, VNC and web IPMI access. These systems feature the latest AMD EPYC 9004 Series processors with up to 128 ‘Zen 4c’ cores per socket, DDR5 memory, PCIe 5.0 and support for CXL 1.1 peripherals.

In addition to previously available models, Supermicro has added the H13 4U GPU System with dual AMD EPYC 9334 processors and Nvidia L40S AI-focused universal GPUs. This H13 configuration is designed for heavy AI workloads, including applications that leverage machine learning (ML) and deep learning (DL).

3 simple steps

The engineers at Supermicro know the value of your customer’s time. So, they made it easy to initiate a session and get down to business. The process is as simple as 1, 2, 3:

  • Select a system: Go to the main H13 JumpStart page, then scroll down and click one of the red “Get Access” buttons to browse available systems. Then click “Select Access” to pick a date and time slot. On the next page, select the configuration and press “Schedule” and then “Confirm.”
  • Sign In: log in with a Supermicro SSO account to access the JumpStart program. If you or your customers don’t already have an account, creating a new account is both free and easy.
  • Initiate secure access: When the scheduled time arrives, begin the session by visiting the JumpStart page. Each server will include documentation and instructions to help you get started quickly.

So very secure

Security is built into the program. For instance, the server is not on a public IP address. Nor is it directly addressable to the Internet. Supermicro sets up the jump server as a proxy, and this provides access to only the server you or your customer are authorized to test.

And there’s more. After your JumpStart session ends, the server is manually secure-erased, the BIOS and firmware are re-flashed, and the OS is reinstalled with new credentials. That way, you can be sure any data you’ve sent to the H13 system will disappear once the session ends.

Supermicro is serious about its security policies. However, the company still warns users to keep sensitive data to themselves. The JumpStart program is meant for benchmarking, testing and validation only. In their words, “processing sensitive data on the demo server is expressly prohibited.”

Keep up with the times

Supermicro’s expertly designed H13 systems are at the core of the JumpStart program, with new models added regularly to address typical workloads.

In addition to the latest GPU systems, the program also features hardware focused on evolving data center roles. This includes the Supermicro H13 CloudDC system, an all-in-one rackmount platform for cloud data centers. Supermicro CloudDC systems include single AMD EPYC 9004 series processors and up to 10 hot-swap NVMe/SATA/SAS drives.

You can also initiate JumpStart sessions on Supermicro Hyper Servers. These multi-use machines are optimized for tasks including cloud, 5G core, edge, telecom and hyperconverged storage.

Supermicro Hyper Servers included in the company’s JumpStart program offer single or dual processor configurations featuring AMD EPYC 9004 processors and up to 8TB of DDR5 memory in a 1U or 2U form factor.

Helping your customers test and validate a Supermicro H13 system for AI is now easy. Just get a JumpStart.

Research Roundup: IT spending, data-center accelerators, GenAI for software testing, social-media usage

Get your roundup of the latest, greatest IT research. 


Global IT spending this year will increase by nearly 7%. Nearly half of data-center systems bought this year will be accelerators. Generative AI will soon automate 70% of all software tests. And 8 in 10 American adults use YouTube.

That’s some of the latest, greatest IT research. And here’s your Performance Intensive Computing roundup.

IT spending on the rise

IT spending worldwide will rise by nearly 7% this year over last year, predicts Gartner, for a 2024 total of $4.99 trillion. (Yes, the T is correct.)

The fastest-growing sector will be software. Gartner expects software spending worldwide to rise by nearly 13% this year, bringing total annual spending to slightly more than $1 trillion.

The second-fastest growth will come in data center systems, where Gartner predicts a spending rise this year of 7.5%, for a worldwide total of $261.3 billion.

The overall spending forecast of 6.8% is more than twice 2023’s spending increase of just 3.3%. Last year, CIOs experienced what Gartner calls “change fatigue.” That manifested itself in unsigned contracts and unformed tech partnerships.

What about generative AI? Gartner says the technology won’t impact IT spending significantly this year. Instead, organizations this year will mainly plan how they’ll use GenAI in the future.

Diving with ‘accelerators’ 

Spending on semiconductors used in data-center systems will enjoy a 5-year compound annual growth rate (CAGR) of 25%, reaching $286 billion in 2028, expects Dell’Oro Group.

Dell’Oro expects nearly half of that will go to ‘accelerators,’ most of them GPUs. In 2023, it adds, data-center accelerator revenue surpassed that of CPUs for the first time. Over the next 5 years, this gap will widen further.

“Ultimately,” says Dell’Oro senior research director Baron Fung, “this will enhance overall efficiency in data centers.”

GenAI for software testing

By 2028—just 4 years off—GenAI tools will be able to write 70% of all software tests, according to a forecast from IDC.

That will not only lower the need for manual testing, but also improve test coverage, software usability and code quality, IDC adds.

It’s a big deal. In IDC’s own survey of IT leaders in the Asia-Pacific region, nearly half the respondents (48%) said code review and testing is one of the most important tasks AI could help with.

To do this, a GenAI tool uses AI algorithms to generate and manage test scripts. This can also include creating test cases, testing procedures, and even self-healing of failed tests.

How Americans use social media

How popular is social media with Americans? Very.

More than 8 in 10 Americans (83%) say they’ve used YouTube, finds a recent Pew Research Center survey of over 5,730 U.S. adults.

Nearly 7 in 10 adults (68%) report they use Facebook, the survey finds. And nearly half (47%) say they use Instagram.

Other social media sites are less popular, but still are used by about a quarter to a third of U.S. adults, Pew says. These sites include LinkedIn, Pinterest, Reddit, TikTok, WhatsApp and X.

The fastest-growing social site among U.S. adults? That would be TikTok. In 2021, only about one in five Americans (21%) told Pew they were using the video site. Today that’s up to one in three (33%).

Age matters, too. While only 15% of those 65 and over use Instagram, the site is used by 78% of those aged 18 to 29, Pew finds.

Similarly, while 65% of Americans under the age of 30 use Snapchat, among those over 65, Snapchat is used by just 4%.

 
