Tech Explainer: What is the AMD “Zen” core architecture?


Originally launched in 2017, this CPU architecture now delivers high performance and efficiency with ever-thinner processes.


The recent release of AMD’s 5th gen EPYC processors—formerly codenamed Turin—also heralded the introduction of the company’s “Zen 5” core architecture.

“Zen” is AMD’s name for a design ethos that prioritizes performance, scalability and efficiency. As any CTO will tell you, these 3 aspects are crucial for success in today’s AI era.

AMD originally introduced its “Zen” architecture in 2017 as part of a broader campaign to steal market share and establish dominance in the all-important enterprise IT space.

Subsequent generations of the “Zen” design have markedly increased performance and efficiency while delivering ever-thinner manufacturing processes.

Now and Zen

Since the “Zen” core’s original appearance in AMD Ryzen 1000-series processors, the architecture’s design philosophy has maintained its focus on a handful of vital aspects. They include:

  • A modular design. AMD’s Infinity Fabric interconnect ties together multiple CPU cores, chiplets and other components efficiently. This modular architecture enhances scalability and performance, both of which are vital for modern enterprise IT infrastructure.
  • High core counts and multithreading. Both are common to EPYC and Ryzen CPUs built on the AMD “Zen” core architecture. Simultaneous multithreading (SMT) enables each core to process 2 threads (see the short sketch after this list). In the case of EPYC processors, this makes AMD’s CPUs ideal for multithreaded workloads that include Generative AI, machine learning, HPC and Big Data.
  • Advanced manufacturing processes. These allow faster, more efficient communication among individual CPU components, including multithreaded cores and multilevel caches. Back in 2017, the original “Zen” architecture was manufactured using a 14-nanometer (nm) process. Today’s new “Zen 5” and “Zen 5c” architectures (more on these below) reduce the lithography to just 4nm and 3nm, respectively.
  • Enhanced efficiency. This enables IT staff to better manage complex enterprise IT infrastructure. Reducing heat and power consumption is crucial, too, both in data centers and at the edge. The AMD “Zen” architecture makes this possible with enterprise-grade EPYC processors that pack up to 192 cores, yet require a maximum thermal design power (TDP) of only 500W.
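
To make the multithreading arithmetic concrete, here’s a minimal Python sketch. The core count comes from the EPYC specs cited in this article; the helper function itself is illustrative, not an AMD API.

  # Logical threads visible to the OS when simultaneous multithreading (SMT)
  # is enabled: each "Zen" core presents 2 hardware threads.
  def logical_threads(physical_cores: int, smt_enabled: bool = True) -> int:
      return physical_cores * (2 if smt_enabled else 1)

  print(logical_threads(192))                     # 384 threads on a 192-core EPYC 9965
  print(logical_threads(192, smt_enabled=False))  # 192 with SMT disabled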

The Two-Fold Path

The latest, fifth generation “Zen” architecture is divided into two segments: “Zen 5” and “Zen 5c.”

“Zen 5” employs a 4nm manufacturing process to deliver up to 128 cores operating at up to 4.1GHz. It’s optimized for high per-core performance.

“Zen 5c,” by contrast, offers a 3nm lithography that’s reserved for AMD EPYC 96xx, 97xx, 98xx, and 99xx series processors. It’s optimized for high density and power efficiency.

The most powerful of these CPUs—the AMD EPYC 9965—includes an astonishing 192 cores, a maximum boost clock speed of 3.7GHz, and an L3 cache of 384MB.

Both “Zen 5” and “Zen 5c” are key components of the 5th gen AMD EPYC processors introduced earlier this month. Both have also been designed to achieve double-digit increases in instructions per clock cycle (IPC) and equip the core with the kinds of data handling and processing power required by new AI workloads.

Supermicro’s Satori

AMD isn’t the only brand offering bold, new tech to harried enterprise IT managers.

Supermicro recently introduced its new H14 servers, GPU-accelerated systems and storage servers powered by AMD EPYC 9005 Series processors (the new “Turin” CPUs) and AMD Instinct MI325X accelerators.

The new product line features updated versions of Supermicro’s vaunted Hyper system, Twin multinode servers, and AI-inferencing GPU systems. All are now available with the user’s choice of either air or liquid cooling.

Supermicro says its collection of purpose-built powerhouses represents one of the industry’s most extensive server families. That should be welcome news for organizations intent on building a fleet of machines to meet the highly resource-intensive demands of modern AI workloads.

By designing its next-generation infrastructure around AMD 5th Generation components, Supermicro says it can dramatically increase efficiency by reducing customers’ total data-center footprints by at least two-thirds.

Enlightened IT for the AI Era

While AMD and Supermicro’s advances represent today’s cutting-edge technology, tomorrow is another story entirely.

Keeping up with customer demand and the dizzying pace of AI-based innovation means these tech giants will soon return with more announcements, tools and design methodologies. AMD has already promised a new accelerator, the AMD Instinct MI350, will be formally announced in the second half of 2025.

As far as enterprise CTOs are concerned, the sooner, the better. To survive and thrive amid heavy competition, they’ll need an evolving array of next-generation technology. That will help them cut costs even as they expand their product offerings—a kind of technological nirvana.


Do your customers need more room for AI? AMD has an answer


If your customers are looking to add AI to already-crowded, power-strapped data centers, AMD is here to help. 


How can your customers make room for AI in data centers that are already full?

It’s a question that’s far from academic. Nine in 10 tech vendors surveyed recently by the Uptime Institute expect AI to be widely used in data centers in the next 5 years.

Yet data center space is both hard to find and costly to rent. Vacancy rates have hit new lows, according to real-estate services firm CBRE Group.

Worse, this combination of supply shortages and high demand is driving up data center pricing and rents. Across North America, CBRE says, pricing is up by 20% year-on-year.

Getting enough electric power is an issue, too. Some utilities have told prospective data-center customers they won’t get the power they requested until the next decade, reports The Wall Street Journal. In other cases, strapped utilities are simply giving customers less power than they asked for.

So how to help your customers get their data centers ready for AI? AMD has some answers. And a free software tool to help.

The AMD Solution

AMD’s solution is simple, with just 2 points:

  • Make the most of existing data-center real estate and power by consolidating existing workloads.
  • Replace the low-density compute of older, inefficient and out-of-warranty systems with compute that’s newer, denser and more efficient.

AMD is making the case that your customers can do both by moving from older Intel-based systems to newer ones that are AMD-based.

For example, the company says, replacing servers based on Intel Xeon 6143 “Skylake” processors with those based on AMD EPYC 9334 CPUs can result in 73% fewer servers, 70% fewer racks and 69% less power.

That could include Supermicro servers powered by AMD EPYC processors. Supermicro H13 servers using AMD EPYC 9004 Series processors offer capabilities for high-performance data centers.

AMD hasn’t yet done comparisons with either its new 5th gen EPYC processors (introduced last week) or Intel’s 86xx CPUs. But the company says the results should be similar.

Consolidating processor-based servers can also make room in your customers’ racks for AMD Instinct MI300 Series accelerators designed specifically for AI and HPC workloads.

For example, if your customer has older servers based on Intel Xeon Cascade Lake processors, migrating them to servers based on AMD EPYC 9754 processors instead can gain them as much as a 5-to-1 consolidation.

The result? Enough power and room to accommodate a new AI platform.
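
For a rough sense of the arithmetic, here’s a short Python sketch using the 5-to-1 consolidation ratio cited above. The fleet size is hypothetical and the helper is illustrative only; AMD publishes its own sizing comparisons for real estimates.

  import math

  # Servers needed after an N-to-1 consolidation, rounded up.
  def consolidated(legacy_servers: int, ratio: float) -> int:
      return math.ceil(legacy_servers / ratio)

  legacy = 500                        # hypothetical Cascade Lake fleet
  modern = consolidated(legacy, 5.0)  # 5-to-1 ratio cited above -> 100 servers
  print(f"{modern} new servers; {legacy - modern} slots freed for AI gear")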

Questions Answered

Simple doesn’t always mean easy. And you and your customers may have concerns.

For example, isn’t switching from one vendor to another difficult?

No, says AMD. The company cross-licenses the x86 instruction set, so most workloads and applications will just work on its processors.

What about all those cores on AMD processors? Won’t they raise a customer’s failure domain too high?

No, says AMD. Its CPUs are scalable enough to handle any failure domain from 8 to 256 cores per server.

Wouldn’t moving require a cold migration? And if so, wouldn’t that disrupt the customer’s business?

Again, AMD says no. While moving virtual machines (VMs) to a new architecture does require a cold migration, the job can be done without any application downtime.

That’s especially true if you use AMD’s free open-source tool known as VAMT, short for VMware Architecture Migration Tool. VAMT automates cold migration. In one AMD test, it migrated hundreds of VMs in just an hour.

So if your customers are among those struggling to find room for AI systems in their already-crowded and power-strapped data centers, tell them to consider a move to AMD.


The AMD Instinct MI300X Accelerator draws top marks from leading AI benchmark


In the latest MLPerf testing, the AMD Instinct MI300X Accelerator with ROCm software stack beat the competition with strong GenAI inference performance. 


New benchmarks using the AMD Instinct MI300X Accelerator show impressive performance that surpasses the competition.

This is great news for customers operating demanding AI workloads, especially those underpinned by large language models (LLMs) that require super-low latency.

Initial platform tests using MLPerf Inference v4.1 measured AMD’s flagship accelerator against the Llama 2 70B benchmark. This test serves as a proxy for real-world applications, including natural language processing (NLP) and large-scale inferencing.

MLPerf is the industry’s leading benchmarking suite for measuring the performance of machine learning and AI workloads from domains that include vision, speech and NLP. It offers a set of open-source AI benchmarks, including rigorous tests focused on Generative AI and LLMs.

Gaining high marks from the MLPerf Inference benchmarking suite represents a significant milestone for AMD. It positions the AMD Instinct MI300X accelerator as a go-to solution for enterprise-level AI workloads.

Superior Instincts

The results of the Llama 2 70B test are particularly significant. That’s due to the benchmark’s ability to produce an apples-to-apples comparison of competitive solutions.

In this benchmark, the AMD Instinct MI300X was compared with NVIDIA’s H100 Tensor Core GPU. The test concluded that AMD’s full-stack inference platform beat the H100 at delivering high-performance LLM inference, a workload that requires both robust parallel computing and a well-optimized software stack.

The testing also showed that because the AMD Instinct MI300X offers the largest GPU memory available—192GB of HBM3 memory—it was able to fit the entire Llama 2 70B model into memory. Doing so avoided the network overhead that comes with splitting a model across multiple GPUs, which in turn maximized inference throughput.
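
A quick back-of-envelope calculation shows why the model fits. Assuming 16-bit (2-byte) weights and ignoring activation and KV-cache overhead, the weights alone come to roughly 140GB:

  # Can the model's weights fit in a single GPU's memory?
  params = 70e9            # Llama 2 70B parameters
  bytes_per_param = 2      # FP16/BF16 weights (assumed precision)
  weights_gb = params * bytes_per_param / 1e9   # 140 GB
  hbm3_gb = 192            # AMD Instinct MI300X memory capacity
  print(f"{weights_gb:.0f} GB of weights vs {hbm3_gb} GB of HBM3")
  print("fits on one GPU" if weights_gb < hbm3_gb else "must be split")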

Software also played a big part in the success of the AMD Instinct series. The AMD ROCm software platform accompanies the AMD Instinct MI300X. This open software stack includes programming models, tools, compilers, libraries and runtimes for AI solution development on the AMD Instinct MI300 accelerator series and other AMD GPUs.

The testing showed that the scaling efficiency from a single AMD Instinct MI300X, combined with the ROCm software stack, to a complement of eight AMD Instinct accelerators was nearly linear. In other words, the system’s performance improved proportionally by adding more GPUs.

That test demonstrated the AMD Instinct MI300X’s ability to handle the largest MLPerf inference models to date, containing over 70 billion parameters.
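
In code, scaling efficiency is simply measured throughput divided by ideal linear throughput. The sample numbers below are invented purely to illustrate what “nearly linear” means; they are not MLPerf results.

  # Scaling efficiency: measured N-GPU throughput vs. N times 1-GPU throughput.
  def scaling_efficiency(t_one_gpu: float, t_n_gpu: float, n: int) -> float:
      return t_n_gpu / (n * t_one_gpu)

  one_gpu = 2_500     # tokens/sec on one accelerator (hypothetical)
  eight_gpu = 19_400  # tokens/sec on eight accelerators (hypothetical)
  print(f"{scaling_efficiency(one_gpu, eight_gpu, 8):.0%}")  # ~97% of ideal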

Thinking Inside the Box

Benchmarking the AMD Instinct MI300X required AMD to create a complete hardware platform capable of addressing strenuous AI workloads. For this task, AMD engineers chose as their testbed the Supermicro AS-8125GS-TNMR2, a massive 8U complete system.

Supermicro’s GPU A+ Server systems are designed for both versatility and redundancy. Designers can outfit the system with an impressive array of hardware, starting with two AMD EPYC 9004-series processors and up to 6TB of ECC DDR5 main memory.

Because AI workloads consume massive amounts of storage, Supermicro has also outfitted this 8U server with 12 front hot-swap 2.5-inch NVMe drive bays. There’s also the option to add four more drives via an additional storage controller.

The Supermicro AS-8125GS-TNMR2 also includes room for two hot-swap 2.5-inch SATA bays and two M.2 drives, each with a capacity of up to 3.84TB.

Power for all those components is delivered courtesy of six 3,000-watt redundant Titanium-level power supplies.

Coming Soon: Even More AI Power

AMD engineers continually push the limits of silicon and human ingenuity to expand the capabilities of their hardware. So it should come as little surprise that new iterations of the AMD Instinct series are expected to be released in the coming months. This past May, AMD officials said they plan to introduce AMD Instinct MI325, MI350 and MI400 accelerators.

Forthcoming Instinct accelerators, AMD says, will deliver advances including additional memory, support for lower-precision data types, and increased compute power.

New features are also coming to the AMD ROCm software stack. Those changes should include kernel improvements and advanced quantization support.

Are your customers looking for a high-powered, low-latency system to run their most demanding HPC and AI workloads? Tell them about these benchmarks and the AMD Instinct MI300X accelerators.


Why Lamini offers LLM tuning software on Supermicro servers powered by AMD processors


Lamini, provider of an LLM platform for developers, turns to Supermicro’s high-performance servers powered by AMD CPUs and GPUs to run its new Memory Tuning stack.


Generative AI systems powered by large language models (LLMs) have a serious problem: Their answers can be inaccurate—and sometimes, in the case of AI “hallucinations,” even fictional.

For users, the challenge is equally serious: How do you get precise factual accuracy—that is, correct answers with zero hallucinations—while upholding the generalization capabilities that make LLMs so valuable?

A California-based company, Lamini, has come up with an innovative solution. And its software stack runs on Supermicro servers powered by AMD CPUs and GPUs.

Why Hallucinations Happen

Here’s the premise underlying Lamini’s solution: Hallucinations happen because the right answer is clustered with other, incorrect answers. As a result, the model doesn’t know that a nearly right answer is in fact wrong.

To address this issue, Lamini’s Memory Tuning solution teaches the model that getting the answer nearly right is the same as getting it completely wrong. Its software does this by tuning literally millions of expert adapters with precise facts on top of any open-source LLM, such as Llama 3 or Mistral 3.

The Lamini model retrieves only the most relevant experts from an index at inference time. The goal is high accuracy, high speed and low cost.

More than Fine-Tuning

Isn’t this just LLM fine-tuning? Lamini says no, its Memory Tuning is fundamentally different.

Fine-tuning can’t ensure that a model’s answers are faithful to the facts in its training data. By contrast, Lamini says, its solution has been designed to deliver output probabilities that are not just close, but exactly right.

More specifically, Lamini promises its solution can deliver 95% LLM accuracy with 10x fewer hallucinations.

In the real world, Lamini says one large customer used its solution and raised LLM accuracy from 50% to 95%, and reduced the rate of AI hallucinations from an unreliable 50% to just 5%.
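
The “10x” figure follows directly from those two rates:

  # The arithmetic behind "10x fewer hallucinations," using the rates above:
  before, after = 0.50, 0.05
  print(before / after)   # 10.0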

Investors are certainly impressed. Earlier this year Lamini raised $25 million from an investment group that included Amplify Partners, Bernard Arnault and AMD Ventures. Lamini plans to use the funding to accelerate its expert AI development and expand its cloud infrastructure.

Supermicro Solution

As part of its push to offer superior LLM tuning, Lamini chose Supermicro’s GPU server — model number AS-8125S-TNMR2 — to train LLM models in a reasonable time.

This Supermicro 8U system is powered by dual AMD EPYC 9000 series CPUs and eight AMD Instinct MI300X GPUs.

The GPUs connect with CPUs via a standard PCIe 5 bus. This gives fast access when the CPU issues commands or sends data from host memory to the GPUs.

Lamini has also benefited from Supermicro’s capacity and quick delivery schedule. With other GPU makers facing serious capacity issues, that’s an important advantage for both Lamini and its customers.

“We’re thrilled to be working with Supermicro,” says Lamini co-founder and CEO Sharon Zhou.

Could your customers be thrilled by Lamini, too? Check out the “do more” links below.


Why CSPs Need Hyperscaling


Today’s cloud service providers need IT infrastructures that can scale like never before.


Hyperscaling IT infrastructure may be one of the toughest challenges facing cloud service providers (CSPs) today.

The term hyperscale refers to an IT architecture’s ability to scale in response to increased demand.

Hyperscaling is tricky, in large part because demand is a constantly moving target. Without much warning, a data center’s IT demand can increase exponentially due to myriad factors.

That could mean a public emergency, the failure of another CSP’s infrastructure, or simply the rampant proliferation of data—a common feature of today’s AI environment.

To meet this growing demand, CSPs have a lot to manage. That includes storage measured in exabytes, AI workloads of massive complexity, and whatever hardware is needed to keep system uptime as close to 100% as possible.

The hardware alone can be a real challenge. CSPs now oversee both air and liquid cooling systems, redundant power sources, diverse networking gear, and miles of copper and fiber-optic cabling. It’s a real handful.

Design with CSPs in Mind

To help CSPs cope with this seemingly overwhelming complexity, Supermicro offers purpose-built hardware designed to tackle the world’s most demanding workloads.

Enterprise-class servers like Supermicro’s H13 and A+ server series offer CSPs powerful platforms built to handle the rigors of resource-intensive AI workloads. They’ve been designed to scale quickly and efficiently as demand and data inevitably increase.

Take the Supermicro GrandTwin. This innovative solution puts the power and flexibility of multiple independent servers in a single enclosure.

The design helps lower operating expenses by enabling shared resources, including a space-saving 2U enclosure, heavy-duty cooling system, backplane and N+1 power supplies.

To help CSPs tackle the world’s most demanding AI workloads, Supermicro offers GPU server systems. These include a massive—and massively powerful—8U eight-GPU server.

Supermicro H13 GPU servers are powered by 4th-generation AMD EPYC processors. These cutting-edge chips are engineered to help high-end applications perform better and return faster.

To make good on those lofty promises, AMD included more and faster cores, higher bandwidth to GPUs and other devices, and the ability to address vast amounts of memory.

Theory Put to Practice

Capable and reliable hardware is a vital component for every modern CSP, but it’s not the only one. IT infrastructure architects must consider not just their present data center requirements but how to build a bridge to the requirements they’ll face tomorrow.

To help build that bridge, Supermicro offers an invaluable list: 10 essential steps for scaling the CSP data center.

A few highlights include:

  • Standardize and scale: Supermicro suggests CSPs standardize around a preferred configuration that offers the best compute, storage and networking capabilities.
  • Plan ahead for support: To operate a sophisticated data center 24/7 is to embrace the inevitability of technical issues. IT managers can minimize disruption and downtime when something goes wrong by choosing a support partner who can solve problems quickly and efficiently.
  • Simplify your supply chain: Hyperscaling means maintaining the ability to move new infrastructure into place fast and without disruption. CSPs can stack the odds in their favor by choosing a partner that is ever ready to deliver solutions that are integrated, validated, and ready to work on day one.

Do More:

Hyperscaling for CSPs will be the focus of a session at the upcoming Supermicro Open Storage Summit ’24, which streams live Aug. 13 to Aug. 29.

The CSP session, set for Aug. 20, will cover the ways in which CSPs can seamlessly scale their AI operations across thousands of GPUs while ensuring industry-leading reliability, security and compliance capabilities. The speakers will feature representatives from Supermicro, AMD, Vast Data and Solidigm.

Learn more and register now to attend the 2024 Supermicro Open Storage Summit.

 


You’re invited to attend the Supermicro Open Storage Summit ’24


Join this free online event being held August 13 – 29.


Into storage? Then learn about the latest storage innovations at the Supermicro Open Storage Summit ’24. It’s an online event happening over three weeks, August 13 – 29. And it’s free to attend.

The theme of this year’s summit is “enabling software-defined storage from enterprise to AI.” Sessions are aimed at anyone involved with data storage, whether you’re a CIO, IT support professional, or anything in between.

The Supermicro Open Storage Summit ’24 will bring together executives and technical experts from the entire software-defined storage ecosystem. They’ll talk about the latest developments enabling storage solutions.

Each session will feature Supermicro product experts along with leaders from both hardware and software suppliers. Together, these companies give a boost to the software-defined storage solution ecosystem.

Seven Sessions

This year’s Open Storage Summit will feature seven sessions. They’ll cover topics and use cases that include storage for AI, CXL, storage architectures and much more.

Hosting and moderating duties will be filled by Rob Strechay, managing director and principal analyst at theCUBE Research. His company provides IT leaders with competitive intelligence, market analysis and trend tracking.

All the Storage Summit sessions will start at 10 a.m. PDT / 1 p.m. EDT and run for 45 minutes. All sessions will also be available for on-demand viewing later. But by attending a live session, you’ll be able to participate in the X-powered Q&A with the speakers.

What’s On Tap

What can you expect? To give you an idea, here are a few of the scheduled sessions:

Aug. 14: AI and the Future of Media Storage Workflows: Innovations for the Entertainment Industry

Whether it’s movies, TV or corporate videos, the post-production process, including editing, special effects, coloring and distribution, requires both high-performance and large-capacity storage solutions. In this session, Supermicro, Quantum, AMD and Western Digital will discuss how primary and secondary storage is optimized for post-production workflows.

Aug. 20: Hyperscale AI: Secure Data Services for CSPs

Cloud services providers must seamlessly scale their AI operations across thousands of GPUs, while ensuring industry-leading reliability, security, and compliance capabilities. Speakers from Supermicro, AMD, VAST Data, and Solidigm will explain how CSPs can deploy AI models at an unprecedented scale with confidence and security.

There’s a whole lot more, too. Learn more about the Supermicro Open Storage Summit ’24 and register to attend now.

 


Tech Explainer: What is multi-tenant storage?


Similar to the way an apartment building lets tenants share heat, hot water and other services, multitenancy lets users share storage resources for fast development and low costs.


Multi-tenant storage—also referred to as multitenancy—helps organizations develop applications faster and more efficiently.

It does this by enabling multiple users to both share the resources of a centralized storage architecture and customize their storage environments without affecting the others.

You can think of multi-tenant storage as being like an apartment building. The building’s tenants share a common infrastructure and related services, such as heat, hot water and electricity. Yet each tenant can also set up their individual apartment to suit their unique needs.

When it comes to data storage, leveraging a multi-tenant approach also helps lower each user’s overhead costs. It does this by distributing maintenance fees across all users. Also, tenants can share applications, security features and other infrastructure.

Multitenancy for Cloud, SaaS, AI

Chances are, your customers are already using multi-tenant storage architecture to their advantage. Public cloud platforms such as Microsoft Azure, Amazon Web Services and Google Cloud all serve multiple tenants from a shared infrastructure.

Popular SaaS providers including Dropbox also employ multitenancy to offer millions of customers a unique experience based on a common user interface. Each user’s data store is available to them only, despite its being kept in a common data warehouse.

AI-related workloads will become increasingly common in multi-tenant environments, too. That includes the use of large language models (LLMs) to enable Generative AI. Also, certain AI and ML workloads may be more effective in situations in which they feed—and are fed by—multiple tenants.

In addition, all users in a multitenancy environment can contribute data for AI training, which requires enormous quantities of data. And because each tenant creates a unique data set, this process may offer a wider array of training data more efficiently compared to a single source.

What’s more, data flowing in the other direction—from the AI model to each tenant—also increases efficiency. By sharing a common AI application, tenants gain access to a larger, more sophisticated resource than they would with single tenancy.
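
Here’s a minimal Python sketch of that apartment-building model: one shared pool of capacity, with each tenant walled off behind its own namespace and quota. This is purely conceptual, not any vendor’s API.

  # Conceptual multi-tenant storage pool: shared capacity, isolated tenants.
  class SharedStoragePool:
      def __init__(self, capacity_tb: float):
          self.capacity_tb = capacity_tb
          self.tenants = {}  # tenant name -> {"quota": TB, "used": TB}

      def add_tenant(self, name: str, quota_tb: float) -> None:
          allocated = sum(t["quota"] for t in self.tenants.values())
          if allocated + quota_tb > self.capacity_tb:
              raise ValueError("pool would be oversubscribed")
          self.tenants[name] = {"quota": quota_tb, "used": 0.0}

      def write(self, name: str, size_tb: float) -> None:
          tenant = self.tenants[name]  # each tenant sees only its own space
          if tenant["used"] + size_tb > tenant["quota"]:
              raise ValueError(f"{name} exceeded its quota")
          tenant["used"] += size_tb

  pool = SharedStoragePool(capacity_tb=100.0)
  pool.add_tenant("team-a", quota_tb=40.0)
  pool.add_tenant("team-b", quota_tb=40.0)
  pool.write("team-a", 2.5)   # isolated from team-b's usage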

Choosing the Right Solution

Whether your customers opt for single tenant, multi-tenant or a combination of the two, they must deploy hardware that can withstand rigorous workloads.

Supermicro’s ASG-1115S-NE3X12R storage server is just such a solution. This system offers eight front hot-swap E3.S 1T PCIe 5.0 x4 NVMe drive bays; four front fixed E3.S 2T PCIe 5.0 x8 CXL Type 3 drive bays; and two M.2 NVMe slots.

Processing gets handled by a single AMD EPYC 9004-series CPU. It offers up to 128 cores and 6TB of ECC DDR5 main memory.

Considering the Supermicro storage server’s 12 drives, eight heavy-duty fans and 1600W redundant Titanium Level power supply, you might assume that it takes up a lot of rack space. But no. Astonishingly, the entire system is housed in a single 1U chassis.


What you need to know about high-performance storage for media & entertainment


To store, process and share their terabytes of data, media and entertainment content creators need more than your usual storage.


Maintaining fast, efficient and reliable data storage in the age of modern media and entertainment is an increasingly difficult challenge.

Content creators ranging from independent filmmakers to major studios like Netflix and Amazon are churning out enormous amounts of TV shows, movies, video games, and augmented and virtual reality (AR/VR) experiences. Each piece of content must be stored in a way that ensures it’s easy to access, ready to share and fast enough to stream.

This becomes a monumental task when you’re dealing with petabytes of high-resolution footage and graphics. Operating at that scale can overwhelm even the most seasoned professionals.

Those pros must also ensure they have both primary and secondary storage. Primary storage is designed to deliver rapid data retrieval speeds. Secondary storage, on the other hand, provides slower access times and is used for long-term storage.

Seemingly Insurmountable Odds

For media and entertainment production companies, the goal is always the same: speed production and cut costs. That’s why fast, efficient and reliable data storage solutions have become a vital necessity for those who want to survive and thrive in the modern age of media and entertainment.

The amount of data created in a single media project can be staggering.

Each new project uses one or more cameras producing footage with a resolution as high as 8K. And content captured at 8K has 16 times more pixels per frame than traditional HD video. That translates to around 1 terabyte of data for every 1.5 to 2 hours of footage.

For large-scale productions, shooting can continue for weeks, even months. At roughly a terabyte for every 2 hours of shooting, that footage quickly adds up, creating a major data-storage headache.
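
Here’s that estimate as a few lines of Python. The shooting schedule is hypothetical; the per-hour figure is the conservative end of the range cited above.

  # Rough storage estimate for an 8K production.
  hours_per_day = 8          # hypothetical shooting schedule
  shoot_days = 30
  tb_per_hour = 1 / 1.5      # ~1 TB per 1.5 hours of 8K footage (cited above)
  total_tb = hours_per_day * shoot_days * tb_per_hour
  print(f"~{total_tb:.0f} TB of raw footage")   # ~160 TB, before effects or AR/VR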

But wait, there’s more: Your customer’s projects may also include both AR and VR data. High-quality AR/VR can contain hundreds of effects, textures and 3D models, producing data that measures not just in terabytes but petabytes.

Further complicating matters, AR/VR data often requires real-time processing, low-latency transfer and multiuser access.

Deploying AI adds yet another dimension. Generative AI (GenAI) now has the ability to create stunning additions to any multimedia project. These may include animated backgrounds, special effects and even virtual actors.

However, AI accounts for some of the most resource-intensive workloads in the world. To meet these stringent demands, not just any storage solution will do.

Extreme Performance Required

For media and entertainment content creators, choosing the right storage solution can be a make-or-break decision. Production companies generating data at the highest rates should opt for something like the Supermicro H13 Petascale storage server.

The H13 Petascale storage server boasts extreme performance for data-intensive applications. For major content producers, that means high-resolution media editing, AR and VR creation, special effects and the like.

The H13 Petascale storage server is also designed to handle some of the tech industry’s most demanding workloads. These include AI and machine learning (ML) applications, geophysical modeling and big data.

Supermicro’s H13 Petascale storage server delivers up to 480 terabytes of high-performance storage via 16 hot-swap all-flash drives. The system is built around Enterprise and Datacenter Standard Form Factor (EDSFF) E3 NVMe storage to provide high-capacity scaling. The 2U Petascale version doubles the storage bays and capacity.

Operating on the EDSFF standard also offers better performance with PCIe 5 connectivity and improved thermal efficiency.

Under the hood of this storage beast is a 4th generation AMD EPYC processor with up to 128 cores and 6TB of DDR5 memory. Combined with 128 lanes of PCIe 5 bandwidth, the H13 delivers more than 200GB/sec. of bandwidth and more than 25 million input/output operations per second (IOPS).
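
As a sanity check, that bandwidth figure is consistent with what 16 PCIe 5.0 x4 drives can theoretically deliver. The per-lane throughput below is the standard PCIe 5.0 figure; the rest is simple multiplication.

  # Theoretical aggregate bandwidth across the 16 hot-swap NVMe drives.
  gb_per_lane = 3.94        # PCIe 5.0: ~32 GT/s per lane after encoding overhead
  lanes_per_drive = 4       # E3.S x4 NVMe
  drives = 16
  print(f"~{gb_per_lane * lanes_per_drive * drives:.0f} GB/s peak")  # ~252 GB/s

The 200GB/sec. the server actually delivers sits comfortably under that theoretical ceiling.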

Data’s Golden Age

Storing, sending and streaming massive amounts of data will continue to be a challenge for the media and entertainment industry.

Emerging formats will push the boundaries of resolution. New computer-aided graphics systems will become the industry standard. And consumers will continue to demand fully immersive AR and VR experiences.

Each of these evolutions will produce more and more data, forcing content creators to search for faster and more cost-effective storage methods.

Note: The media and entertainment industry will be the focus of a special session at the upcoming Supermicro Open Storage Summit ’24, streaming live from Aug. 13 to Aug. 29. The M&E session, scheduled for Aug. 14 at 10 a.m. PDT / 1 p.m. EDT, will focus on AI and the future of media storage workflows. The speakers will represent Supermicro, AMD, Quantum and Western Digital. Learn more and register now to attend the 2024 Supermicro Open Storage Summit.


Research Roundup: AI boosts project management & supply chains, HR woes, SMB supplier overload


Catch up on the latest IT market intelligence from leading researchers.


Artificial intelligence is boosting both project management and supply chains. Cybersecurity spending is on a tear. And small and midsize businesses are struggling with more suppliers than employees.

That’s some of the latest IT intelligence from leading industry watchers. And here’s your research roundup.

AI for PM 

What’s artificial intelligence good for? One area is project management.

In a new survey, nearly two-thirds of project managers (63%) reported improved productivity and efficiency with AI integration.

The survey was conducted by Capterra, an online marketplace for software and services. As part of a larger survey, the company polled 2,500 project managers in 12 countries.

Nearly half the respondents (46%) said they use AI in their project management tools. Capterra then dug in deeper with this second group—totaling 1,153 project managers—to learn what kinds of benefits they’re enjoying with AI.

Among the findings:

  • Over half the AI-using project managers (54%) said they use the technology for risk management. That’s the top use case reported.
  • Project managers plan to increase their AI spending by an average of 36%.
  • Nine in 10 project managers (90%) said their AI investments earned a positive return in the last 12 months.
  • Improved productivity as a result of using AI was reported by nearly two-thirds of the respondents (63%).
  • Looking ahead, respondents expect the areas of greatest impact from AI to be task automation, predictive analytics and project planning.

AI for Supply Chains, Too

A new report from consulting firm Accenture finds that the most mature supply chains are 23% more profitable than others. These supply-chain leaders are also six times more likely than others to use AI and Generative AI widely.

To figure this out, Accenture analyzed nearly 1,150 companies in 15 countries and 10 industries. Accenture then identified the 10% of companies that scored highest on its supply-chain maturity scale.

This scale was based on the degree to which an organization uses GenAI, advanced machine learning and other new technologies for autonomous decision-making, advanced simulations and continuous improvement. The more an organization does this, the higher its score.

Accenture also found that supply-chain leaders achieved an average profit margin of 11.8%, compared with an average margin of 9.6% among the others. (That’s the 23% profit gain mentioned earlier.) The leaders also delivered 15% better returns to shareholders: 8.5% vs. 7.4% for others.
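
The “23% more profitable” figure is simply the relative gap between those two margins:

  # Relative profitability gap between supply-chain leaders and the rest:
  leaders, others = 11.8, 9.6                   # average profit margins, percent
  print(f"{(leaders - others) / others:.0%}")   # ~23%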

HR: Help Wanted 

If solving customer pain points is high on your agenda—and it should be—then here’s a new pain point to consider: Fewer than 1 in 4 human resources (HR) functions say they’re getting full business value from their HR technology.

In other words, something like 75% of HR executives could use some IT help. That’s a lot of business.

The assessment comes from research and analysis firm Gartner, based on its survey of 85 HR leaders conducted earlier this year. Among Gartner’s findings:

  • Only about 1 in 3 HR executives (35%) feel confident that their approach to HR technology helps to achieve their organization’s business objectives.
  • Two out of three HR executives believe their HR function’s effectiveness will be hurt if they don’t improve their technology.

Employees are unhappy with HR technology, too. Earlier this year, Gartner also surveyed more than 1,200 employees. Nearly 7 in 10 reported experiencing at least one barrier when interacting with HR technology over the previous 12 months.

Cybersecurity’s Big Spend

Looking for a growth market? Don’t overlook cybersecurity.

Last year, worldwide spending on cybersecurity products totaled $106.8 billion. That’s a lot of money. But even better, it marked a 15% increase over the previous year’s spending, according to market watcher IDC.

Looking ahead, IDC expects this double-digit growth rate to continue for at least the next five years. By 2028, IDC predicts, worldwide spending on cybersecurity products will reach $200 billion—nearly double what was spent in 2023.
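
That forecast implies steady double-digit growth, as a quick compound-growth calculation confirms:

  # Implied compound annual growth rate (CAGR) of IDC's forecast, 2023-2028:
  spend_2023, spend_2028 = 106.8, 200.0   # $ billions
  cagr = (spend_2028 / spend_2023) ** (1 / 5) - 1
  print(f"{cagr:.1%}")                    # ~13.4% per year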

By category, the biggest cybersecurity spending last year went to network security: $27.4 billion. After that came endpoint security ($21.6 billion last year) and security analytics ($20 billion), IDC says.

Why such strong spending? In part because cybersecurity is now a board-level topic.

“Cyber risk,” says Frank Dickson, head of IDC’s security and trust research, “is business risk.”

SMBs: Too Many Suppliers

It’s not easy standing out as a supplier to small and midsize business customers. A new survey finds the average SMB has nine times more suppliers than it does employees—and actually uses only about 1 in 4 of those suppliers.

The survey, conducted by spend-management system supplier Spendesk, focused on customers in Europe. (Which makes sense, as Spendesk is headquartered in Paris.) Spendesk examined 4.7 million suppliers used by a sample of its 5,000 customers in the UK, France, Germany and Spain.

Keeping many suppliers while using only a few of them? That’s not only inefficient, but also costly. Spendesk estimates that its SMB customers could be collectively losing some $1.24 billion in wasted time and management costs.

And there’s more at stake, too. A recent study by management consultants McKinsey & Co. finds that small and midsize organizations—those with anywhere from 1 to 200 employees—are actually big business.

By McKinsey’s reckoning, SMBs account for more than 90% of all businesses by number … roughly half the global GDP … and more than two-thirds of all business jobs.

Fun fact: Nearly 1 in 5 of the largest businesses originally started as small businesses.


HBM: Your memory solution for AI & HPC


High-bandwidth memory shortens the information commute to keep pace with today’s powerful GPUs.


As AI powered by GPUs transforms computing, conventional DDR memory can’t keep up.

The solution? High-bandwidth memory (HBM).

HBM is memory chip technology that essentially shortens the information commute. It does this using ultra-wide communication lanes.

An HBM device contains vertically stacked memory chips. They’re interconnected by microscopic wires known as through-silicon vias, or TSVs for short.

HBM also provides more bandwidth per watt. And, with a smaller footprint, the technology can also save valuable data-center space.

Here’s how: A single HBM stack can contain up to eight DRAM modules, with each module connected by two channels. This makes an HBM implementation of just four chips roughly equivalent to 30 DDR modules, and in a fraction of the space.

All this makes HBM ideal for workloads that utilize AI and machine learning, HPC, advanced graphics and data analytics.

Latest & Greatest

The latest iteration, HBM3, was introduced in 2022, and it’s now finding wide application in market-ready systems.

Compared with the previous version, HBM3 adds several enhancements (tallied in the short sketch after this list):

  • Higher bandwidth: Up to 819 GB/sec., up from HBM2’s max of 460 GB/sec.
  • More memory capacity: 24GB per stack, up from HBM2’s 8GB
  • Improved power efficiency: Delivering more data throughput per watt
  • Reduced form factor: Thanks to a more compact design
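
Here’s how those generation-over-generation gains tally up, using the per-stack figures above:

  # HBM3 vs. HBM2, per stack:
  hbm2_bw, hbm3_bw = 460, 819       # GB/sec of bandwidth
  hbm2_cap, hbm3_cap = 8, 24        # GB of capacity
  print(f"bandwidth: {hbm3_bw / hbm2_bw:.2f}x")   # ~1.78x
  print(f"capacity:  {hbm3_cap // hbm2_cap}x")    # 3x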

However, it’s not all sunshine and rainbows. For one, HBM-equipped systems are more expensive than those fitted out with traditional memory solutions.

Also, HBM stacks generate considerable heat. Advanced cooling systems are often needed, adding further complexity and cost.

Compatibility is yet another challenge. Systems must be designed or adapted to HBM3’s unique interface and form factor.

In the Market

As mentioned above, HBM3 is showing up in new products. That includes both the AMD Instinct MI300A and MI300X series accelerators.

The AMD Instinct MI300A accelerator combines a CPU and GPU for running HPC/AI workloads. It offers HBM3 as the dedicated memory with a unified capacity of up to 128GB.

Similarly, the AMD Instinct MI300X is a GPU-only accelerator designed for low-latency AI processing. It contains HBM3 as the dedicated memory, but with a higher capacity of up to 192GB.

For both of these AMD Instinct MI300 accelerators, the peak theoretical memory bandwidth is a speedy 5.3TB/sec.

The AMD Instinct MI300X is also the main processor in Supermicro’s AS-8125GS-TNMR2, an H13 8U 8-GPU system. This system offers a huge 1.5TB of HBM3 memory in a single server, and 6.144TB at rack scale.
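
Those capacity figures check out with simple multiplication; the four-server rack configuration is implied by the rack-scale number above.

  # How the 1.5TB and 6.144TB figures add up:
  gpus_per_server = 8
  hbm3_per_gpu_gb = 192                                      # MI300X capacity
  per_server_tb = gpus_per_server * hbm3_per_gpu_gb / 1000   # 1.536 TB -> "1.5TB"
  print(per_server_tb * 4)                                   # 6.144 TB at rack scale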

Are your customers running AI with fast GPUs, only to have their systems held back by conventional memory? Tell them to check out HBM.
