Sponsored by AMD and Supermicro

Performance Intensive Computing

Capture the full potential of IT

Gaming as a Service gets a platform boost


Gaming as a Service gets a boost from Blacknut’s new platform for content providers that’s powered by Supermicro and Radian Arc.


Getting into Gaming as a Service? Cloud gaming provider Blacknut has released a new platform for content providers that’s powered by Supermicro and Radian Arc.

This comprehensive edge and cloud architecture provides content providers worldwide with bundled and fully managed game licensing, in-depth content metadata and a global hybrid-cloud solution.

If you’re not into gaming yet, you might want to be. Interactive entertainment and game streaming are on the rise.

Last year, an estimated 30 million paying users spent a combined $2.4 billion on cloud gaming services, according to research firm Newzoo. Looking ahead, Newzoo expects this revenue to more than triple by 2025, topping $8 billion. That would make the GaaS market an attractive investment for content providers.

What’s more, studies show that Gen Z consumers (ages 11 to 26) spend over 12 hours a week playing video games. That’s about 30 minutes more per week than they spend watching TV.

Paradigm shift

This data could signal a paradigm shift that challenges the dominance of traditional digital entertainment. That could include subscription video on demand (SVOD) such as Netflix as well as content platforms including ISPs, device manufacturers and media companies.

To help content providers capture younger, more tech-savvy consumers, Blacknut, Supermicro and Radian Arc are lending their focus to deploying a fully integrated GaaS platform. Blacknut, based in France, offers cloud-based gaming. Australia-based Radian Arc provides digital infrastructure and cloud game technology.

The system offers IT hardware solutions at the edge and the core, system management software and extensive IP. Blacknut’s considerable collection includes a catalog of more than 600 games, ranging from AAA titles to indies.

Blacknut is also providing white-glove services that include:

  • Onboarding of game wish lists and help establishing exclusive publisher agreements
  • Support for Bring Your Own Game (BYOG) and freemium game models
  • Assistance with the development of IP-licensed games designed in partnership with specialized studios
  • Marketing support to help providers develop go-to-market plans and manage subscriber engagement

The tech behind GaaS

Providers of cloud-based content know all too well the challenge of providing customers with high-availability, low-latency service. The right technology is a carefully choreographed ballet of hybrid cloud infrastructure, modern edge architecture and the IT expertise required to make it all run smoothly.

At the edge, Blacknut’s GaaS offering operates on Radian Arc’s GPU Edge Infrastructure-as-a-Service platform powered by Supermicro GPU Edge Infrastructure solutions.

These hardware solutions include flexible GPU servers featuring 6 to 8 directly attached GPUs and AMD EPYC processors. Also on board are cloud-optimized, scalable management servers and feature-rich top-of-rack (ToR) networking switches.

Combined with Blacknut’s public and private cloud infrastructure, an impressive array of hardware and software solutions comes together. These can create new ways for content providers to quickly roll out their own cloud-gaming products and capture additional market share.

Going global

The Blacknut GaaS platform is already live in 45 countries and is expanding via distribution partnerships with over-the-top providers and carriers.

The solution can also be pre-embedded in set-top boxes and TV ecosystems. Indeed, it has already found its way onto such marquee devices as Samsung Gaming Hub, LG Gaming Shelf and Amazon Fire TV.

To learn more about the Blacknut GaaS platform powered by Radian Arc and Supermicro, check out the new solution brief.

 


How to help your customers invest in AI infrastructure


The right AI infrastructure can help your customers turn data into actionable information. But building and scaling that infrastructure can be challenging. Find out why—and how you can make it easier. 


Get smarter about helping your customers create an infrastructure for AI systems that turns their data into actionable information.

A new Supermicro white paper, Investing in AI Infrastructure, shows you how.

As the paper points out, creating an AI infrastructure is far from easy.

For one, there’s the risk of underinvesting. Market watcher IDC estimates that AI will soon represent 10% to 15% of the typical organization’s total IT infrastructure. Organizations that fall short here could also fall short on delivering critical information to the business.

Sure, your customers could use cloud-based AI to test and ramp up. But cloud costs can rise fast. As The Wall Street Journal recently reported, some CIOs have even established internal teams to oversee and control their cloud spending. That makes an on-prem AI data center a viable option.

“Every time you run a job on the cloud, you’re paying for it,” says Ashish Nadkarni, general manager of infrastructure systems, platforms and technologies at IDC. “Whereas on-premises, once you buy the infrastructure components, you can run applications multiple times.”

Some of those cloud costs come from data-transfer fees. First, data needs to be moved into a cloud-based AI system, a process known as ingress. And once the AI’s work is done, you’ll want to transfer the new data somewhere else for storage or additional processing, a process known as egress.

Cloud providers typically charge 5 to 20 cents per gigabyte of egress. For casual users, that may be no big deal. But for an enterprise using massive amounts of AI data, it can add up quickly.
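Egress fees scale linearly with data volume, so the back-of-the-envelope math is simple. Here is a minimal Python sketch; the $0.09/GB rate and the data volumes are illustrative assumptions, not quotes from any provider:

```python
# Hypothetical illustration of cloud egress fees at the per-GB rates cited above.
# The rate and volumes below are assumptions for the sketch, not provider pricing.

def egress_cost(gigabytes: float, rate_per_gb: float) -> float:
    """Return the egress fee in dollars for moving `gigabytes` out of the cloud."""
    return gigabytes * rate_per_gb

# A casual user moving 50 GB at $0.09/GB:
casual = egress_cost(50, 0.09)              # ≈ $4.50

# An enterprise AI pipeline moving 200 TB (204,800 GB) a month at the same rate:
enterprise = egress_cost(200 * 1024, 0.09)  # ≈ $18,432 per month
```

At these assumed rates, the gap between a casual user and an enterprise AI pipeline is the gap between pocket change and a five-figure monthly line item.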

4 questions to get started

But before your customer can build an on-prem infrastructure, they’ll need to first determine their AI needs. You can help by gathering all stakeholders and asking 4 big questions:

  • What are the business challenges we’re trying to solve?
  • Which AI capabilities and capacities can deliver the solutions we’ll need?
  • What type of AI training will we need to deliver the right insights from our data?
  • What software will we need?

Keep your customer’s context in mind, too. That might include their industry; after all, a retailer has different needs than a manufacturer. It could also include their current technology: a company with extensive edge computing has different data needs than one without edge devices.

“It’s a matter of finding the right configuration that delivers optimal performance for the workloads,” says Michael McNerney, VP of marketing and network security at Supermicro.

Help often needed

One example of an application-optimized system for AI training is the Supermicro AS-8125GS-TNHR, which is powered by dual AMD EPYC 9004 Series processors. Other options are the Supermicro Universal GPU systems, which support AMD’s Instinct MI250 accelerators.

These systems’ modularized architecture helps standardize AI infrastructure design for scalability and power efficiency, even given the complex workload and workflow requirements enterprises have, such as AI, data analytics, visualization, simulation and digital twins.

Accelerators work with traditional CPUs to enable greater computing power, yet without slowing the system. They can also shave milliseconds off AI computations. While that may not sound like much, over time those milliseconds “add up to seconds, minutes, hours and days,” says Matt Kimball, a senior analyst at Moor Insights & Strategy.
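Kimball’s point about milliseconds compounding is easy to check with rough arithmetic. In this sketch, the 5 ms per-operation saving and the billion-operation workload are assumed figures for illustration only:

```python
# Back-of-the-envelope sketch: per-operation millisecond savings compound.
# The 5 ms saving and the 1-billion-operation workload are assumptions.

ms_saved_per_op = 5                  # milliseconds shaved off each AI computation
ops = 1_000_000_000                  # operations in a large AI workload

total_seconds = ms_saved_per_op * ops / 1000   # 5,000,000 seconds
total_days = total_seconds / 86_400            # 86,400 seconds per day

print(f"Saved {total_seconds:,.0f} s, about {total_days:.1f} days")
```

Under those assumptions, a 5 ms saving per operation adds up to nearly two months of compute time, which is the scale Kimball is pointing at.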

Roll with partner power

To scale AI across an enterprise, you and your customers will likely need partners. Scaling workloads for critical tasks isn’t easy.

For one, there’s the challenge of getting the right memory, storage and networking capabilities to meet the new high-performance demands. For another, there’s the challenge of finding enough physical space, then providing the necessary electric power and cooling.

Tech suppliers including Supermicro are standing by to offer you agile, customizable and scalable AI architectures.

Learn more from the new Supermicro white paper: Investing in AI Infrastructure.

 


What is the AMD Instinct MI300A APU?


Accelerate HPC and AI workloads with the combined power of CPU and GPU compute. 


The AMD Instinct MI300A APU, set to ship in this year’s second half, combines the compute power of a CPU with the capabilities of a GPU. Your data-center customers should be interested if they run high-performance computing (HPC) or AI workloads.

More specifically, the AMD Instinct MI300A is an integrated data-center accelerator that combines AMD Zen 4 cores, AMD CDNA3 GPUs and high-bandwidth memory (HBM) chiplets. In all, it has more than 146 billion transistors.

This AMD component uses 3D die stacking to enable extremely high bandwidth among its parts. In fact, nine 5nm chiplets are 3D-stacked on top of four 6nm chiplets, with significant HBM surrounding them.

And it’s coming soon. The AMD Instinct MI300A is currently in AMD’s labs and will soon be sampling to customers. AMD says it’s scheduled for shipments in the second half of this year.

‘Most complex chip’

The AMD Instinct MI300A was publicly displayed for the first time earlier this year, when AMD CEO Lisa Su held up a sample of the component during her CES 2023 keynote. “This is actually the most complex chip we’ve ever built,” Su told the audience.

A few tech blogs have gotten their hands on early samples. One of them, Tom’s Hardware, was impressed by the “incredible data throughput” among the Instinct MI300A’s CPU, GPU and memory dies.

The Tom’s Hardware reviewer added that this design will let the CPU and GPU work on the same data in memory simultaneously, saving power, boosting performance and simplifying programming.

Another blogger, Karl Freund, a former AMD engineer who now works as a market researcher, wrote in a recent Forbes blog post that the Instinct MI300 is a “monster device” (in a good way). He also congratulated AMD for “leading the entire industry in embracing chiplet-based architectures.”

Previous generation

The new AMD accelerator builds on a previous generation, the AMD Instinct MI200 Series. It’s now used in a variety of systems, including Supermicro’s A+ Server 4124GQ-TNMI. This completely assembled system supports the AMD Instinct MI250 OAM (OCP Acceleration Module) accelerator and AMD Infinity Fabric technology.

The AMD Instinct MI200 accelerators are designed with the company’s 2nd gen AMD CDNA Architecture, which encompasses the AMD Infinity Architecture and Infinity Fabric. Together, they offer an advanced platform for tightly connected GPU systems, empowering workloads to share data fast and efficiently.

The MI200 series offers P2P connectivity with up to 8 intelligent 3rd Gen AMD Infinity Fabric links and up to 800 GB/sec of peak total theoretical I/O bandwidth. That’s 2.4x the GPU P2P theoretical bandwidth of the previous generation.

Supercomputing power

The same kind of performance now available to commercial users of the AMD-Supermicro system is also being applied to scientific supercomputers.

The AMD Instinct MI250X accelerator is now used in the Frontier supercomputer built by the U.S. Dept. of Energy. That system’s peak performance is rated at 1.6 exaflops, or more than a billion billion floating-point operations per second.
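The “billion billion” phrasing is simple to verify: an exaflop is 10^18 floating-point operations per second, and a billion times a billion is also 10^18. A quick arithmetic check (variable names are illustrative):

```python
# Sanity check of the "billion billion" phrasing above.
# One exaflop = 10**18 floating-point operations per second (FLOPS).

EXA = 10**18
frontier_peak_flops = 1.6 * EXA               # Frontier's rated peak: 1.6 exaflops

billion_billion = 10**9 * 10**9               # a billion times a billion = 10**18
assert frontier_peak_flops > billion_billion  # "over a billion billion" checks out

print(f"Frontier peak: {frontier_peak_flops:.1e} FLOPS")
```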

The AMD Instinct MI250X accelerator provides Frontier with flexible, high-performance compute engines, high-bandwidth memory, and scalable fabric and communications technologies.

Looking ahead, the AMD Instinct MI300A APU will be used in Frontier’s successor, known as El Capitan. Scheduled for installation late this year, this supercomputer is expected to deliver at least 2 exaflops of peak performance.

 
