
Performance Intensive Computing

Capture the full potential of IT

Meet Supermicro’s latest MicroBlade, powered by AMD EPYC 4005 processors

This Supermicro/AMD platform offers a flexible, density-optimized blade architecture for a 6U system. Pack it to the max, and you’ll get 320 server nodes in a standard 48U rack.

Supermicro has introduced a MicroBlade platform powered by the latest AMD EPYC 4005 series processors. Its intended workloads include cloud, virtualization, data services, edge computing and Software as a Service (SaaS).

These MicroBlades are offered in a new 6U system that supports up to 20 blades in a single enclosure. Each blade holds two CPU nodes, so a fully loaded system with 20 blades has 40 nodes. Fill a standard 48U rack with eight of these 6U systems, and you get a total of 160 blades with 320 server nodes.
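The rack math above is simple multiplication; here is a quick sketch using the figures stated in the article:

```python
# Rack-density arithmetic from the article: 20 dual-node blades per 6U
# enclosure, and eight 6U enclosures in a standard 48U rack.
blades_per_enclosure = 20
nodes_per_blade = 2
enclosures_per_rack = 48 // 6                                  # -> 8

nodes_per_enclosure = blades_per_enclosure * nodes_per_blade   # -> 40
blades_per_rack = enclosures_per_rack * blades_per_enclosure   # -> 160
nodes_per_rack = enclosures_per_rack * nodes_per_enclosure     # -> 320
```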

The Supermicro system also lets customers mix newer processors with older ones. That way, they can expand and upgrade only when their compute requirements change, protecting earlier investments while scaling seamlessly.

Blade Power

Supermicro MicroBlades offer a powerful and flexible extreme-density 3U and 6U all-in-one blade architecture. Compared with standard 1U rackmount servers, they provide up to 86% more power efficiency and up to 56% improved density, Supermicro says.

The new Supermicro MicroBlade (model number MBA-315R-1DE12) is a dual-node device that measures roughly 23 x 5 x 1 inches and weighs just over 3 pounds. In addition to the two AMD processors, the MicroBlade packs in dual 25GbE LAN ports with 100G uplinks for high-speed networking. There are also two slots for up to 128GB of DDR5 memory.

The blade supports the Intelligent Platform Management Interface (IPMI) v2.0 via a Chassis Management Module (CMM). The CMM offers remote control of individual server blades, power supplies, cooling fans and networking switches. This lets sys admins cap maximum power consumption, manage power allocation, reboot and reset servers, and obtain BIOS configuration data—all remotely, via a processor that operates independently of the managed systems.
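To illustrate the kind of out-of-band control IPMI enables, here is a hedged sketch that assembles standard `ipmitool` invocations. The host address and credentials are placeholders, and the exact options vary by deployment:

```python
# Sketch only: build ipmitool command lines for remote, out-of-band
# management of a blade's BMC. Host, user and password are placeholders.

def ipmi_cmd(host, user, password, *action):
    """Assemble an ipmitool invocation over the lanplus interface."""
    return ["ipmitool", "-I", "lanplus",
            "-H", host, "-U", user, "-P", password, *action]

# The kinds of operations described above:
power_status = ipmi_cmd("10.0.0.42", "admin", "secret", "power", "status")
power_reset = ipmi_cmd("10.0.0.42", "admin", "secret", "power", "reset")
# Run with, e.g., subprocess.run(power_status) on a host that has ipmitool.
```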

AMD EPYC 4005 series processors are designed for entry-level systems used by small businesses and hosted IT services providers. While affordable, they offer high performance, advanced technologies and energy efficiency.

Do you have small-business or IT service provider customers looking for a flexible yet powerful server? Tell them about the new Supermicro MicroBlade platform powered by AMD EPYC 4005 series processors.

Tech Explainer: What’s a Neocloud?

This cloud variant has arisen to meet the needs of AI developers. Find out how it differs from hyperscalers—and why your customers might want to jump on board.

A new kind of technology demands a new kind of cloud.

Sure, it’s easy to take cloud computing for granted. After all, it’s been years since “the cloud” became part of our lives and everyday vernacular.

Over the years, clouds ranging from the simple (think Dropbox) to the fabulously complex (think multicloud ecosystems) have been powerful enough to handle whatever we’ve thrown their way.

But now our widespread adoption of AI demands a new kind of cloud.

To the rescue: Behold the neocloud!

Neoclouds offer AI workload-specific functionality as a service. To save enterprises and SMBs considerable time and money, they provide platforms designed for the rapid development and launch of the latest AI creations.

A neocloud isn’t your typical “run anything” platform. Instead, it’s optimized to run a narrow selection of highly specialized AI-centric tasks. These include AI/ML inference and training, data analytics and media rendering.

Neoclouds vs. Traditional Clouds

To better understand how neoclouds fit into the grand scheme of modern cloud architecture, it helps to compare and contrast them with their forebear, the hyperscaler.

Hyperscalers such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud also offer cloud-based services; they simply offer a much larger and less AI-specific selection.

The seemingly endless array of services these hyperscalers offer makes them ideal for developers who prize flexibility and versatility. Hyperscalers let developers combine multiple managed services to simultaneously harness the power of distributed databases, machine-learning pipelines and other components of a highly customized platform.

By contrast, neoclouds are tuned for specific workloads. They offer a narrower focus and a so-called “opinionated architecture” that makes many architectural decisions for the developer. That level of specificity and autonomy changes the nature of the development process from DIY to plug-and-play.


More-Specific Hardware, Too

To fully compare neocloud apples with hyperscaler oranges, you also need to look under the hood. The tech behind the latest cloud type makes a huge difference.

For both hyperscalers and neoclouds, we’re talking about some of the most advanced tech ever. But here again, it’s the neocloud’s laser-like focus on AI that makes it an invaluable development tool.

That’s why popping the top off an AI server like Supermicro’s 8U server (model AS -8126GS-TNMR) will treat you to a view of truly cutting-edge CPUs, GPUs and networking gear. That gear includes a pair of server-focused AMD EPYC 9005 series processors with as many as 384 cores between them and up to 6TB of DDR5 memory.

For brute-force AI processing, the Supermicro A+ server also offers room for eight onboard AMD Instinct MI350X GPUs banded together via AMD Infinity Fabric Link.

Supermicro’s behemoth is also equipped with AMD ROCm. Pronounced “rock-em,” it’s a software stack that translates the code written by programmers into sets of instructions that AMD GPUs can understand and execute.

The Neocloud Sales Pitch, Condensed

The what and how of neoclouds are important. But if your customers are considering investing in neocloud, they’ll surely want to know about the why, as well.

So why would you want to engage a neocloud for AI development? There are four main reasons:

1. Neoclouds cut admin work, letting you concentrate instead on production.

A new eBook from Supermicro and AMD, The Smartest Path to Scalable AI, cites neoclouds for their “frictionless dev-to-prod motion.”

That’s tech business-speak for a system that handles the nitty-gritty details, getting out of your way so you can get to work. That includes one-command access to optimized hardware and preconfigured environments.

Bottom line: Less admin, more development, and faster time-to-market.

2. A neocloud delivers instant gratification, not endless development integration.

“Day 0 readiness” is the catchphrase that sums up this one. And not just for any single aspect of the neocloud platform, but for the whole stack. That includes hardware, software, and the managed offerings wrapped around them, collectively referred to as services.

Bottom line: Large models and agents start running efficiently from the get-go.

3. A neocloud is always up-to-date with the latest, greatest silicon.

The last thing you want to contend with is outdated infrastructure. That may fly when you’re building a last-decade file-storage app. But creating tomorrow’s brilliant new AI requires cutting-edge tech. The problem is, that tech gets expensive. The solution? Rent, don’t buy.

Bottom line: Access to all the cool toys, with no down payment.

4. It’s already got wheels; you don’t have to reinvent them.

Neoclouds come well stocked with what are known as specialized microservices. These are pre-built, workload-specific building blocks that developers can stand on to bypass the mundanities of production and get to the good stuff.

Examples of wheels you won’t have to reinvent include distributed training orchestration, streaming ingestion services, and GPU render farms.

Bottom line: Neoclouds do the boring due diligence, and let developers get all the glory.

The Future’s Future

Neoclouds are already the future. They’re coming online now, and revealing themselves to be the greatest thing for developers since sliced bread.

But tech moves fast these days. There’s always someone thinking about the next step.

When it comes to the next step for neoclouds, that’s likely to involve deeper specialization, more compelling economics, and consolidation.

That makes sense in terms of the big picture. As both enterprises and SMBs adopt neoclouds, they’ll create more demand. That demand, in turn, should help fund expansion.

Eventually, we may see a new level of specificity. For example, one neocloud could offer low-latency SaaS production inferencing, while another may focus on analytics that cater to medical research.

What happens after that is hard to predict. But one easy-to-believe theory foretells a time in which neoclouds plug into hyperscalers. With that kind of power, imagine what tomorrow’s developers will be able to do!


i3D Supercharges Game Hosting with AMD EPYC Processors

Discover how game host i3D boosted per-core performance, improved the user experience and lowered its TCO…all by moving to Supermicro MicroCloud servers with AMD EPYC 4004 series processors.

One thing hosts of fast-moving multiplayer games don’t want is jitter.

Jitter is the game industry’s term for an inconsistent user experience. It occurs when the tick rate—the frequency with which the game state is updated for players—differs from one player to another.

Keeping this tick rate consistent across gamers isn’t easy. Some games get updated hundreds of times a second.

That explains why global game hosting provider i3D.net recently refreshed its infrastructure stack. The company chose Supermicro MicroCloud servers powered by AMD EPYC 4004 series processors.

Single-Core Rules

To understand i3D’s choice, it helps to understand how the demands of game hosting differ from those of conventional cloud hosting.

Cloud hyperscalers generally try to pack as many compute cores as possible into the smallest possible space. That’s because they want to support many virtual machines on a single node. To get this result, they buy large servers with lots of cores and plenty of memory.

By contrast, for game hosting providers, it’s single-core performance that rules. These companies want to give each user the best possible experience, so core count per CPU is relatively unimportant. And to control costs, gaming providers typically scale out with lots of smaller machines rather than a single big one.

All that’s important for i3D. The company, founded in the Netherlands in 2002, initially rented consumer game servers. In 2018, i3D was acquired by Paris-based Ubisoft, and today it offers not only game online services, but also cloud and compute resources, connectivity services, and colocation via its private data center.

Big Games, Big Systems

i3D planned its rollout in large part to support a new game, “Dune: Awakening,” a massively multiplayer online game.

To provide the needed scale, i3D acquired Supermicro MicroCloud servers powered by AMD EPYC 4464P processors. This CPU, part of AMD’s 4th generation EPYC 4004 series, packs 12 cores, 24 threads and 64MB of cache. Yet its power consumption is just 65 watts, a level that fits most data centers.

Now that the rollout is complete, i3D has found that single-core performance on bare metal is 52% higher with the new setup than with its previous solution.

As Paul Louvet, i3D’s senior product manager of bare metal, puts it: “AMD has the best performance out there for a very attractive cost.”

Double the Nodes

These AMD processors power i3D’s choice of Supermicro 3U MicroCloud servers with eight nodes (Supermicro model AS -3015MR-H8TNR). Each node has a single AMD CPU. This means i3D can fit 96 nodes in one rack, more than double what it could do before.

The Supermicro chassis also includes dual power supplies, bolstering reliability.

Though the upgrade involved a transition from older servers based on a competitor’s processors, i3D says the shift to AMD was seamless. i3D now has about 1,800 nodes powered by AMD EPYC processors, and it will add even more soon.

Looking ahead, i3D plans to upgrade the Supermicro servers to AMD EPYC 4545P processors. This CPU is a member of AMD’s 5th generation EPYC 4005 series, which offers ‘Zen 5’ data-center processors designed for small businesses and hosted services.

Importantly, these processors offer 16 cores, up from the prior generation’s 12. That will allow i3D to employ four additional cores for the same power usage.

“That’s incredible,” says Louvet of i3D. “This CPU will allow us to have much better TCO per core.”


2025: Look Back at the Year’s Top Advances

Catch up on 2025’s highlights: ROCm 7.0, liquid-cooled AI servers, server processors for SMBs, and a MicroBlade server that’s highly efficient.

2025 was a year to remember. But in case you’ve forgotten, here are some of the year’s top advances.

ROCm for the AI Era

This past fall, AMD introduced version 7.0 of its ROCm software stack. This latest edition features capabilities designed especially for AI.

ROCm, part of AMD’s portfolio since 2016, translates code written by human programmers into instruction sets that AMD GPUs and CPUs can understand and execute.

Now AMD has purpose-built ROCm 7.0 for GenAI, large-scale AI training, and AI inferencing. Essentially, ROCm now offers the tools and runtime to make the most complex GPU workloads run efficiently.

The full ROCm 7.0 stack contains multiple components. These include drivers, a Heterogeneous Interface for Portability (HIP), math and AI libraries, compilers and system-management tools.

Liquid-Cooled AI Servers

Supermicro introduced two rackmount AI servers in June, both of them powered by AMD Instinct MI350 Series GPUs and dual AMD EPYC 9005 CPUs.

One of the two new servers, Supermicro model number AS -4126GS-NMR-LCC, is a 4U liquid-cooled system. This server can handle up to eight GPUs, the user’s choice of AMD’s Instinct MI325X or MI355X.

The other server, Supermicro model number AS -8126GS-TNMR, is a larger, air-cooled 8U server. It also offers a choice of AMD GPUs, either the AMD Instinct MI325X or AMD Instinct MI350X.

Both servers feature PCIe 5.0 connectivity; memory capacities of up to 2.3TB; support for AMD’s ROCm open-source software; and support for AMD Infinity Fabric Link connections for GPUs.

In June, Supermicro CEO Charles Liang said the new servers “strengthen and expand our industry-leading AI solutions—and give customers greater choice and better performance as they design and build the next generation of data centers.”

EPYCs for SMBs

In May, AMD introduced a CPU series designed specifically for small and medium businesses.

The processors, known as the AMD EPYC 4005 Series, bring a full suite of enterprise-level features and performance. But they’re designed for on-prem SMBs and cloud service providers who need cost-effective solutions in a 3U form factor.

“We’re delivering the right balance of performance, simplicity, and affordability,” says Derek Dicker, AMD’s corporate VP of enterprise and HPC. 

That balance includes the same AMD ‘Zen 5’ core architecture behind the AMD EPYC 9005 Series processors used in data centers run by large enterprises.

The AMD EPYC 4005 Series CPUs for SMBs come in a single-socket package. Depending on the model, they offer anywhere from 6 to 16 cores and boost clock speeds of up to 5.7 GHz.

One model of the AMD EPYC 4005 line also includes integrated AMD 3D V-Cache technology for a larger 128MB L3 cache and lower latency.

MicroBlades for CSPs

The AMD EPYC 4005 Series processors made a star appearance in November, when Supermicro introduced a 6U, 20-node MicroBlade server (model number MBA-315R-1G) powered by the new CPUs.

These servers are intended for small and midsize cloud service providers.

Each blade is powered by a single AMD EPYC 4005 CPU. When 20 blades are combined in the system’s 6U form factor, the system offers 3.3x higher density than a traditional 1U server. It also reduces cabling by up to 95%, saves up to 70% space, and lowers energy costs by up to 30%.

This MicroBlade system with an AMD EPYC 4005 processor is also available as a motherboard (model number BH4SRG) for use in Supermicro A+ servers.

~~~~~~~~~

Happy holidays from all of us at Performance Intensive Computing, and best wishes for the new year! We look forward to serving you in 2026.

~~~~~~~~~~


Research Roundup: Server Sales Rise, AI Helps Customer Service, Social Media is for Adults, LLMs Know What You Need

Catch up on the latest research from leading technology analysts and market watchers.

Servers set a new sales record. AI is augmenting customer service workers, rather than replacing them. Most adults use social media. And fine-tuned LLMs can identify customer needs better than a human professional.

That’s some of the latest from leading tech analysts and market researchers. And here’s your research roundup.

Servers Set Record Sales

Server sales set a new record in this year’s third quarter. Global sales of these systems rose 61% year-on-year, reaching an all-time quarterly high of $112.4 billion, according to market watcher IDC.

Of that total, $76.3 billion came from x86 servers, representing nearly 70% of the total. Compared with the year-earlier quarter, that marked an increase of about 33%, IDC says.

A much bigger jump came from sales of non-x86 servers. Those sales rose 197% year-on-year, reaching a worldwide total in Q3 of $36.2 billion.

Another fast-growing sector is servers with embedded GPUs. In Q3, sales of these AI-ready servers rose nearly 50% year-on-year, IDC says. Systems with GPUs now represent more than half of all server market revenue.

Growth of server sales in the quarter varied by region. The biggest sales rise was in the United States, where Q3 server sales rose nearly 80% year-on-year, IDC says. Other fast growers included Canada (with server sales up 70%), China (38%) and Japan (28%).

AI Helps Customer Service

Artificial intelligence is mainly augmenting, rather than replacing, customer-service workers, finds a new survey.

The survey, conducted by technology analyst firm Gartner, finds that only one in five customer-service leaders (20%) have reduced staffing due to AI. Even better, more than half the respondents (55%) said their staffing levels have remained stable, even as AI has enabled them to handle higher customer-service volume.

AI can even lead to the creation of new jobs. Four in 10 respondents to the Gartner survey (42%) said their organizations are hiring specialized roles to support AI deployment and management. These new roles include AI strategists, conversational AI designers and automation analysts.

The survey, conducted by Gartner in October, collected responses from 321 customer service and support leaders.

Social Media for Adults? Yes!

If anyone tells you social media is strictly for kids, set them straight. A poll conducted by Pew Research finds the vast majority of U.S. adults use social media.

Specifically, YouTube is used by over eight in 10 U.S. adults (84%), the survey finds. And Facebook is used by seven in 10 U.S. adults (71%).

Another social media platform popular with grownups is Instagram. The Pew survey finds it’s used by fully half of U.S. adults.

Plenty of other social media sites are used by U.S. adults, too, if in smaller numbers. They include TikTok (used by 37%), WhatsApp (32%), Reddit (26%), Snapchat (25%) and X (21%).

The survey was conducted by Pew earlier this year, and it drew responses from 5,022 U.S. adults.

The LLM Knows What You Want

Large language models can identify customer needs better than an expert, finds a recent research paper from MIT.

To conduct their experiment, the paper’s three co-authors—John Hauser of MIT Sloan, Artem Timoshenko of Northwestern’s Kellogg School, and MIT pre-doc Chengfeng Mao—fine-tuned an LLM using studies supplied by a market research firm.

They then compared the output of their fine-tuned LLM with that of human analysts and untrained LLMs. The test asked consumers about their preferences for wood stains. In all, consumers were asked about eight primary customer needs and 30 secondary needs.

The results: The fine-tuned LLM identified 100% of the customers’ primary and secondary needs. By comparison, the human analysts missed a few, identifying 87% of the primary needs and 80% of the secondary needs.

That said, actually understanding the needs of wood-stain customers remains a job for humans, says Hauser, a professor of marketing at MIT Sloan.

“If you have to pull customer needs out of a story, the supervised fine-tuned LLM can do it,” he says. “But if you ask an LLM what customers care about when staining a deck, its answers are superficial.”

Want to learn more? Read the full paper.

 


Supermicro adds MicroBlade for CSPs powered by AMD EPYC 4005 series processors

To serve cloud service providers, Supermicro adds a 6U, 20-node MicroBlade server powered by AMD EPYC 4005 series processors.

Not every cloud service provider is as big or deep-pocketed as the big three—AWS, Google and Microsoft. To serve those smaller and midsize CSPs, Supermicro recently added to its MicroBlade family a 6U, 20-node server powered by AMD EPYC 4005 series processors.

Smaller CSPs represent a big market. To be sure, AWS, Google and Microsoft collectively drew nearly 65% of total worldwide cloud infrastructure services revenue in this year’s third quarter, according to Synergy Research Group.

But for both smaller CSPs and their suppliers, that remaining 35% was still quite valuable. Synergy estimates worldwide cloud infrastructure services revenue in Q3 totaled $107 billion. That means the share left to smaller and midsize CSPs was about $37 billion.

MicroBlade, Macro Benefits

To serve these smaller CSPs, Supermicro recently introduced a 6U, 20-node MicroBlade (model number MBA-315R-1G), with each node powered by a single AMD EPYC 4005 series processor.

This MicroBlade system delivers a cost-effective, green computing solution. It’s intended for workloads that include not only cloud computing, but also web hosting, dedicated hosting, virtual desktop infrastructure (VDI), AI inferencing, and enterprise workloads.

Supermicro CEO Charles Liang calls the new servers “a very cost-effective, green computing solution for cloud service providers.”

Key benefits of the new Supermicro system include up to 95% cable reduction with two integrated Ethernet switches per server; 70% space savings; and 30% energy savings over traditional 1U servers.

The system offers 3.3x higher density than a traditional 1U server. As a result, users can pack as many as 160 servers with 2,560 CPU cores, as well as 16 Ethernet switches, in a single 48U rack.

Under the hood, each MicroBlade server blade supports a single AMD EPYC 4005 CPU with up to 16 cores and 192GB of DDR5 memory. Also supported is a dual-slot, full-height/full-length (FHFL) GPU.

Also, this Supermicro system contains a dual-port 10GbE network switch. It’s designed to simplify topologies and enable more server instances per rack.

The 6U MicroBlade chassis can hold up to 20 individual server blades, two Ethernet switches and two management modules.

To protect workloads such as dedicated hosting, VDI, online gaming and AI inferencing, the Supermicro system also offers N+N redundancy. This setup configures two sets of independent components to provide high levels of reliability.

The MicroBlade system will also be available as a motherboard (model number BH4SRG) for Supermicro A+ servers.


Inside the AMD EPYC 4005 Series

The processors powering the new Supermicro server, AMD’s EPYC 4005 series, offer powerful performance for AI, cloud and hosting workloads. Yet they’re attractively priced for smaller businesses and hosting services.

The processors are based on the same core generation, ‘Zen 5,’ as are AMD’s more powerful data center processors, the AMD EPYC 9005 series. Yet the 4005 series processors have been designed for smaller operations, offering a combination of affordability, efficiency and ease of use.

AMD’s corporate VP for enterprise and HPC, Derek Dicker, says the AMD EPYC 4005 series processors “give our technology partners the flexibility to create powerful yet affordable systems that meet the specific needs of growing businesses and dedicated hosters.”

Do you have CSP clients looking for affordable yet powerful servers? Tell them about these new AMD-powered Supermicro servers, coming soon.


Research Roundup: Cloud infrastructure spending, AI PoCs, preemptive security, AI worries

Get the latest insights from leading IT researchers, industry analysts and market watchers.

Global spending on cloud infrastructure services rose in the latest quarter by over 20%. Only about one in four AI tests in the Asia/Pacific region are moving on to full production. Cybersecurity is about to become preemptive. And the rise of AI has many U.S. adults concerned.

That’s the latest from leading IT researchers, market watchers and pollsters. And here’s your research roundup.

Cloud Infrastructure Spending: Up, Up and Away

Global spending on cloud infrastructure services rose 22% year-on-year in this year’s second quarter (April, May and June), reaching a total of $95.3 billion, according to market watcher Canalys. This marks the sector’s fourth consecutive quarter of year-on-year growth topping 20%.

All that demand was driven by three main forces, Canalys says: AI consumption, revived legacy migrations, and cloud-native scale-ups.

Also during Q2, the Big Three cloud providers—Amazon Web Services, Google Cloud and Microsoft Azure—held their collective 65% share of the market. What’s more, customer spending with the Big Three increased in the quarter by 27% year-on-year, Canalys says.

Customer demand for AI is shifting, too. “An increasing number of enterprises are seeking the capability to switch between different AI models based on specific business requirements,” says Canalys senior analyst Yi Zhang. Their goal: an optimal balance of performance, cost and application fit.

AI PoC to Production? Not Many Yet

Organizations in the Asia/Pacific region are experimenting with AI, but fewer than one in four of their AI applications (23%) have moved from proof-of-concept (PoC) to production, finds industry analyst IDC.

One result, says IDC researcher Abhishek Kumar: “Many Asian businesses are reassessing how to launch and scale AI.”

Part of this reassessment involves a shift to new AI approaches based on end-to-end platforms. However, moving to these approaches won’t be easy, Kumar says. Organizations need to understand not only each vendor’s approach, but also how the proposed systems align with their own organization’s requirements.

IDC recommends that organizations start thinking of their AI suppliers as partners, not just providers. Though we’ve heard that before, this time it’s different: AI is likely to dramatically reshape entire workflows.

Cybersecurity’s Future: Preemptive

Detection and response are currently the main cybersecurity techniques, but that’s about to change, predicts Gartner. The research firm believes that by 2030, over half of all cybersecurity spending worldwide will instead go to technologies that are preemptive.

Preemptive cybersecurity will soon be “the new gold standard,” asserts Gartner VP Carl Manion.

Why the shift? Because detection/response-based cybersecurity will no longer be enough to keep assets safe from AI-enabled attackers, Manion says.

As part of this shift, organizations will move away from one-size-fits-all security solutions, instead adopting approaches that are more targeted. These could include security systems for specific verticals, such as healthcare and finance; specific application types, such as industrial control systems; and specific threat actor methods, such as supply-chain attacks.

Preemptive cybersec could also include what are known as autonomous cyber immune systems (ACIS). Like a biological immune system, an ACIS will be able to both detect attacks and fight them off.

Resistance to this shift will be futile, Manion says. Organizations that stick with older detection and response security systems will be exposing their products, services and customers to what he calls “a new, rapidly escalating level of danger.”

AI has U.S. Adults Fretting

The rise of artificial intelligence has U.S. adults concerned, finds a new poll by Pew Research. A majority of respondents say they believe the rising use of AI will worsen people’s ability to think creatively, form meaningful relationships, make difficult decisions and solve problems.

The poll, conducted by Pew in June, reached over 5,000 adults who live in the United States. Pew released the poll results earlier this month.

Overall, more than half the survey respondents (57%) rated the societal risks of AI as high. Only one in four (25%) said the benefits of AI are high.

Other findings include:

  • Creative thinking: In the poll, more than half the respondents (53%) said increased use of AI will worsen people’s ability to think creatively. Only 16% thought increased use of AI would improve this ability. Another 16% said it would be neither better nor worse, and a final 16% weren’t sure.
  • Relationships: Exactly half the respondents (50%) believe increased use of AI will worsen people’s ability to form meaningful relationships. Only 5% believe wider AI use would improve this ability. A quarter (25%) thought there would be no change, while one in five (20%) weren’t sure.
  • Decisions: Four in 10 respondents (40%) believe increased use of AI will worsen our ability to make difficult decisions. Fewer than one in five (19%) expect AI to improve this ability. About the same number (20%) foresee no change, and the same percentage said they weren’t sure.
  • Problem-solving: This was a closer contest. Over a third of respondents (38%) said wider use of AI will worsen our ability to solve problems, while more than a quarter (29%) said it would improve this ability. Fifteen percent expect no change, and 17% weren’t sure.
  • Deepfakes: Over three-quarters of respondents (76%) said it’s important to be able to detect whether a picture, video or text was created by AI. But over half of all (53%) also said they’re not confident they can make these detections.

These concerns aside, the AI market still has plenty of room for growth. A recent forecast from Grand View Research has global AI sales rising from about $280 billion last year to nearly $3.5 trillion in 2033. That would represent an impressive nine-year compound annual growth rate (CAGR) of just over 30%.
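That headline rate can be sanity-checked with the standard CAGR formula, (end / start)^(1/years) − 1. A quick sketch, using the rounded dollar figures above and assuming a 2024 baseline (which makes 2024-to-2033 a nine-year span):

```python
# Compound annual growth rate: (end / start) ** (1 / years) - 1
start = 280e9   # ~$280 billion baseline (rounded figure from the forecast)
end = 3.5e12    # ~$3.5 trillion in 2033
years = 9       # 2024 -> 2033, assuming a 2024 baseline

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # -> 32.4%
```

With those rounded inputs the formula yields roughly 32%, consistent with the “just over 30%” figure.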

 

Featured videos


Events


Find AMD & Supermicro Elsewhere

Vultr, Supermicro, AMD team to offer high-performance cloud compute & AI infrastructure

Featured content

Vultr, Supermicro, AMD team to offer high-performance cloud compute & AI infrastructure

Vultr, a global provider of cloud services, now offers Supermicro servers powered by AMD Instinct GPUs.


Supermicro servers powered by the latest AMD Instinct GPUs and supported by the AMD ROCm open software ecosystem are at the heart of a global cloud infrastructure program offered by Vultr.

Vultr calls itself a modern hyperscaler, meaning it provides cloud solutions for organizations facing complex AI and HPC workloads, high operational costs, vendor lock-in, and the need for rapid insights.

Launched in 2014, Vultr today offers services from 32 data centers worldwide, which it says can reach 90% of the world’s population in under 40 milliseconds. Vultr’s services include cloud instances, dedicated servers, cloud GPUs, and managed services for database, cloud storage and networking.

Vultr’s customers enjoy benefits that include costs 30% to 50% lower than those of the hyperscalers and 20% to 30% lower than those of other independent cloud providers. These customers—there are over 220,000 of them worldwide—also enjoy Vultr’s full native AI stack of compute, storage and networking.

Vultr is the flagship product of The Constant Co., based in West Palm Beach, Fla. The company was founded by David Aninowsky, an entrepreneur who also started GameServers.com and served as its CEO for 18 years.

Now Vultr counts among its partners AMD, which joined the Vultr Cloud Alliance, a partner program, just a year ago. In addition, AMD’s venture group co-led a funding round this past December that brought Vultr $333 million.

Expanded Data Center

Vultr is expanding its relationship with Supermicro, in part because Supermicro is first to market with the latest AMD Instinct GPUs. Vultr now offers Supermicro systems powered by AMD Instinct MI355X, MI325X and MI300X GPUs. And as part of the partnership, Supermicro engineers work on-site with Vultr technicians.

Vultr is also relying on Supermicro for scaling. That’s a challenge for large AI implementations, as these configurations require deep expertise for both integration and operations.

Among Vultr’s offerings from Supermicro is a 4U liquid-cooled server (model AS-4126GS-NMR-LCC) with dual AMD EPYC 9005/9004 processors and up to eight AMD GPUs—the user’s choice of either MI325X or MI355X.

Another benefit of the new arrangement is access to AMD’s ROCm open source software environment, which will be made available within Vultr’s composable cloud infrastructure. This AMD-Vultr combo gives users access to thousands of open source, pre-trained AI models and frameworks.

Rockin’ with ROCm

AMD’s latest update to the software is ROCm 7, introduced in July and now live and ready to use. Version 7 offers advancements that include big performance gains, advanced features for scaling AI, and enterprise-ready AI tools.

One big benefit of AMD ROCm is that its open software ecosystem eliminates vendor lock-in. And when integrated with Vultr, ROCm supports AI frameworks that include PyTorch and TensorFlow, enabling flexible, rapid innovation. Further, ROCm future-proofs AI solutions by ensuring compatibility across hardware, promoting adaptability and scalability.

AMD’s roadmap is another attraction for Vultr. AMD products on tap for 2026 include the Instinct MI400 family (codename Helios), new EPYC CPUs (codename Venice) and an 800-Gbit NIC (codename Vulcano).

The relationship cuts both ways: Vultr is big business for AMD. Late last year, a tech blog reported that Vultr’s first shipment of AMD Instinct MI300X GPUs numbered “in the thousands.”


Looking for business benefits from GenAI? Supermicro, AMD & PioVation have your solution

Featured content

Looking for business benefits from GenAI? Supermicro, AMD & PioVation have your solution

Struggling to deliver business benefits from Generative AI? Supermicro, AMD and PioVation have a new solution that not only works out-of-the-box, but is also highly scalable.


Experimenting with Generative AI can be fun, but CEOs and corporate boards aren’t interested in fun. They want to see real business results—things like an enhanced customer experience, more innovative products, streamlined operations and lower TCO. And they want to see them now.

Getting GenAI to deliver these kinds of business results isn’t easy. A recent report from MIT finds that despite nearly $40 billion of enterprise investment in GenAI, 95% of organizations are getting “zero return.”

That estimate is based on solid numbers. The MIT researchers reviewed over 300 AI projects, interviewed more than 50 organizations, and surveyed some 150 senior leaders.

The latest forecasts aren’t much cheerier. Research firm Gartner this summer predicted that by the end of this year, nearly a third of all GenAI projects (30%) will be abandoned after the proof-of-concept stage. Gartner says the projects will be cut due to poor data quality, inadequate risk controls, escalating costs and unclear business value.

“After last year’s hype, executives are impatient to see returns on GenAI investments,” says Gartner analyst Rita Sallam. “Yet organizations are struggling to prove and realize value.”

That’s About to Change

Supermicro, AMD and startup PioVation have partnered to jointly develop a GenAI solution that offers a pre-validated, turnkey infrastructure for deploying large language models (LLMs). The benefits include lower deployment overhead, enhanced observability, and ensured control of sovereign data.

Partner PioVation is a developer of AI platforms for enterprises, government agencies, and small and midsize businesses. Its products can be run either on-premises or in PioVation’s cloud in Munich, Germany. The company, founded in 2024 by former AMD executive Mazda Sabony, has formed partnerships with several companies, including AMD and Supermicro.

The GenAI solution being offered by the three companies has been designed to scale all the way from compact on-prem clusters up to large-scale multi-tenant cloud environments. And its architecture integrates Supermicro rack-level systems, AMD Instinct GPUs, and PioVation’s agentic AI platform, PioSphere. The result, the companies say, is out-of-the-box agentic AI at any scale.

Full Stack

The Supermicro-AMD-PioVation offering is a full-stack solution. An autonomous microservice chains LLM prompts, invokes domain-specific tools, and integrates with your existing systems via REST (an architectural style for distributed hypermedia systems), gRPC (a remote procedure call framework) or event streams. All of it runs on the pre-validated Supermicro server powered by AMD Instinct GPUs.

Another feature is the solution’s Model Context Protocol (MCP). It lets agents interact with external tools in a way that’s both modular and composable. The MCP also governs how tools are registered, discovered, invoked and composed dynamically at runtime. This includes input/output serialization, maintaining execution context, and enforcing consistency across tool chains. MCP also enables context-aware tool usage, making every agent interoperable, auditable and enterprise-ready from the start.
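The register-discover-invoke pattern described above can be sketched in miniature. This is a hypothetical illustration of the general pattern, not PioVation code; the `ToolRegistry` class and the `summarize` tool are invented for the example:

```python
# Minimal sketch of a tool registry: tools are registered by name,
# discovered at runtime, and invoked through a single interface.
from typing import Callable, Dict, List


class ToolRegistry:
    """Registers tools so an agent can discover and invoke them dynamically."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., object]] = {}

    def register(self, name: str, fn: Callable[..., object]) -> None:
        self._tools[name] = fn

    def discover(self) -> List[str]:
        # An agent can list available tools before deciding what to call.
        return sorted(self._tools)

    def invoke(self, name: str, **kwargs: object) -> object:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)


registry = ToolRegistry()
registry.register("summarize", lambda text: text[:40] + "...")
print(registry.discover())  # -> ['summarize']
print(registry.invoke("summarize", text="A long document " * 10))
```

A real MCP implementation layers input/output serialization, execution context and access policies on top of this core lookup, but the registration and dynamic invocation flow is the same idea.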

The solution is available in three topologies, each designed for different operational scales and use cases:

  • MiniStack: For SMBs, pilots, research and the edge.
  • EdgeCluster: For regulated sites, branches and other locations where high availability is required.
  • Cloud Deployment: For cloud service providers (CSPs), enterprises and AI providers.

All three versions include a unified agent dashboard, role-based access control, and policy enforcement.

Business Benefits

The three partners haven’t forgotten about the need for GenAI to deliver real business results that can keep CEOs and corporate boards happy. To that end, the solution offers benefits that include:

  • Turnkey deployment: PioSphere’s Cloud OS has been prevalidated on the Supermicro platform powered by AMD GPUs.
  • Unified operations stack: A tightly integrated environment eliminates fragmented AI tooling.
  • No-code agent development: A PioVation feature known as AgentStudio lets nontechnical users design, deploy and iterate AI agents using a no-code interface.
  • Sovereign data control: Built-in controls support national and regional compliance frameworks, including Europe’s GDPR and the United States’ HIPAA.
  • Multi-tenant scalability: An organization can create separate, secure environments for different business units or clients, yet they’ll all share a common infrastructure footprint.
  • Integrated LLM operations and agent life-cycle management: Users can integrate any LLM published on the Hugging Face or Kaggle communities with one-click connectors. Other built-in features include RAG (retrieval augmented generation) pipelines and full agent life-cycle tools.
  • Intelligent autoscaling: During workload spikes, the solution’s dynamic autoscaling maintains efficient resource utilization, cost efficiency and seamless performance.
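
The autoscaling bullet describes behavior similar to the proportional scaling rule used by common orchestrators such as Kubernetes. A hedged sketch of that rule follows; the function name, the 70% utilization target and the replica cap are illustrative assumptions, not PioVation’s actual design:

```python
# Illustrative proportional autoscaling rule: grow or shrink the replica
# count so that observed utilization moves back toward a target level.
import math


def desired_replicas(current: int, utilization: float,
                     target: float = 0.7, max_replicas: int = 16) -> int:
    """Return the replica count that brings utilization back to the target."""
    if utilization <= 0:
        return 1  # fully idle: shrink to the floor
    want = math.ceil(current * utilization / target)
    return max(1, min(want, max_replicas))


print(desired_replicas(current=4, utilization=0.9))  # spike -> scale up to 6
print(desired_replicas(current=4, utilization=0.2))  # quiet -> scale down to 2
```

The same proportional idea underlies most production autoscalers; real systems add cooldown windows and smoothing so brief spikes don’t cause thrashing.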

Put it all together, and you have a solution that goes far beyond mere experimentation. The three partners—Supermicro, AMD and PioVation—are serious about helping your GenAI projects deliver serious benefits for the business.


Research Roundup: Cloud infrastructure, smart supply chains, augmented reality, AI tools

Featured content

Research Roundup: Cloud infrastructure, smart supply chains, augmented reality, AI tools

Get briefed on the latest IT market surveys, forecasts and analysis. 


Cloud infrastructure sales are booming. Most supply-chain managers don’t have an AI strategy yet. VR/AR is making a surprising comeback. And nearly half of U.S. adults use GenAI tools.

That’s some of the latest from leading IT market watchers, survey organizations and analysts. And here’s your research roundup. 

Cloud Infrastructure Booming

The market for cloud infrastructure services is robust, with global sales hitting $90.9 billion in this year’s first quarter, a year-on-year rise of 21%, finds market watcher Canalys.

What’s behind the boom? AI, mostly. Canalys says enterprises realize that to deploy AI applications, they first need to strengthen their cloud power.

Also, cloud providers are working to lower the cost of AI usage, in part by investing in infrastructure. In the year’s first quarter, the big three cloud-service providers—AWS, Microsoft Azure and Google Cloud—collectively increased their spending on cloud infrastructure by 24%, according to Canalys.

Few Supply Chains Have AI Strategies

While AI has the potential to transform supply chains, fewer than one in four supply-chain leaders (23%) have a formal AI strategy in place. So finds a new survey by research firm Gartner.

And that’s a problem, says Gartner researcher Benjamin Jury. “Without a structured approach,” he warns, “organizations risk creating inefficient systems that struggle to scale and adapt to evolving business demands.”

The Gartner survey was conducted earlier this year. It reached 120 supply-chain leaders who have deployed AI in their organizations within the last year.

How can supply-chain leaders do better with AI? Gartner recommends three moves:

  • Develop a formal supply-chain AI strategy. It should be both defined and documented.
  • Adopt a Run-Grow-Transform framework. By implementing projects across all three categories, organizations can better allocate resources and deliver quick results.
  • Invest in AI-ready infrastructure. Do this in collaboration with the CIO and other executives.

Virtual Reality’s Comeback

Remember all the excitement about virtual and augmented reality? It’s back.

The global market for AR/VR headsets rebounded in this year’s first quarter, with unit shipments rising 18% year-on-year, according to research firm IDC.

Meta, which changed its name from Facebook in 2021 to reflect its pivot toward the metaverse, now leads the AR/VR business with a 51% market share, IDC finds.

What’s behind the VR comeback? “The market is clearly shifting toward more immersive and versatile experiences,” offers Jitesh Ubrani, an IDC research manager.

Ubrani and colleagues expect even bigger gains ahead. IDC predicts global sales of AR/VR headsets will more than double by 2026, rising from about 5 million units this year to more than 10 million units next year.

IDC also expects the market to shift away from AR and VR and instead toward mixed reality (MR) and extended reality (ER). MR appeals mainly to gamers and consumers. ER will be used for gaming, too, but it should also power smart glasses, enabling AI to assist tasks such as identifying objects in photos and providing instant language translations.

IDC predicts smart glasses will enjoy wide appeal among consumers and businesses alike. Just last week, Meta and sunglasses maker Oakley announced what they call Performance AI glasses, featuring a built-in camera and open-ear speakers.

Do You Use GenAI?

The chances either way are almost even. More than two in five U.S. adults (44%) use Generative AI tools such as ChatGPT at least sometimes. But over half (56%) use these tools rarely or never.

Similarly, U.S. adults are split on whether AI will make life better or worse: 42% believe AI will make their lives somewhat or much worse, while a very close 44% think AI will make their lives somewhat or much better.

These findings come from a new NBC News poll. Powered by SurveyMonkey, the poll was conducted from May 30 to June 10 and received responses from more than 19,400 U.S. adults.

Respondents were also evenly split when asked about the role of AI in schools. Slightly over half the respondents (53%) said integrating AI tools in the classroom would prepare students for the future. Conversely, nearly as many (47%) said they favor prohibiting AI in the classroom.

The NBC survey found that attitudes toward AI were largely unaffected by political leaning. The pollsters asked respondents whether they were Republicans, Democrats or Independents, and differences in responses by group were mostly within the poll’s overall margin of error, which NBC News put at plus or minus 2.1%.

 
