Sponsored by AMD and Supermicro

Capture the full potential of IT

Research Roundup: Tariffs, the data center next door, agentic supply chains, cyber AI, and ransomware


Catch up on the latest IT market research and analysis.


U.S. tariffs could slow IT spending worldwide. Many Americans are okay with a data center next door. Supply chains could start making decisions on their own. AI is both a dangerous cyber threat and a great cyber defense. And ransomware continues to morph.

That’s some of the latest from leading IT market watchers. And here’s your research roundup:

Tariff Tremors

Uncertainty related to President Trump’s tariffs has led IT market watcher IDC to lower its estimate for global IT spending this year.

At the start of this year, IDC expected global IT spending to rise by 10%. Then, in March, the company lowered that figure, saying spending would grow by just 5%. Now IDC, citing rising uncertainty over tariffs, is hedging its bet, pegging growth at anywhere from 5% to 9%.

Regardless of the exact impact on overall IT spending, IDC feels confident that tariffs will indeed have an impact on the IT industry.

In an April blog post, four IDC analysts wrote, “New tariffs will have an inflationary impact on technology prices in the U.S., as well as causing significant disruption to supply chains.”  

The impact of tariffs should be felt most immediately in compute, storage and network hardware as well as datacenter construction, the IDC analysts wrote, adding: “Even sectors such as software and services will be affected if tariffs are longer lived.”

Data Center NIMBY? No

Seven in 10 Americans say they’re comfortable with a data center being built within a few miles of their home—that is, if it’s done sustainably and with community input.

That’s from a survey of 600 U.S. adults conducted for Modine, a provider of thermal-management products.

The survey’s key findings include:

  • Nearly half of U.S. adults (47%) say they’d be fine with a data center being built within five miles of their home.
  • Americans’ top concerns about having a data center nearby are: increased energy demand (cited by 63% of respondents); noise pollution (60%); and lower property values (52%).
  • About six in 10 respondents (62%) say they’d like local data-center owners to contribute to community initiatives such as schools and infrastructure.
  • Slightly over half the respondents (55%) favor tax breaks to encourage responsible data-center development.

Agentic AI & Supply Chains

You may have already heard the term agentic AI. It refers to the idea that artificial intelligence systems can operate autonomously, without human intervention.

IT research firm Gartner predicts that by 2030, fully half of all supply-chain management solutions will include agentic AI capabilities. This means future supply-chain systems will use intelligent agents to make and act on decisions, all without a human’s oversight.
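To make the idea concrete, here’s a minimal sketch of what such an agent loop might look like: a hypothetical replenishment agent that compares a real-time demand forecast against current stock and places orders on its own. All names, numbers and logic here are illustrative, not Gartner’s.

```python
# Illustrative agentic loop: observe -> decide -> act, with no human in the loop.
# The safety factor, SKUs and quantities are hypothetical.

def replenishment_agent(stock: dict, forecast: dict, safety_factor: float = 1.2) -> dict:
    """Decide order quantities so stock covers forecast demand plus a safety margin."""
    orders = {}
    for sku, demand in forecast.items():
        target = demand * safety_factor          # cover forecast plus a 20% buffer
        shortfall = target - stock.get(sku, 0)
        if shortfall > 0:
            orders[sku] = round(shortfall)       # the agent places the order itself
    return orders

stock = {"widget": 100, "gadget": 500}
forecast = {"widget": 150, "gadget": 300}
print(replenishment_agent(stock, forecast))      # widget: 150*1.2 - 100 = 80 units
```

A real agentic system would wrap a loop like this around live data feeds and learned models, but the shape is the same: sense conditions, decide, act.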

Further, these agentic AI systems will provide what Gartner calls a virtual workforce. Working alongside more traditional software applications, AI agents will assist, offload and augment human work.

Gartner also says agentic AI systems could help supply-chain managers improve efficiency and contribute more to their organizations’ profit growth. Mainly, by enhancing resource efficiency, automating complex tasks, and introducing new business models.

“AI agents will autonomously complete tasks without relying on explicit inputs or predefined outcomes,” says Kaitlynn Sommers, a senior analyst at Gartner. “Agents will continuously learn from real-time data and adapt to evolving conditions and complex demands.”

AI: Both Cyber Friend and Cyber Foe

AI is both the greatest threat to cybersecurity and cybersecurity’s greatest defense, say management consultants McKinsey & Co.

AI is reshaping the cybersecurity landscape, write four McKinsey analysts in a new blog post. This technology brings new opportunities, as well as new threats.

For one, conducting a cyberattack is relatively fast and easy with AI. Criminals can use AI to create convincing phishing emails, fake websites and deepfake videos. They can also use machine learning to observe an attack, then modify their tactics based on the results, making future attacks more effective.

But AI is also what McKinsey calls a “game changer” for cybersecurity defense. Organizations can use AI to detect, react to, and recover from attacks with greater speed. And AI-driven anomaly detection can help organizations detect cyberattacks before they escalate.
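As a toy illustration of the anomaly-detection idea (a sketch, not McKinsey’s method), the snippet below flags values that deviate sharply from historical behavior. A simple z-score threshold stands in for the machine-learning models real security tools use; the data is invented.

```python
import statistics

# Toy anomaly detector: flag values far from the historical mean.
# Real AI-driven tools use learned models; a z-score threshold is a stand-in.

def find_anomalies(history: list[float], threshold: float = 2.0) -> list[float]:
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [x for x in history if abs(x - mean) > threshold * stdev]

# Hypothetical daily failed-login counts; the spike could signal a brute-force attack.
logins = [12, 15, 11, 14, 13, 900, 12, 16]
print(find_anomalies(logins))  # [900]
```

Catching that spike on day six, rather than after the attacker succeeds, is exactly the “before they escalate” advantage McKinsey describes.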

Integration of AI into cybersecurity solutions is vital, McKinsey says, especially because more than nine in 10 AI capabilities will come from third-party vendors. With integration, AI can be added to mainstream cyber tools such as zero trust, SASE and security-posture management.

The State of Ransomware: Slightly Worse

Ransomware is getting worse. In 2024, the percentage of users worldwide who were affected by ransomware increased by nearly half a percentage point, says security firm Kaspersky in its 2025 “State of Ransomware” report. That may sound like a small increase, but ransomware criminals focus on the quality of their victims rather than the quantity.

The frequency of attacks varies greatly by geographical region, Kaspersky finds. The highest rate is found in the Middle East, where nearly one in 100 users (0.72%) were attacked in 2024. Next worst was APAC, with an attack rate of 0.6%. The global average was 0.44%.

“Ransomware is one of the most pressing cybersecurity threats facing organizations today,” says Dmitry Galov, head of a Kaspersky research center. “Building cyber awareness at every level is just as important as investing in the right technology.”

New ransomware trends identified by Kaspersky:

  • AI use: Ransomware groups are using AI tools to enhance development and evade detection. One example is FunkSec, a group that uses AI tooling and takes a contrarian approach to ransomware: instead of attacking a few high-value targets, it launches many attacks demanding low ransoms.
  • Ransomware-as-a-Service: Criminals who lack technical development skills can now simply buy a ransomware package on the dark web. There are even ransomware platforms that offer malware, tech support and revenue-sharing affiliate programs.
  • Unconventional vulnerabilities: Attackers are increasingly targeting overlooked entry points. These include IoT devices, smart appliances and misconfigured hardware. In this way, the bad guys can capitalize on expanding attack surfaces created by interconnected systems.
  • LLM proliferation: Criminals can take advantage of large language models sold on the dark web, which lower the technical barriers to creating malicious code, phishing campaigns and social-engineering attacks. One example is LowCode, which provides an AI-assisted drag-and-drop interface for software development.

 

Featured videos


Events


Find AMD & Supermicro Elsewhere

Related Content

Tech Explainer: What’s a NIC? And how can it empower AI?


With the acceleration of AI, the network interface card is playing a new, leading role.


The humble network interface card (NIC) is getting a status boost from AI.

At a fundamental level, the NIC enables one computing device to communicate with others across a network. That network could be a rendering farm run by a small multimedia production house, an enterprise-level data center, or a global network like the internet.

From smartphones to supercomputers, most modern devices use a NIC for this purpose. On laptops, phones and other mobile devices, the NIC typically connects via a wireless antenna. For servers in enterprise data centers, it’s more common to connect the hardware infrastructure with Ethernet cables.

Each NIC—or NIC port, in the case of a multi-port enterprise NIC—has its own media access control (MAC) address. This unique identifier enables the NIC to send and receive the packets addressed to it. Each packet, in turn, is a small chunk of a much larger data set; breaking data into packets is what lets it move across the network at high speed.

Networking for the Enterprise

At the enterprise level, everything needs to be highly capable and powerful, and the NIC is no exception. Organizations operating full-scale data centers rely on NICs to do far more than just send emails and filter packets (the process by which a NIC “watches” a data stream, collecting only the data addressed to its MAC address).

Today’s NICs are also designed to handle complex networking tasks onboard, relieving the host CPU so it can work more efficiently. This process, known as smart offloading, relies on several functions:

  • TCP segmentation offloading: The NIC, rather than the CPU, breaks large chunks of outgoing data into packet-sized segments.
  • Checksum offloading: Here, the NIC independently checks for errors in the data.
  • Receive side scaling: This distributes incoming network traffic across multiple processor cores, preventing any one core from getting bogged down.
  • Remote Direct Memory Access (RDMA): This bypasses the CPU, moving data directly between the memory of two systems—and, in AI clusters, directly into GPU memory.
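To make the checksum-offloading bullet concrete, here’s the 16-bit ones’-complement Internet checksum (RFC 1071) used by IP, TCP and UDP—the calculation NICs commonly perform in hardware so the CPU doesn’t have to. This is a pure-software sketch for illustration; the sample bytes are an arbitrary header fragment.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum (RFC 1071), as used by IP/TCP/UDP."""
    if len(data) % 2:                             # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # sum the data as 16-bit words
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back into the sum
    return ~total & 0xFFFF                        # ones' complement of the result

header = b"\x45\x00\x00\x1c\x00\x01\x00\x00\x40\x11"  # arbitrary example bytes
print(hex(internet_checksum(header)))                 # 0x7ad1
```

Trivial in software, but at 100 Gb/s line rates, doing this per-packet on the CPU adds up—which is exactly why the NIC takes it over.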

Important as these capabilities are, they become even more vital when dealing with AI and machine learning (ML) workloads. By taking pressure off the CPU, modern NICs enable the rest of the system to focus on running these advanced applications and processing their scads of data.

This symbiotic relationship also helps lower a server’s operating temperature and reduce its power usage. The NIC does this by increasing efficiency throughout the system, especially when it comes to the CPU.

Enter the AI NIC

Countless organizations both big and small are clamoring to stake their claims in the AI era. Some are creating entirely new AI and ML applications; others are using the latest AI tools to develop new products that better serve their customers.

Either way, these organizations must deal with the challenges now facing traditional Ethernet networks in AI clusters. Remember, Ethernet was invented over 50 years ago.

AMD has a solution: a revolutionary NIC created for AI workloads, the AMD AI NIC. Recently released, this card is designed to provide the intense communication capabilities demanded by AI and ML models. That includes tightly coupled parallel processing, rapid data transfers and low-latency communications.

AMD says its AI NIC offers a significant advancement in addressing the issues IT managers face as they attempt to reconcile the broad compatibility of an aging network technology with modern AI workloads. It’s a specialized network accelerator explicitly designed to optimize data transfer within back-end AI networks for GPU-to-GPU communication.

To address the challenges of AI workloads, what’s needed is a network that can support distributed computing over multiple GPU nodes with low jitter and RDMA. The AMD AI NIC is designed to manage the unique communication patterns of AI workloads and offer high throughput across all available links. It also offers congestion avoidance, reduced tail latency, scalable performance, and fast job-completion times.

Validated NIC

Following rigorous validation by the engineers at Supermicro, the AMD AI NIC is now supported on the Supermicro 8U GPU Server (AS-8126GS-TNMR). This behemoth is designed specifically for AI, deep learning, high-performance computing (HPC), industrial automation, retail and climate modeling.

In this configuration, AMD’s smart AI-focused NIC can offload networking tasks. This lets the Supermicro SuperServer’s dual AMD EPYC 9000-series processors run at even higher efficiency.

In the Supermicro server, the new AMD AI NIC occupies one of the myriad PCI Express x16 slots. Other optional high-performance PCIe cards include a CPU-to-GPU interconnect and up to eight AMD Instinct GPU accelerators.

In the NIC of time

A chain is only as strong as its weakest link. The chain that connects our ever-expanding global network of AI operations is strengthened by the advent of NICs focused on AI.

As NICs grow more powerful, these advanced network interface cards will help fuel the expansion of the AI/ML applications that power our homes, offices, and everything in between. They’ll also help us bypass communication bottlenecks and speed time to market.

For SMBs and enterprises alike, that’s good news indeed.


 


Meet AMD’s new EPYC CPUs for SMBs—and Supermicro servers that support them


AMD introduced the AMD EPYC 4005 series processors for SMBs and cloud service providers. And Supermicro announced that the new AMD processors are now shipping in several of its servers.


AMD this week introduced the AMD EPYC 4005 series processors. These are purpose-built CPUs designed to bring enterprise-level features and performance to small and medium businesses.

And Supermicro, wasting no time, also announced that several of its servers are now shipping with the new AMD EPYC 4005 CPUs.

EPYC 4005

The new AMD EPYC 4005 series processors are intended for on-prem users and cloud service providers who need powerful but cost-effective solutions in a 3U height form factor.

Target customers include SMBs, departmental and branch-office server users, and hosted IT service providers. Typical workloads for servers powered by the new CPUs will include general-purpose computing, dedicated hosting, code development, retail edge deployments, and content creation, AMD says.

“We’re delivering the right balance of performance, simplicity, and affordability,” says Derek Dicker, AMD’s corporate VP of enterprise and HPC. “That gives our customers and system partners the ability to deploy enterprise-class solutions that solve everyday business challenges.”

The new processors feature AMD’s ‘Zen 5’ core architecture and come in a single-socket package. Depending on model, they offer anywhere from 6 to 16 cores; up to 192GB of dual-channel DDR5 memory; 28 lanes of PCIe Gen 5 connectivity; and boost clock speeds of up to 5.7 GHz. One model of the AMD EPYC 4005 line also includes integrated AMD 3D V-Cache tech for a larger 128MB L3 cache and lower latency.

On a standard 42U rack, servers powered by AMD EPYC 4005 can provide up to 2,080 cores (that’s 13 3U servers x 10 nodes/server x 16 cores/node). That level of density can shrink a user’s space requirements while also lowering their TCO.
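The rack math behind that figure is easy to check (using the numbers cited above):

```python
# Core count for a 42U rack of 3U MicroCloud chassis, per the figures above.
servers_per_rack = 13          # 13 x 3U = 39U, leaving headroom in a 42U rack
nodes_per_server = 10          # 10-node MicroCloud configuration
cores_per_node = 16            # top-end EPYC 4005 core count

total_cores = servers_per_rack * nodes_per_server * cores_per_node
print(total_cores)             # 2080
```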

The new AMD CPUs follow the AMD EPYC 4004 series, introduced this time last year. The EPYC 4004 processors, still available from AMD, use the same AM5 socket as the 4005s.

Supermicro Servers

Also this week, Supermicro announced that several of its servers are now shipping with the new AMD EPYC 4005 series processors. Supermicro also introduced a new MicroCloud 3U server that’s available in 10-node and 5-node versions, both powered by the AMD EPYC 4005 CPUs.

“Supermicro continues to deliver first-to-market innovative rack-scale solutions for a wide range of use cases,” says Mory Lin, Supermicro’s VP of IoT, embedded and edge computing.

Like the AMD EPYC 4005 CPUs, the Supermicro servers are intended for SMBs, departmental and branch offices, and hosted IT service providers.

The new Supermicro MicroCloud 10-node server features single-socket AMD processors (your choice of either 4004 or the new 4005) as well as support for one single-width GPU accelerator card.

Supermicro’s new 5-node MicroCloud server also offers a choice of AMD EPYC 4004 or 4005 series processor. In contrast to the 10-node server, the 5-node version supports one double-width GPU accelerator card.

Supermicro has also added support for the new AMD EPYC 4005 series processors to several of its existing server lines. These servers include 1U, 2U and tower servers.

Have SMB, branch or hosting customers looking for affordable compute power? Tell them about the new AMD EPYC 4005 series processors and the Supermicro servers that support them.

 


Tech Explainer: What are bare metal servers?


Find out why bare metal servers are an important ingredient for cloud services providers—and why they sometimes offer big advantages over virtualized servers.


The term “bare metal” is an apt description for a server class that invites ground-up customization. These machines begin as a blank slate: just hardware, no software. From there, they can be built up to deliver high-performance computing for enterprise-level applications.

Cloud service providers (CSPs) such as AWS and Google Cloud offer bare metal to provide their customers with single-tenancy servers, on which those customers can then run virtual machines and containers of their own.

Once a bare metal server is deployed, the end user can install any OS and application software that’s supported by the system. Then they can customize the system to suit their unique needs.

To bare, or not to bare

The customization options and raw power available via bare metal servers make them a particularly good solution for resource-intensive workloads. These include AI training, machine learning, video rendering, 3D modeling, and complex scientific simulations such as those used to predict the effects of climate change.

These kinds of workloads require consistent CPU and GPU power, high-speed storage and a ton of RAM. With these resources in place, users are free to operate massive databases, data lakes and warehouses.

However, bare metal is not the ideal solution for every use case. It can be complex to set up, which can slow down organizations looking for a quick turnaround or a rapid pivot.

Most users will also need some experience with system management. That’s because setting up bare metal requires IT managers to install and configure an OS and other vital software from scratch.

Bare metal vs. virtual servers

When choosing between bare metal and virtual servers, IT managers must carefully weigh the pros and cons of each solution.

Virtual solutions can be faster and more agile than bare metal. They also cost less. That makes it easier to launch a project quickly, scale it as demand grows, and tear the whole thing down if things don’t go as planned.

Another pro for virtual solutions is the ability to more cost-effectively create a global deployment. This means IT managers can set up a worldwide network of edge locations or global content delivery network (CDN) distribution in minutes.

Bare metal servers don’t always offer the same speedy deployment. Creating a global network of bare metal servers can end up being more expensive and time-consuming than a virtual solution.

The power of single tenancy

So if virtual servers are so compelling, why opt instead for bare metal? The answer has much to do with what’s known as single tenancy.

Each bare metal server is deployed for a single end user. So the user has access to 100% of the server’s resources.

By comparison, virtual servers provisioned by CSPs are nearly always multi-tenant devices. That means users share the hardware and software resources of the server with other users.

This, in turn, can lead to the “noisy neighbor” effect. This occurs when an application or virtual machine uses the majority of available resources, causing performance issues for neighboring users.

Multi-tenancy also introduces additional security concerns. You can’t always be sure your neighbors are good people. On a virtual server, cyber criminals can more easily hack neighboring applications.

Denying access to those neighbors in the first place means single-tenancy bare metal servers present fewer threat vectors.

Is bare metal really bare?

No, not 100%. But it’s not far off.

As far as the customer of a cloud service provider is concerned, bare metal servers come as bare as they can possibly be.

But CSPs need a way to keep track of usage so they can charge customers accordingly. This is accomplished with a thin layer of management software that sits outside the end user’s environment and may reserve a small number of cores on each CPU.

Using this software, the CSP can monitor server uptime and track bandwidth, processing and storage resources. The management software layer also empowers the provider to provision and reboot the system should the need arise.
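As a purely hypothetical sketch of the kind of metering arithmetic such a layer performs—the rates, field names and billing model here are invented for illustration, not any CSP’s actual pricing:

```python
# Hypothetical usage-metering calculation a CSP's management layer might run.
# Rates and field names are invented for illustration only.

def monthly_bill(uptime_hours: float, gb_transferred: float,
                 rate_per_hour: float = 0.40, rate_per_gb: float = 0.05) -> float:
    """Bill = server uptime at an hourly rate, plus metered bandwidth."""
    return round(uptime_hours * rate_per_hour + gb_transferred * rate_per_gb, 2)

# A 30-day month of continuous uptime with 1.5 TB transferred:
print(monthly_bill(uptime_hours=720, gb_transferred=1500))  # 288.0 + 75.0 = 363.0
```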

Evolving Solutions

Enterprise-grade IT infrastructure is evolving at breakneck speed. That includes bare metal servers, which become faster and more flexible with the release of new processing and storage tech.

That could signal more cost-effective solutions for SMBs hoping to become enterprises if their products achieve mass adoption. After all, the world’s best app will always fail if you can’t serve the data fast enough.


 


To make room for AI, modernize your data center


A new report finds the latest AMD-powered Supermicro servers can modernize the data center, lowering TCO and making room for AI systems.


Did you know that dramatic improvements in processor power can enable your corporate customers to lower their total cost of ownership (TCO) by consolidating servers and modernizing their data centers?

Server consolidation is a hot topic in the context of AI. Many data centers are full and running with all the power that’s available. So how can they make room for new AI systems? Also, how can they get the kind of power that today’s AI systems require?

One answer: consolidation.

Four in One

All this is especially relevant in light of a new report from Principled Technologies.

The report, prepared for AMD, finds that an organization that upgrades to new Supermicro servers powered by the current 5th generation AMD EPYC processors can consolidate servers on a 4:1 ratio.

In other words, the level of performance that previously required four older servers can now be delivered with just one.

Further, Principled found that organizations that make this upgrade can also free up data-center space; lower operating costs by up to $2.8 million over five years; shrink power-consumption levels; and reduce the maintenance load on sys admins.

Testing Procedures

Here’s how Principled figured all this out. To start, they obtained two systems: an older legacy server and a new Supermicro server powered by 5th Gen AMD EPYC processors.

Next, Principled’s researchers compared the transactional database performance of the two servers. They did this with HammerDB TPROC-C, an open-source benchmarking tool for online transaction processing (OLTP) workloads.

To ensure the systems were sufficiently loaded, Principled also measured both servers’ CPU and power utilization rates, pushing both servers to 80% CPU core utilization.

Then Principled calculated a consolidation ratio: How many of the older servers would be needed to match the work done by just one new server?

Finally, Principled calculated the expected 5-year costs for software licensing, power, space and maintenance. These calculations were made for both the older and new Supermicro servers, so they could be compared.

The Results

So what did Principled find? Here are the key results:

  • Performance upgrades: The new server, based on AMD 5th Gen EPYC processors, is much more powerful. To match the database performance of just one new server, the testers required four of the older servers.
  • Lower operating costs: Consolidating those four older servers onto just one new server could lower an organization’s TCO by over 60%, saving up to an estimated $2.8 million over five years. The estimated 5-year TCO for the legacy server was $4.68 million, compared with $1.78 million for the new system.
  • Lower software license costs: Much of the savings would come from consolidating software licenses. They’re typically charged on a per-core basis, and the new test server needed only about a third as many cores as did the four older systems: 96 cores on the new system, compared with a total of 256 cores on the four older servers.
  • Reduced power consumption: To run the same benchmark, the new system needed only about 40% of the power required by the four older servers.
  • Lower space and cooling requirements: Space savings were calculated by comparing data-center footprint costs, taking into account the 4:1 consolidation and rack space needed. Cooling costs were factored in, too. The savings here were pretty dramatic, even if the figures were relatively low. The new system’s space costs were just $476, or 75% lower than the legacy system’s cost of $1,904.
  • Reduced maintenance costs: This was estimated with the assumption that one full-time sys admin with an annual salary of roughly $100K is responsible for 100 servers. The savings here brought a cost of over $26K for the older setup down to about $6,500 for the new, for a reduction of 75%.
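The headline figures hang together, as a little arithmetic on the numbers reported above shows:

```python
# Sanity-check the consolidation math from the Principled Technologies results.
legacy_tco, new_tco = 4.68e6, 1.78e6     # 5-year TCO: legacy vs. new server
savings = legacy_tco - new_tco           # ~$2.9M, reported as "up to $2.8 million"
reduction = savings / legacy_tco         # ~62%, i.e. "over 60%"

legacy_cores, new_cores = 256, 96        # four older servers vs. one new server
core_ratio = new_cores / legacy_cores    # 0.375, "about a third as many cores"

space_saving = 1 - 476 / 1904            # 0.75, i.e. 75% lower space cost
print(round(reduction, 2), core_ratio, space_saving)
```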

Implicit in the results, though not actually calculated, is the way these reductions could also free up funding, floor space and other resources that organizations can then use for new AI systems.

So if your customers are grappling with finding new resources for AI, tell them about these test results. Upgrading to servers based on the latest processors could be the answer.


 

 

 


Research roundup: Edge computing, supply chain, AI sentiment, back to the office


Catch up on the latest IT market research, forecasts, surveys and more.


Spending on edge computing is rising. Supply chains are getting automated. AI sentiment depends on who you ask. And back-to-the-office strategies will require new, integrated technology.

That’s some of the latest from leading IT market researchers, analysts and pollsters. And here’s your Performance Intensive Computing research roundup.

Edge Computing: Hot

How hot? Well, global spending on edge solutions, set to approach $261 billion this year, will grow at a compound annual growth rate (CAGR) of nearly 14%, reaching $380 billion by 2028, predicts market watcher IDC.
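Those two forecast numbers imply the growth rate directly. A quick check, assuming this year is the base and 2028 the endpoint (three compounding years):

```python
# Check IDC's CAGR: $261B this year growing to $380B by 2028.
base, future, years = 261, 380, 3        # assumes 3 years of compounding
cagr = (future / base) ** (1 / years) - 1
print(f"{cagr:.1%}")                     # ~13.3%, consistent with "nearly 14%"
```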

"Most industries benefit from the ability to process data closer to the source, leading to faster decision-making, improved security and cost savings,” says IDC researcher Alexandra Rotaru. Those industries, she adds, include retail, industrial manufacturing, utilities, high tech, healthcare and life sciences.

Retail and services will account for nearly 28% of global spending on edge solutions this year, making it the leading vertical sector for edge spending, IDC expects. Use cases for this sector include video analytics, real-time carrier performance, and optimized operations.

Edge computing’s fastest-growing sector over the next five years will be financial services, IDC says. That sector’s use cases—including augmented fraud analysis—should spur its edge-solutions spending to exceed a CAGR of 15%.

Not Your Father’s Supply Chain

If you think supply chain managers still rely on clipboards and spreadsheets, think again. There’s a brave new world of supply chain technology that includes agentic AI, ambient invisible intelligence and an augmented connected workforce, according to a new report from IT advisors Gartner. If you don’t know what those are, read on.

Supply chain managers have been under pressure since at least the pandemic. Now they’re looking to boost productivity, gain value from digital investments, and adopt new and innovative operating models. To do all this, Gartner says, they’ll adopt advanced supply-chain tech, including:

  • Agentic AI: The term “agentic” basically means AI systems that can make decisions and solve problems autonomously—that is, without human intervention. Supply chains can use it to adjust stock levels based on real-time demand forecasts.
  • Ambient Invisible Intelligence: This technology provides real-time visibility into end-to-end supply chains with small, inexpensive smart tags and sensors. It’s especially useful for monitoring food and other perishables.
  • Augmented connected workforce: By digitizing standard operating procedures, this technology aims to fill skills gaps in the supply chain workforce.
  • Decision Intelligence: Decisions are being automated by this combo of decision modeling, AI and analytics. The technology can also be used to improve the quality of automated decisions over time.

AI Sentiment? Depends on Who You Ask

Is artificial intelligence a positive force leading to innovation, higher productivity and better decisions? Or a negative leading to unemployment and inaccurate results? As a slew of recent surveys show, it depends on who you ask:

  • Nearly three-quarters (72%) of small-business owners have a positive view of AI, finds a survey of 500 business owners by Paychex. Of those small-biz owners now using AI, 66% say it’s improved productivity. Others say they’ve enjoyed AI-driven cost savings (cited by 44%), revenue growth (40%) and improved recruiting (35%).
  • The same percentage (72%) of C-suite executives say their companies have faced challenges when adopting Generative AI, according to a survey conducted by Writer, a GenAI provider. These challenges include internal power struggles, poor return on investment (ROI), underperforming tools, and clashing perspectives among executives and employers.
  • AI can help humans overcome our biases, say nearly half the 2,000 Americans and Canadians recently polled by AI litigation platform Alexi. In addition, nearly three-quarters of them (72%) also support increasing AI literacy in school curriculums by 2026. 
  • Just over half of U.S. workers (52%) say they’re worried about the future impact of AI, and nearly a third (32%) fear AI will lead to fewer job opportunities for them in the future, finds a Pew Research Center survey of nearly 5,300 employed U.S. adults. Only 6% of those Pew surveyed believe workplace AI will lead to more job opportunities for them in the long term.

Back to the Office? Consolidate Tech

Still working from home? Maybe not for long. About one in three businesses (34%) plan to increase office attendance, according to a survey of 200 business executives worldwide conducted by Eptura, a provider of workplace systems.

However, the transition back to the office isn’t always going smoothly, and Eptura says one reason is disconnected technology. The company’s survey finds that half the respondents (50%) manage workplace operations with an average of 17 standalone technologies.

To make sense of this proliferation of systems, formats and dashboards, companies are turning to human intervention. More than a third of organizations Eptura surveyed (37%) say they use 11 or more full-time employees to collate, analyze and report on workplace data.

The solution, Eptura says, will come with a unified operational approach with integrated technology across the enterprise. To get there, companies may need to consolidate operational data; implement AI to enhance the workforce experience (planned by 77% of respondents); and hire a new digital workplace leader, a move already in the works at three-quarters of those surveyed.

 


Healthcare in the spotlight: Big challenges, big tech


To meet some of their industry’s toughest challenges, healthcare providers are turning to advanced technology.

Learn More about this topic
  • Applications:
  • Featured Technologies:

Healthcare providers face some tough challenges. Advanced technology can help.

As a recent report from consultants McKinsey & Co. points out, healthcare providers are dealing with some big challenges. These include rising costs, workforce shortages, an aging population, and increased competition from nontraditional parties.

Another challenge: Consumers expect their healthcare providers to offer new capabilities, such as digital scheduling and telemedicine, as well as better experiences.

One way healthcare providers hope to meet challenges on both fronts is with advanced technology. Three-quarters of U.S. healthcare providers increased their IT spending in the last year, according to a survey conducted by consultants Bain & Co. The same survey found that 15% of healthcare providers already have an AI strategy in place, up from just 5% in 2023.

Generative AI is showing potential, too. Another survey, this one done by McKinsey, finds that over 70% of healthcare organizations are now either pursuing GenAI proofs-of-concept or are already implementing GenAI solutions.

Dynamic Duo

There’s a catch to all this: As healthcare providers adopt AI, they’re finding that the required datasets and advanced analytics don’t run well on their legacy IT systems.

To help, Supermicro and AMD are working together. They’re offering healthcare providers heavy-duty compute delivered at rack scale.

Supermicro servers powered by AMD Instinct MI300X GPUs are designed to accelerate AI and HPC workloads in healthcare. They offer the levels of performance, density and efficiency healthcare providers need to improve patient outcomes.

The AMD Instinct MI300X is designed to deliver high performance for GenAI workloads and HPC applications. It packs no fewer than 304 high-throughput compute units, along with AI-specific functions and 192GB of HBM3 memory, all of it based on AMD’s CDNA 3 architecture.

Healthcare providers can use Supermicro servers powered by AMD GPUs for next-generation research and treatments. These could include advanced drug discovery, enhanced diagnostics and imaging, risk assessments and personal care, and increased patient support with self-service tools and real-time edge analytics.

Supermicro points out that its servers powered by AMD Instinct GPUs deliver massive compute with rack-scale flexibility, as well as high levels of power efficiency.

Performance:

  • The powerful combination of CPUs, GPUs and HBM3 memory accelerates HPC and AI workloads.
  • HBM3 memory offers capacities of up to 192GB dedicated to the GPUs.
  • Complete solutions ship pre-validated, ready for instant deployment.
  • Double-precision (FP64) performance reaches up to 163.4 TFLOPS.

Flexibility:

  • Proven AI building-block architecture streamlines deployment at scale for the largest AI models.
  • An open AI ecosystem with AMD ROCm open software.
  • A unified computing platform with AMD Instinct MI300X plus AMD Infinity fabric and infrastructure.
  • Thanks to a modular design and build, users move faster to the correct configuration.

Efficiency:

  • Dual-zone cooling innovation, used by some of the most efficient supercomputers on the Green500 supercomputer list.
  • Improved density with 3rd Gen AMD CDNA, delivering 19,456 stream cores.
  • Chip-level power intelligence enables the AMD Instinct MI300X to deliver strong performance per watt.
  • Purpose-built silicon design of the 3rd Gen AMD CDNA combines 5nm and 6nm fabrication processes.

Are your healthcare clients looking to unleash the potential of their data? Then tell them about Supermicro systems powered by AMD Instinct MI300X GPUs.

Tech Explainer: What is Quantum Computing, and How Does It Work?

Quantum computing promises to solve problems faster by investigating many possible solutions at once. That’s far easier said than done.

Quantum computing has the potential to alter life as we know it. If, that is, we can figure out how to make the technology work on a massive scale.

This emerging technology is full of promise. At least in theory, it’s powerful enough to help us cure our most insidious diseases, usher in an era of artificial general intelligence (AGI), and enable us to explore neighboring galaxies.

Way, Way Faster

Quantum computing offers a way to solve these kinds of highly complex problems by investigating many possible solutions at once.

To understand why this is so important, imagine a robot that’s attempting to find its way through an enormous maze. First, the robot acts as a human might, investigating each possible route, one at a time. Because the maze is so big and has so many possible pathways, this method could take the robot days, weeks or even years to complete.

Now imagine that instead, the robot can instantaneously clone itself, sending each new instance to investigate a potential route. This method would produce results many orders of magnitude faster than the one-at-a-time method.

And that is the promise offered by quantum computing.

Quantum Mechanics

To do all this heavy lifting, quantum computers behave in ways that may seem mysterious.

As you probably know, today’s standard computers operate using bits—binary switches that at any given moment have a value of either 0 or 1. But quantum computers run differently. They employ qubits (short for quantum bits), each of which can represent 0, 1—or both at the same time.

The ability of an object to be in two states at once? Yes. It’s a fundamental aspect of quantum mechanics known as superposition.

Leveraging this ability at the bit level enables quantum computers to significantly reduce the time they need to solve problems. Particularly valuable examples of this include defeating encryption, decoding human physiology, even theorizing the mechanics of light-speed travel.

In other words, Star Trek stuff, pure and simple.
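
To make the idea concrete, here's a minimal sketch of a single qubit, written for illustration in NumPy rather than on any real quantum hardware. The Hadamard gate rotates the definite state 0 into an equal superposition, and the squared amplitudes of the state vector give the odds of each measurement outcome:

```python
import numpy as np

# A classical bit is 0 or 1. A qubit is a 2-element complex state
# vector whose squared amplitudes are the probabilities of measuring
# 0 or 1.
zero = np.array([1, 0], dtype=complex)  # the definite |0> state

# The Hadamard gate rotates |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ zero
probs = np.abs(state) ** 2

print(probs)  # [0.5 0.5] -- a 50/50 chance of reading out 0 or 1
```

With more qubits, the state vector doubles in size for each qubit added, which is exactly the exponential breadth the maze-running robot analogy describes.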

Not So Fast?

So why can’t you buy a quantum computer at your local Best Buy? It turns out that many factors have kept the promise of quantum computing just out of reach.

One of the most prevalent is errors at the qubit level. Qubits have a nasty habit of exchanging information with their environment.

By analogy, imagine spinning a basketball on your fingertip, Harlem Globetrotter style. The fast-spinning ball exists in a delicate state. Even tiny disturbances—such as air currents or ambient vibrations—could make the ball wobble and eventually fall.

A similar situation exists for quantum computers. Small environmental disturbances can corrupt qubits, and the errors compound as systems grow: the more qubits you use, the more errors you get. Cross a certain threshold, and the errors render a quantum computer no more powerful than today’s standard computers.
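
The scaling problem can be sketched with a toy model. Assuming, purely for illustration, that each qubit independently stays error-free with some fixed probability per step, the chance that an entire register survives shrinks exponentially with qubit count:

```python
# Toy model, not real hardware data: if each qubit independently
# survives a step with probability (1 - p), an n-qubit register's
# chance of staying entirely error-free decays exponentially in n.
def error_free_probability(n_qubits: int, p_error: float = 0.001) -> float:
    return (1 - p_error) ** n_qubits

for n in (10, 1_000, 100_000):
    print(f"{n:>7} qubits: {error_free_probability(n):.4f}")
```

This is why error correction, which spends many physical qubits to encode one reliable logical qubit, is the field's central engineering challenge.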

Engineers are making progress in their efforts to solve this problem. For example, a French startup with the unlikely name of Alice & Bob was recently funded to the tune of €100 million to develop a new approach to quantum error correction.

Similarly, Google recently announced Willow, a new quantum computing chip the company says can reduce errors exponentially as it scales up. If a recent blog post by Hartmut Neven, who leads Google Quantum AI, is right, Google has solved a 30-year-old challenge in quantum error correction.

The Key: R&D

AMD is also attempting to knock down some common quantum computing roadblocks.

The company filed a patent in 2021 titled “Look Ahead Teleportation for Reliable Computation in Multi-SIMD Quantum Processor.” AMD says this approach improves the reliability of quantum computing systems and reduces the number of qubits required. Such efforts could advance quantum computing scalability and error correction.

AMD has also created the Zynq UltraScale+ RFSoC, the industry’s only single-chip adaptable radio platform. The Zynq creates high-accuracy, high-speed pulse sequences to control qubits.

Companies like AMD partner Riverlane are using this cutting-edge technology to better control qubits and reduce errors.

When Will We Be There?

Not even a quantum computer can predict the future. But some experts say we could still be 10 to 20 years away from deploying quantum computing on a scale comparable to the ubiquity of the computers we use today.

In the near term, the most powerful tech companies—including AMD and Supermicro—will be working to harness the massive power of qubits.

To achieve their loftiest goals, however, they’ll need to revolutionize scalability and error correction. Only then can we deploy not just hundreds of qubits, but millions.

Once that code is cracked, there’s no telling where we’ll go from there.

 


Research Roundup: IT & cloud infrastructure spending rise, tech jobs stay strong, 2 security threats worsen

Catch up on the latest IT industry trends and statistics from leading market watchers and analysts.

Three of every four CFOs plan to increase their organizations’ IT spending this year. Spending on cloud infrastructure services rose 20% last year. Unemployment among IT workers is lower than the national average. And two types of cyber attacks are bigger threats than ever.

That’s some of the latest from leading IT industry watchers and researchers. And here’s your Performance Intensive Computing roundup.

CFOs: More IT Spending

If it’s true that a rising tide lifts all boats, you might prepare to set sail now. A new survey finds that a majority of corporate CFOs plan to boost their technology budgets this year.

The survey, conducted this past fall by research group Gartner, reached just over 300 CFOs and other senior finance leaders. Gartner published its findings this month, and they include:

  • Over three-quarters of CFOs surveyed (77%) plan to boost spending in the technology category this year.
  • Nearly half the CFOs (47%) plan to increase technology spending by 10% or more this year compared with last year.
  • Nearly a third (30%) plan to increase technology spending by 4% to 9% year-on-year.
  • And fewer than one in 10 CFOs (9%) plan to decrease technology spending this year.

Cloud Infrastructure: Spending Rises

One of those lifted ships: cloud infrastructure services.

In the fourth quarter of 2024, global spending on these services rose 20% year-on-year, according to new metrics from market watcher Canalys.

Global spending for the full year also rose 20%, Canalys said. Spending on cloud infrastructure services hit $321.3 billion last year, up from $267.7 billion in 2023.
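
Those figures are easy to sanity-check: the year-on-year growth rate falls straight out of the two annual totals.

```python
# Annual cloud infrastructure services spending, per Canalys ($ billions).
spend_2023 = 267.7
spend_2024 = 321.3

growth_pct = (spend_2024 / spend_2023 - 1) * 100
print(f"Year-on-year growth: {growth_pct:.1f}%")  # 20.0%
```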

The key driver of the growth? That would be AI. The technology “significantly accelerated” cloud adoption, Canalys says.

Looking ahead, Canalys expects global spending on cloud infrastructure services this year to rise by another 19%.

Tech Employment: Mostly Strong

Also on an upswing: technology employment.

New figures from the U.S. Bureau of Labor Statistics show that across all sectors of the U.S. economy, tech occupations grew by about 228,000 jobs.

Within the tech industry alone, the picture was more mixed. More than 13,700 jobs were filled in IT services and software development, but in telecom, 7,900 workers lost their jobs.

Tech is still a good industry to work in. The industry’s unemployment rate in January was 2.9%, compared with a national rate of 4%.

“Tech hiring activity was solid across the key categories,” says Tim Herbert, chief research officer at CompTIA, an industry trade group. “Employers continue to balance the need for foundational tech talent and skills with the push into next-gen fields.”

Security: Phishing, DDoS Both Worsen

Two kinds of cyber threats are getting worse:

  • The number of phishing attempts blocked worldwide last year by Kaspersky rose 26% over the previous year.
  • Distributed Denial of Service (DDoS) attacks increased by 82% last year, according to a new report from Zayo Group.

Kaspersky, a cybersecurity and digital privacy company, says it blocked more than 893 million phishing attempts last year, up from 710 million in 2023.

In many instances, the attackers mimicked the websites and social media feeds of well-known brands, including Airbnb, Booking and TikTok. Others falsely presented product giveaways from celebrities. In one, actress Jennifer Aniston was falsely shown promoting a giveaway of 10,000 laptop computers — a giveaway that did not exist.

Separately, Zayo Group, a provider of communications infrastructure, has published its biannual DDoS insights report, and the findings aren’t pretty. The attack volume rose from 90,000 incidents in 2023 to 165,000 incidents last year.

In a DDoS attack, the bad guys make a machine or network resource unavailable by disrupting the services of a host connected to a network. Often they do this by flooding the target system with requests, overloading it and preventing legitimate requests from being fulfilled.
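
That flooding behavior is what the simplest mitigations look for. Here's a minimal sliding-window sketch, with invented thresholds for illustration rather than a production defense, that flags any source sending too many requests per second:

```python
from collections import deque

# Illustrative sketch, not a production defense: flag any source IP
# that exceeds a request threshold within a sliding time window --
# the basic signal behind many DDoS mitigations.
class FloodDetector:
    def __init__(self, window_seconds: float = 1.0, max_requests: int = 100):
        self.window = window_seconds
        self.max_requests = max_requests
        self.hits: dict[str, deque] = {}

    def allow(self, ip: str, now: float) -> bool:
        q = self.hits.setdefault(ip, deque())
        q.append(now)
        while q and now - q[0] > self.window:  # drop aged-out timestamps
            q.popleft()
        return len(q) <= self.max_requests

detector = FloodDetector()
# 150 requests from one address within 0.15 seconds: the 101st is refused.
results = [detector.allow("203.0.113.7", now=i * 0.001) for i in range(150)]
print(results[99], results[100])  # True False
```

Real mitigations are far more involved, of course; as the report notes, attackers now distribute traffic across large botnets precisely to evade this kind of per-source counting.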

In one worrisome change, the bad guys are increasing the scale of their DDoS attacks by using large botnets, compromised IoT devices and AI.

“The sophistication of DDoS attacks continues to grow,” says Max Clauson, a senior VP at Zayo. “Cybercriminals are finding ways to exploit cloud services, higher-bandwidth availability, and new vulnerabilities in software and network protocols.”

Also, Zayo finds the targets of DDoS attacks are shifting:

  • Telecom is still the most targeted sector, representing 42% of all observed incidents. But that’s down from 48% in 2023.
  • Attacks on the finance industry grew. In 2023, finance represented just 3.5% of all observed incidents. In 2024, that share doubled to 7%.
  • In healthcare, the total number of DDoS attacks more than tripled from 2023 to 2024, rising by a whopping 223%.

 


Tech Explainer: What is edge computing — and why does it matter?

Edge computing, once exotic, is now a core aspect of modern IT infrastructure.

Edge computing is a vital aspect of our modern IT infrastructure. Its use can reduce latency, minimize bandwidth usage, and shorten response times.

This distributed computing methodology enables organizations to process data closer to its source and make decisions faster. This is referred to as operating at the edge.

By contrast, operating at the core refers to sending data to centralized data centers and cloud environments for processing.

The edge is also a big and fast-growing business. Last year, global spending on edge computing rose by 14%, totaling $228 billion, according to market watcher IDC.

Looking ahead, IDC predicts this spending will increase to $378 billion by 2028, for a five-year compound annual growth rate (CAGR) of nearly 18%. Driving this growth will be high demand for real-time analytics, automation and enhanced customer experiences.
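
As a rough check, a compound annual growth rate can be recovered from any two endpoint figures. Compounding the 2024 and 2028 numbers above over four years gives roughly 13.5% per year; IDC's "nearly 18%" figure is quoted over a five-year window, which presumably starts from a base year earlier than the 2024 total cited here.

```python
# Compound annual growth rate between two endpoint values.
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# $228B in 2024 growing to $378B in 2028 is four years of compounding.
rate = cagr(228, 378, 4)
print(f"{rate:.1%}")  # 13.5%
```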

How does edge computing work?

Fundamentally, edge computing operates pretty much the same way that other types of computing do. The big difference is the location of the computing infrastructure relative to devices that collect the data.

For instance, a telecommunications provider like Verizon operates at the edge to better serve its customers. Rather than sending customer data to a central location, a telco can process it closer to the source.

An edge node’s proximity to end users can dramatically reduce the time it takes to transfer information to and from each user. This delay is known as latency, and moving computing to the edge reduces it. Edge computing can also lower data-error rates and demand for costly data-center space.
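
Distance alone sets a floor on that delay. As a back-of-the-envelope sketch (the distances below are illustrative, not from the article), light in optical fiber covers roughly 200,000 km per second, so the round trip to a nearby edge node beats one to a distant core data center before any processing even begins:

```python
# Light travels through optical fiber at roughly 200,000 km/s, so
# round-trip distance sets a hard lower bound on network latency.
FIBER_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBER_KM_PER_S * 1000

print(f"Edge node (50 km away):      {round_trip_ms(50):.1f} ms")
print(f"Core center (2,000 km away): {round_trip_ms(2000):.1f} ms")
```

Queuing, routing and processing add more delay on top of this floor, which is why real-world gaps between edge and core are often larger still.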

For a telco application of edge computing, the flow of data would look something like this:

1.   Users working with their smartphones, PCs and other devices create and request data. Because this happens in their homes, offices or anywhere else they happen to be, the data is said to have been created at the edge.

2.   Next, this customer data is processed by what are known as edge nodes. These are edge computing infrastructure devices placed near primary data sources.

3.   Next, the edge nodes filter the user data with algorithms and AI-enabled processing. Then the nodes send to the cloud only the most relevant data. This helps reduce bandwidth usage and costs.
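
Step 3 above can be sketched in a few lines. In this hypothetical example, an edge node screens raw sensor readings locally and forwards only out-of-range values upstream; the field names and thresholds are invented for illustration:

```python
# Hypothetical edge-node filter: keep routine readings local and
# forward only anomalous ones upstream, cutting cloud bandwidth.
def filter_for_cloud(readings, lo=10.0, hi=90.0):
    return [r for r in readings if not (lo <= r["value"] <= hi)]

raw = [{"sensor": "s1", "value": v} for v in (42.0, 97.5, 55.1, 3.2, 60.0)]
to_cloud = filter_for_cloud(raw)

print(f"{len(raw)} readings in, {len(to_cloud)} forwarded")  # 5 in, 2 forwarded
```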

Edge is Everywhere

Many verticals now rely on edge computing to increase efficiency and better serve their customers. These include energy providers, game developers and IoT appliance manufacturers.

One big vertical for the edge is retail, where major brands rely on edge computing to collect data from shoppers in real time. This helps retailers manage their stock, identify new sales opportunities, reduce shrinkage (that is, theft), and offer unique deals to their customers.

Other areas for the edge include “smart roads.” Here, roadside sensors are used to collect and process data locally to assess traffic conditions and maintenance. In addition, the reduced latency and hyper-locality provided by edge computing can speed communications, paring precious seconds when first responders are called to the scene of an accident.

Inner Workings

Like most modern computers, edge nodes rely on a laundry list of digital components. At the top of that list are processors such as the AMD EPYC Embedded 9004 and 8004 series.

AMD’s latest embedded processors are designed to balance performance and efficiency. The company’s ‘Zen 4’ and ‘Zen 4c’ 5-nanometer core architecture is optimized for always-on embedded systems. And with up to 96 cores operating as fast as 4.15 GHz, these processors can handle the AI-heavy workloads increasingly common to edge computing.

Zoom out from the smallest component to the largest, and you’re likely to find a density- and power-optimized edge platform like the Supermicro H13 WIO.

Systems like these are designed specifically for edge operations. Powered by either AC or DC current for maximum flexibility, the H13 WIO can operate at a scant 80 watts TDP. Yet to handle the most resource-intensive applications, it can scale up to 64 cores.

Getting Edgier

The near future of edge computing promises to be fascinating. As more users sign up for new services, enterprises will have to expand their edge networks to keep up with demand.

What tools will they use? To find out, see the latest edge tech from AMD and Supermicro at this year’s MWC, which kicks off in Barcelona, Spain, on March 3.
