AMD and Supermicro: Pioneering AI Solutions

Bringing AMD Instinct to the Forefront

In the constantly evolving landscape of AI and machine learning, the synergy between hardware and software is paramount. Enter AMD and Supermicro, two industry titans who have joined forces to empower organizations in the new world of AI with cutting-edge solutions. Their shared vision? To enable organizations to unlock the full potential of AI workloads, from training massive language models to accelerating complex simulations.

The AMD Instinct MI300 Series: Changing The AI Acceleration Paradigm

At the heart of this collaboration lies the AMD Instinct MI300 Series—a family of accelerators designed to redefine performance boundaries. These solutions pair high-performance AMD EPYC™ 9004 Series CPUs with powerful AMD Instinct™ MI300X GPU accelerators, each carrying 192GB of HBM3 memory, creating a formidable force for AI, HPC and technical computing.

Supermicro’s H13 Generation of GPU Servers

Supermicro’s H13 generation of GPU Servers serves as the canvas for this technological masterpiece. Optimized for leading-edge performance and efficiency, these servers integrate seamlessly with the AMD Instinct MI300 Series. Let’s explore the highlights:

8-GPU Systems for Large-Scale AI Training:

  • Supermicro’s 8-GPU servers, equipped with AMD Instinct MI300X OAM accelerators, offer raw acceleration power. The AMD Infinity Fabric™ Links enable up to 896GB/s of peak theoretical P2P I/O bandwidth (the quick arithmetic check below shows how that figure decomposes), while 1.5TB of HBM3 GPU memory fuels large-scale AI models.
  • These servers are ideal for LLM inference and for training language models with trillions of parameters, minimizing training time and inference latency while lowering TCO and maximizing throughput.
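
Those headline numbers decompose cleanly. Here’s a quick back-of-the-envelope check, assuming AMD’s published figures of seven Infinity Fabric links per MI300X at 128GB/s each, and 192GB of HBM3 per GPU:

```python
# Peak theoretical P2P I/O bandwidth per MI300X:
# 7 Infinity Fabric links x 128 GB/s per link (published AMD figures).
links_per_gpu = 7
gb_per_s_per_link = 128
print(links_per_gpu * gb_per_s_per_link)  # 896 GB/s

# Aggregate HBM3 capacity per server node: 8 GPUs x 192 GB each.
print(8 * 192)  # 1536 GB, i.e., 1.5TB
```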

Benchmarking Excellence

But what about real-world performance? Fear not! Supermicro’s ongoing testing and benchmarking efforts have yielded remarkable results. Continued engagement between the AMD and Supermicro performance teams enabled Supermicro to test pre-release ROCm versions with the latest performance optimizations, as well as publicly released optimizations like Flash Attention 2 and vLLM. The Supermicro AMD-based system AS -8125GS-TNMR2 showcases AI inference prowess, especially on models like Llama-2 70B, Llama-2 13B and Bloom 176B. The performance? Equal to or better than AMD’s published results from the Dec. 6 Advancing AI event.
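
For readers who want to see what such an inference run looks like in practice, here’s a minimal sketch using vLLM’s Python API. It assumes a ROCm-enabled vLLM build and local access to the Llama-2 70B weights; it is illustrative only, not Supermicro’s actual test harness:

```python
# Hypothetical sketch: serving Llama-2 70B with vLLM on an 8-GPU system.
# Assumes a ROCm-enabled vLLM install and downloaded model weights.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-70b-hf",  # illustrative model identifier
    tensor_parallel_size=8,             # shard the model across all 8 GPUs
)
params = SamplingParams(temperature=0.8, max_tokens=256)
outputs = llm.generate(["Explain HBM3 memory in one paragraph."], params)
print(outputs[0].outputs[0].text)
```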


Charles Liang’s Vision

In the words of Charles Liang, President and CEO of Supermicro:

“We are very excited to expand our rack scale Total IT Solutions for AI training with the latest generation of AMD Instinct accelerators. Our proven architecture allows for fully integrated liquid cooling solutions, giving customers a competitive advantage.”

Conclusion

The AMD-Supermicro partnership isn’t just about hardware and software stacks; it’s about pushing boundaries, accelerating breakthroughs, and shaping the future of AI. So, as we raise our virtual glasses, let’s toast to innovation, collaboration, and the relentless pursuit of performance and excellence.


10 best practices for scaling the CSP data center — Part 1

Cloud service providers, here are 10 best practices—courtesy of Supermicro—that you can follow for designing and deploying rack-scale data centers. All are based on Supermicro’s real-world experience with customers around the world.

Best Practice No. 1: First standardize, then scale

First, select a configuration of compute, storage and networking. Then scale these configurations up and down into setups you designate as small, medium and large.

Later, you can deploy these standard configurations at various data centers with different numbers of users, workload sizes and growth estimates.

Best Practice No. 2: Optimize the configuration

Good as Best Practice No. 1 is, it may not work if you handle a very wide range of workloads. If that’s the case, then you may want to instead optimize the configuration.

Here’s how. First, run the software on the rack configuration to determine the best mix of CPUs, including cores, memory, storage and I/O. Then consider setting up different sets of optimized configurations.

For example, you might send AI training workloads to GPU-optimized servers, while a database application runs on a standard 2-socket CPU system.
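
As an illustration only, that routing decision can start as something as simple as a lookup table; the pool names and workload labels below are hypothetical:

```python
# Hypothetical sketch: map workload types to optimized server pools.
OPTIMIZED_POOLS = {
    "ai_training": "gpu-optimized",      # e.g., GPU-dense nodes
    "database":    "2-socket-standard",  # standard 2-socket CPU systems
    "web":         "2-socket-standard",
}

def route(workload_type: str) -> str:
    """Return the server pool for a workload; default to standard CPU nodes."""
    return OPTIMIZED_POOLS.get(workload_type, "2-socket-standard")

print(route("ai_training"))  # gpu-optimized
```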

Best Practice No. 3: Plan for tech refreshes 

When it comes to technology, the only constant is change itself. That doesn’t mean you can just wait around for the latest, greatest upgrade. Instead, do some strategic planning.

That might mean talking with key suppliers about their road maps. What are their plans for transitions, costs, supply chains and more?

Also consider that leading suppliers now let you upgrade some server components without having to replace the entire chassis. That reduces waste. It could also help you get more performance from your current racks, servers and power budget.

Best Practice No. 4: Look for new architectures

New architectures can help you increase power at lower cost. For example, AMD and Supermicro offer data-center accelerators that let you run AI workloads on a mix of GPUs and CPUs, a less costly alternative to all-GPU setups.

To find out if you could benefit from new architectures, talk with your suppliers about running proof-of-concept (PoC) trials of their new technologies. In other words, try before you buy.

Best Practice No. 5: Create a support plan

Sure, you need to run 24x7, but that doesn’t mean you have to pay third parties for all of that. Instead, determine what level of support you can provide in-house. For what remains, you can either staff up or outsource.

When you do outsource, make sure your supplier has tested your software stack before. You want to be sure that, should you have a problem, the supplier will be able to respond quickly and correctly.


10 best practices for scaling the CSP data center — Part 2

Cloud service providers, here are 5 more best practices—courtesy of Supermicro—that you can follow for designing and deploying rack-scale data centers. All are based on Supermicro’s real-world experience with customers around the world.

Best Practice No. 6: Design at the data-center level

Consider your entire data center as a single unit, complete with its range of both strengths and weaknesses. This will help you tackle such macro-level issues as the separation of hot and cold aisles, forced air cooling, and the size of chillers and fans.

If you’re planning an entirely new data center, remember to include a discussion of cooling tech. Why? Because the physical infrastructure needed for an air-cooled center is quite different from that needed for liquid cooling.

Best Practice No. 7: Understand & consider liquid cooling

We’re approaching the limits of air cooling. A new approach, one based on liquid cooling, promises to keep processors and accelerators running within their design limits.

Liquid cooling can also reduce a data center’s Power Usage Effectiveness (PUE) ratio, which compares the total energy a facility consumes with the energy used by its computing equipment alone. This cooling tech can also minimize the need for HVAC cooling power.
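
For reference, PUE is a simple ratio, and the math fits in a few lines. The figures below are illustrative, not measurements:

```python
# PUE = total facility energy / IT equipment energy. 1.0 is the ideal.
total_facility_kwh = 1_500_000  # IT gear plus cooling, lighting, etc.
it_equipment_kwh = 1_000_000    # servers, storage and networking only

pue = total_facility_kwh / it_equipment_kwh
print(f"PUE: {pue:.2f}")  # 1.50 -- a third of the power never reaches IT gear
```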

Best Practice No. 8: Measure what matters

You can’t improve what you don’t measure. So make sure you are measuring such important factors as your data center’s CPU, storage and network utilization.

Good tools are available that can take these measurements at the cluster level. These tools can also identify both bottlenecks and levels of component over- or under-use.
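
At the single-node level, the raw counters are easy to sample. Here’s a minimal sketch using the open-source psutil library; cluster-level tools aggregate these same counters across many nodes:

```python
# Minimal utilization sampler (pip install psutil).
import psutil

print(f"CPU:     {psutil.cpu_percent(interval=1):.1f}%")
print(f"Memory:  {psutil.virtual_memory().percent:.1f}%")
print(f"Disk:    {psutil.disk_usage('/').percent:.1f}%")
net = psutil.net_io_counters()
print(f"Network: {net.bytes_sent} B sent, {net.bytes_recv} B received")
```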

Best Practice No. 9: Manage jobs better

A CSP’s data center is typically used simultaneously by many customers. That pretty much means using a job-management scheduler tool.

One tricky issue is over-demand. That is, what do you do if you lack enough resources to satisfy all requests for compute, storage or networking? A job scheduler can help here, too.
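
To make that concrete, here’s a toy sketch of priority-based queuing under over-demand, using nothing beyond Python’s standard library. It’s illustrative only; production CSPs rely on schedulers such as Slurm or Kubernetes:

```python
# Toy job scheduler: queue jobs when demand exceeds capacity,
# then release them in priority order (lower number runs first).
import heapq

queue = []  # (priority, job_name)

def submit(priority: int, job: str) -> None:
    heapq.heappush(queue, (priority, job))

def run_next(available_nodes: int) -> None:
    while available_nodes > 0 and queue:
        priority, job = heapq.heappop(queue)
        print(f"Running {job} (priority {priority})")
        available_nodes -= 1

submit(2, "batch-analytics")
submit(1, "customer-inference")
run_next(available_nodes=1)  # only the higher-priority job runs
```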

Best Practice No. 10: Simplify your supply chain

Sure, competition across the industry is a good thing, driving higher innovation and lower prices. But within a single data center, standardizing on just a single supplier could be the winning ticket.

This approach simplifies ordering, installation and support. And if something should go wrong, then you’ll have only the proverbial “one throat to choke.”

Can you still use third-party hardware as appropriate? Sure. And with a single main supplier, that integration should be simpler, too.


Data-center service providers: ready for transformation?

If your organization provides data-center hosting services, brace yourself. Due to changing customer demands, you’re about to need an entirely new infrastructure stack.

So argues Chris Drake, a senior research director at market watcher IDC, in a recently published white paper sponsored by Supermicro and AMD, The Power of Now: Accelerate the Datacenter.

In his white paper, Drake asserts that this new data center infrastructure stack will include new CPU cores, accelerated computing, rack-scale integration, a software-defined architecture, and the use of a micro-services application environment.

Key drivers

That’s a challenging list. So what’s driving the need for this new infrastructure stack? According to Drake, changing customer requirements.

More specifically, a growing need for hosted IT services. For reasons related to cost, security and performance, many IT shops are choosing to retain proprietary workloads on premises and in private-cloud environments.

While some of these IT customers have sufficient data-center capacity to host these workloads on-prem, many don’t. They’ll rely instead on service providers for a range of hosted IT services. To meet this demand, Drake says, service providers will need to modernize.

Another driver: growing customer demand for raw compute power, a direct result of their adoption of new, advanced computing tools. These include analytics, media streaming, and of course the various flavors of artificial intelligence, including machine learning, deep learning and generative AI.

IDC predicts that spending on servers ranging in price from $10K to $250K will rise from a global total of $50.9 billion in 2022 to $97.4 billion in 2027. That would mark a 5-year compound annual growth rate of nearly 14%.
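
Those figures check out: the standard CAGR formula reproduces IDC’s growth rate in a couple of lines:

```python
# CAGR = (end / start) ** (1 / years) - 1
start, end, years = 50.9, 97.4, 5  # $B in 2022, $B in 2027, 5-year span
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # 13.9% -- nearly 14%
```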

Under the hood

What will building this new infrastructure stack entail? Drake points to 5 key elements:

  • Higher-performing CPU cores: These include chiplet-based CPU designs that enable distributed and composable hardware architectures, which in turn allow more efficient use of shared resources and more scalable compute performance.
  • Accelerated computing: Core CPU processing will increasingly be supplemented by hardware accelerators, including those for AI. They’ll be needed to support today’s—and tomorrow’s—increasingly diverse range of high-performance and data-intensive workloads.
  • Rack-scale integration: Pre-tested racks can facilitate faster deployment, integration and expansion. They can also enable a converged-infrastructure approach to building and scaling a data center.
  • Software-defined data center technology: In this approach, virtualization concepts such as abstraction and pooling are extended to a data center’s compute, storage, networking and other resources. The benefits include increased efficiency, better management and more flexibility.
  • A microservices application architecture: This approach divides large applications into smaller, independently functional units. In so doing, it enables a highly modular and agile way for applications to be developed, maintained and upgraded.

Plan for change

Rome wasn’t built in a day. Modernizing a data center will take time, too.

To help service providers implement a successful modernization, Drake of IDC offers this 6-point action plan:

1. Develop a transformation road map: Aim to strike a balance between harnessing new technology opportunities on the one hand and being realistic about your time frames, costs and priorities on the other.

2. Work with a full-stack portfolio vendor: You want a solution that’s tailored for your needs, not just an off-the-rack package. “Full stack” here means a complete offering of servers, hardware accelerators, storage and networking equipment—as well as support services for all of the above.

3. Match accelerators to your workloads: You don’t need a Formula 1 race car to take the kids to school. Same with your accelerators. Sure, you may have workloads that require super-low latency and equally high throughput. But you’re also likely to be supporting workloads that can take advantage of more affordable CPU-GPU combos. Work with your vendors to match their hardware with your workloads.

4. Seek suppliers with the right experience: Work with tech vendors that know what you need. Look for those with proven track records of helping service providers to transform and scale their infrastructures.

5. Select providers with supply-chain ownership: Ideally, your tech vendors will fully own their supply chains for boards, systems and rack designs such as liquid-cooling systems. That includes managing the vertical integration needed to combine these elements. The right supplier could help you save costs and get to market faster.

6. Create a long-term plan: Plan for the short term, but also look ahead into the future. Technology isn’t sitting still, and neither should you. Plan for technology refreshes. Ask your vendors for their road maps, and review them. Decide what you can support in-house versus what you’ll probably need to hand off to partners.


AMD CTO: ‘AI across our entire portfolio’

The current buildout of the artificial intelligence infrastructure is an event as big as the original launch of the internet.

AI, now mainly an expense, will soon be monetized. Thousands of AI applications are coming.

And AMD plans to embed AI across its entire product portfolio. That will include components and software on everything from PCs and edge sensors to the largest servers used by the big cloud hyperscalers.

These were among the comments of Mark Papermaster, AMD’s executive VP and CTO, during a recent fireside chat hosted by stock research firm Arete Research. During the hour-long virtual presentation, Papermaster answered questions from moderator Brett Simpson of Arete and attending stock analysts. Here are the highlights.

The overall AI market

AMD has said it believes the total addressable market (TAM) for AI through 2027 is $400 billion. “That surprised a lot of people,” Papermaster said, but AMD believes a huge AI infrastructure is needed.

That will begin with the major hyperscalers. AWS, Google Cloud and Microsoft Azure are among those looking at massive AI buildouts.

But there’s more. AI is not only in the domain of these massive clusters. Individual businesses will be looking for AI applications that can drive productivity and enhance the customer experience.

The models for these kinds of AI systems are typically smaller. They can be run on smaller clusters, too, whether on-premises or in the cloud.

AI will also make its way into endpoint devices. They’ll include PCs, embedded devices, and edge sensors.

Also, AI is more than just compute. AI systems also require robust memory, storage and networking.

“We’re thrilled to bring AI across our entire product portfolio,” Papermaster said.

Looking at the overall AI market, AMD expects to see a compound annual growth rate of 70%. “I know that seems huge,” Papermaster said. “But we are investing to capture that growth.”

AI pricing

Pricing considerations need to take into account more than just the price of a GPU, Papermaster argued. You really have to look at the total cost of ownership (TCO).

The market is operating with an underlying premise: Demand for AI compute is insatiable. That will drive more and more compute into a smaller area, delivering better power efficiency per FLOP, the most common measure of AI compute performance.

Right now, the AI compute model is dominated by a single player. But AMD is now bringing the competition. That includes the recently announced MI300 accelerator. But as Papermaster pointed out, there’s more, too. “We have the right technology for the right purpose,” he said.

That includes using not only GPUs, but also (where appropriate) CPUs. These workloads can include AI inference, edge computing, and PCs. In this way, user organizations can better manage their overall CapEx spend.

As moderator Simpson reminded him, Papermaster is fond of saying that customers buy road maps. So naturally he was asked about AMD’s plans for the AI future. Papermaster mainly deferred, saying more details will be forthcoming. But he also reminded attendees that AMD’s investments in AI go back several years and include its ROCm software enablement stack.

Training vs. inference

Training and inference are currently the two biggest AI workloads. Papermaster believes we’ll see the AI market bifurcate along their two lines.

Training depends on raw computational power in a vast cluster. For example, the popular ChatGPT generative AI tool uses a model with over a trillion parameters. That’s where AMD’s MI300 comes into play, Papermaster said, “because it scales up.”

This trend will continue, because for large language models (LLMs), the issue is latency. How quickly can you get a response? That requires not only fast processors, but also equally fast memory.

More specific inferencing applications, typically run after training is completed, are a different story, Papermaster said, adding: “Essentially, it’s ‘I’ve trained my model; now I want to organize it.’” These workloads are more concise and less demanding of both power and compute, meaning they can run on more affordable GPU-CPU combinations.

Power needs for AI

User organizations face a challenge: While running an AI system requires a lot of power, many data centers are what Papermaster called “power-gated.” In other words, they’re unable to drive up compute capacity to AI levels using current technology.

AMD is on the case. In 2020, the company committed itself to driving a 30x improvement in power efficiency for its products by 2025. Papermaster said the company is still on track to deliver that.

To do so, he added, AMD is thinking in terms of “holistic design.” That means not just hardware, but all the way through an application to include the entire stack.

One promising area involves AI workloads that can use AI approximation. These are applications that, unlike HPC workloads, do not need incredible levels of accuracy. As a result, performance is better for lower-precision arithmetic than it is for high-precision. “Not all AI models are created equally,” Papermaster said. “You’ll need smaller models, too.”
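
For a concrete sense of the trade-off, here’s a minimal sketch, assuming PyTorch (whose ROCm builds run on AMD accelerators), showing how dropping from FP32 to BF16 halves a tensor’s memory footprint:

```python
# The same tensor in 32-bit vs. 16-bit floating point.
import torch

x32 = torch.randn(1024, 1024, dtype=torch.float32)
x16 = x32.to(torch.bfloat16)  # lower-precision copy

print(x32.element_size() * x32.nelement())  # 4,194,304 bytes
print(x16.element_size() * x16.nelement())  # 2,097,152 bytes -- half the size
```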

AMD is among those who have been surprised by the speed of AI adoption. In response, AMD has increased its projection of AI sales this year from $2 billion to $3.5 billion, what Papermaster called the fastest ramp AMD has ever seen.


AMD Instinct MI300 Series: Take a deeper dive into this advanced technology

Earlier this month, AMD took the wraps off its highly anticipated AMD Instinct MI300 Series of generative AI accelerators and data-center accelerated processing units (APUs). During the announcement event, AMD president Victor Peng said the new components had been “designed with our most advanced technologies.”

Advanced technologies indeed. With the AMD Instinct MI300 Series, AMD is writing a brand-new chapter in the story of AI-adjacent technology.

Early AI developments relied on the equivalent of a hastily thrown-together stock car constructed of whichever spare parts happened to be available at the time. But those days are over.

Now the future of computing has its very own Formula 1 race car. It’s extraordinarily powerful and fine-tuned to nanometer tolerances.

A new paradigm

At the heart of this new accelerator series is AMD’s CDNA 3 architecture. This third generation employs advanced packaging that tightly couples CPUs and GPUs to bring high-performance processing to AI workloads.

AMD’s new architecture also uses 3D packaging technologies that integrate up to 8 vertically stacked accelerator complex dies (XCDs) and 4 I/O dies (IODs) that contain system infrastructure. The various systems are linked via AMD Infinity Fabric technology and are connected to 8 stacks of high-bandwidth memory (HBM).

High-bandwidth memory can provide far more bandwidth and yet much lower power consumption compared with the GDDR memory found in standard GPUs. Like many of AMD’s notable innovations, its HBM employs a 3D design.

In this case, the memory modules are stacked vertically to shorten the distance the data needs to travel. This also allows for smaller form factors.

AMD has implemented the HBM using a unified memory architecture. This is an increasingly popular design in which a single array of main-memory modules supports both the CPU and GPU simultaneously, speeding tasks and applications.

Unified memory is more efficient than traditional memory architecture. It offers the advantage of faster speeds along with lower power consumption and ambient temperatures. Also, data need not be copied from one set of memory to another.
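
As a quick illustration, and assuming a ROCm build of PyTorch (which exposes AMD GPUs through the torch.cuda namespace), you can inspect an accelerator’s memory capacity and compute-unit count directly:

```python
# Query the first visible accelerator's properties.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Device:        {props.name}")
    print(f"HBM capacity:  {props.total_memory / 1e9:.0f} GB")  # ~192 GB on an MI300X
    print(f"Compute units: {props.multi_processor_count}")
```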

Greater than the sum of its parts

What really makes AMD CDNA 3 unique is its chiplet-based architecture. The design employs a single logical processor that contains a dozen chiplets.

Each chiplet, in turn, is fabricated for either compute or memory. To communicate, all the chiplets are connected via the AMD Infinity Fabric network-on-chip.

The primary 5nm XCDs contain the computational elements of the processor along with the lowest levels of the cache hierarchy. Each XCD includes a shared set of global resources, including the scheduler, hardware queues and 4 asynchronous compute engines (ACE).

The 6nm IODs are dedicated to the memory hierarchy. These chiplets carry a newly redesigned AMD Infinity Cache and an HBM3 interface to the on-package memory. The AMD Infinity Cache boosts generational performance and efficiency by increasing cache bandwidth and reducing the number of off-chip memory accesses.

Scaling ever upward

System architects are constantly in the process of designing and building the world’s largest exascale-class supercomputers and AI systems. As such, they are forever reaching for more powerful processors capable of astonishing feats.

The AMD CDNA 3 architecture is an obvious step in the right direction. The new platform takes communication and scaling to the next level.

In particular, the advent of AMD’s 4th Gen Infinity Architecture offers architects a new level of connectivity that could help produce a supercomputer far more powerful than anything we have access to today.

It’s reasonable to expect that AMD will continue to iterate its new line of accelerators as time passes. AI research is moving at a breakneck pace, and enterprises are hungry for more processing power to fuel their R&D.

What will researchers think of next? We won’t have to wait long to find out.


Supermicro debuts 3 GPU servers with AMD Instinct MI300 Series APUs

Supermicro didn’t waste any time.

The same day that AMD introduced its new AMD Instinct MI300 series accelerators, Supermicro debuted three GPU rackmount servers that use the new AMD accelerated processing units (APUs). One of the three new systems also offers energy-efficient liquid cooling.

Here’s a quick look, plus links for more technical details:

Supermicro 8-GPU server with AMD Instinct MI300X: AS -8125GS-TNMR2

This big 8U rackmount system is powered by a pair of AMD EPYC 9004 Series CPUs and 8 AMD Instinct MI300X accelerator GPUs. It’s designed for training and inference on massive AI models with a total of 1.5TB of HBM3 memory per server node.

The system also supports 8 high-speed 400G networking cards, which provide direct connectivity for each GPU; 128 PCIe 5.0 lanes; and up to 16 hot-swap NVMe drives.

It’s an air-cooled system with 5 fans up front and 5 more in the rear.

Quad-APU systems with AMD Instinct MI300A accelerators: AS -2145GH-TNMR and AS -4145GH-TNMR

These two rackmount systems are aimed at converged HPC-AI and scientific computing workloads.

They’re available in the user’s choice of liquid or air cooling. The liquid-cooled version comes in a 2U rack format, while the air-cooled version is packaged as a 4U.

Either way, these servers are powered by four AMD Instinct MI300A accelerators, which combine CPUs and GPUs in an APU. That gives each server a total of 96 AMD ‘Zen 4’ cores, 912 compute units, and 512GB of HBM3 memory. Also, PCIe 5.0 expansion slots allow for high-speed networking, including RDMA to APU memory.

Supermicro says the liquid-cooled 2U system provides a 50%+ cost savings on data-center energy. Another difference: The air-cooled 4U server provides more storage and an extra 8 to 16 PCIe acceleration cards.


Research Roundup: GenAI, 10 IT trends, cybersecurity, CEOs, and privacy

Generative AI is booming. Ten trends will soon rock your customers’ world. While cybersecurity spending is up, CEOs lack cyber confidence. And Americans worry about their privacy.

That’s some of the latest from leading IT market watchers. And here’s your Performance Intensive Computing roundup.

GenAI market to hit $143B by 2027

Generative AI is quickly becoming a big business.

Market watcher IDC expects that spending on GenAI software, related hardware and services will this year reach nearly $16 billion worldwide.

Looking ahead, IDC predicts GenAI spending will reach $143 billion by 2027. That would represent a compound annual growth rate (CAGR) over the years 2023 to 2027 of 73%—more than twice the growth rate in overall AI spending.

“GenAI is more than a fleeting trend or mere hype,” says IDC group VP Ritu Jyoti.

Initially, IDC expects, the largest GenAI investments will go to infrastructure, including hardware, infrastructure as a service (IaaS), and system infrastructure software. Then, once the foundation has been laid, spending is expected to shift to AI services.

Top 10 IT trends

What will be top-of-mind for your customers next year and beyond? Researchers at Gartner recently made 10 predictions:

1. AI productivity will be a primary economic indicator of national power.

2. Generative AI tools will reduce modernization costs by 70%.

3. Enterprises will collectively spend over $30 billion fighting “malinformation.”

4. Nearly half of all CISOs will expand their responsibilities beyond cybersecurity, driven by regulatory pressure and expanding attack surfaces.

5. Unionization among knowledge workers will increase by 1,000%, motivated by fears of job loss due to the adoption of GenAI.

6. About one in three workers will leverage “digital charisma” to advance their careers.

7. One in four large corporations will actively recruit neurodivergent talent—including people with conditions such as autism and ADHD—to improve business performance.

8. Nearly a third of large companies will create dedicated business units or sales channels for machine customers.

9. Due to labor shortages, robots will soon outnumber human workers in three industries: manufacturing, retail and logistics.

10. Monthly electricity rationing will affect fully half the G20 nations. One result: Energy efficiency will become a serious competitive advantage.

Cybersecurity spending in Q2 rose nearly 12%

Heightened threat levels are leading to heightened cybersecurity spending.

In the second quarter of this year, global spending on cybersecurity products and services rose 11.6% year-on-year, reaching a total of $19 billion worldwide, according to Canalys.

A mere 12 vendors received nearly half that spending, Canalys says. They include Palo Alto Networks, Fortinet, Cisco and Microsoft.

One factor driving the spending is fear, the result of a 50% increase in the number of publicly reported ransomware attacks. Also, the number of breached data records more than doubled in the first 8 months of this year, Canalys says.

All this increased spending should be good for channel sellers. Canalys finds that nearly 92% of all cybersecurity spending worldwide goes through the IT channel.

CEOs lack cyber confidence

Here’s another reason why cybersecurity spending should be rising: Roughly three-quarters of CEOs (74%) say they’re concerned about their organizations’ ability to avert or minimize damage from a cyberattack.

That’s according to a new survey, conducted by Accenture, of 1,000 CEOs from large organizations worldwide.

Two findings from the Accenture survey really stand out:

  • Three in five CEOs (60%) say their organizations do not incorporate cybersecurity into their business strategies, products or services.
  • Nearly half (44%) of the CEOs believe cybersecurity can be handled with episodic interventions rather than with ongoing, continuous attention.

Despite those weaknesses, nearly all the surveyed CEOs (96%) say they believe cybersecurity is critical to their organizations’ growth and stability. Mind the gap!

How do Americans view data privacy?

Fully eight in 10 Americans (81%) are concerned about how companies use their personal data. And seven in 10 (71%) are concerned about how their personal data is used by the government.

So finds a new Pew Research Center survey of 5,100 U.S. adults. The study, conducted in May and published this month, sought to discover how Americans think about privacy and personal data.

Pew also found that Americans don’t understand how their personal data is used. In the survey, nearly eight in 10 respondents (77%) said they have little to no understanding of how the government uses their personal data. And two-thirds (67%) said the same thing about businesses, up from 59% a year ago.

Another key finding: Americans don’t trust social media CEOs. Over three-quarters of Pew’s respondents (77%) say they have very little or no trust that leaders of social-media companies will publicly admit mistakes and take responsibility.

And about the same number (76%) believe social-media companies would sell their personal data without their consent.


Tech Explainer: How does design simulation work? Part 2

The market for simulation software is hot, growing at a compound annual growth rate (CAGR) of 13.2%, according to Markets and Markets. The research firm predicts that the global market for simulation software, worth an estimated $18.1 billion this year, will rise to $33.5 billion by 2027.

No surprise, then, that tech titans AMD and Supermicro would design an advanced hardware platform to meet the demands of this burgeoning software market.

AMD and Supermicro have teamed up with Ansys Inc., a U.S.-based designer of engineering simulation software. One result of this three-way collaboration is the Supermicro SuperBlade.

Shanthi Adloori, senior director of product management at Supermicro, calls the SuperBlade “one of the fastest simulation-in-a-box solutions.”

Adloori adds: “With a high core count, large memory capacity and faster memory bandwidth, you can reduce the time it takes to complete a simulation.”

One very super blade

Adloori isn’t overstating the case.

Supermicro’s SuperBlade can house up to 20 hot-swappable nodes in its 8U chassis. Each of those blades can be equipped with AMD EPYC CPUs and AMD Instinct GPUs. In fact, SuperBlade is the only platform of its kind designed to support both GPU and non-GPU nodes in the same enclosure.

Supermicro SuperBlade’s other tech specs may be less glamorous, but they’re no less impressive. Depending on the model, each blade can address up to 8TB or 16TB of DDR5-4800 memory.

Each node can also house 2 NVMe/SAS/SATA drives, while the chassis accepts as many as eight 3000W Titanium Level power supplies.

Because networking is an essential element of enterprise-grade design simulation, SuperBlade includes redundant 25Gb/10Gb/1Gb Ethernet switches and up to 200Gbps/100Gbps InfiniBand networking for HPC applications.

For smaller operations, the SuperBlade is also available in more compact 6U and 4U configurations. These versions pack fewer nodes, which ultimately means they bring less power to bear. But, hey, not every design team makes passenger jets for a living.

It’s all about the silicon

If Supermicro’s SuperBlade is the tractor-trailer of design simulation technology, then AMD CPUs and GPUs are the engines under the hood.

The differing designs of these chips lend themselves to specific core competencies. CPUs can focus tremendous power on a few tasks at a time. Sure, they can multitask. But there’s a limit to how many simultaneous operations they can address.

AMD bills its EPYC 7003 Series CPUs as the world’s highest-performing server processors for technical computing. The addition of AMD 3D V-Cache technology delivers an expanded L3 cache to help accelerate simulations.

GPUs, on the other hand, are built for simulations in which many operations must be performed simultaneously. The AMD Instinct MI250X Accelerator contains 220 compute units with 14,080 stream processors.

Instead of throwing a ton of processing power at a small number of operations, the AMD Instinct can address thousands of less resource-intensive operations simultaneously. It’s that capability that makes GPUs ideal for HPC and AI-enabled operations, an increasingly essential element of modern design simulation.
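
Here’s a minimal sketch of that difference, assuming PyTorch: a single vectorized operation fans out across thousands of GPU threads instead of looping element by element on a CPU:

```python
# One elementwise update over 10 million values, dispatched in a single
# operation. Falls back to the CPU if no GPU is present.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
cells = torch.randn(10_000_000, device=device)  # e.g., mesh-cell values

updated = cells * 0.98 + 0.01  # runs in parallel across the device
print(updated.shape)
```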

The future of design simulation

The development of advanced hardware like SuperBlade and the AMD CPUs and GPUs that power it will continue to progress as more organizations adopt design simulation as their go-to product development platform.

That progression will continue to manifest in global companies like Boeing and Volkswagen. But it will also find its way into small startups and single users.

Also, as the required hardware becomes more accessible, simulation software should become more efficient.

This confluence of market trends could empower millions of independent designers with the ability to perform complex design, testing and validation functions.

The result could be nothing short of a design revolution.

Part 1 of this two-part Tech Explainer explores the many ways design simulation is used to create new products, from tiny heart valves to massive passenger aircraft. Read Part 1 now.


Why M&E content creators need high-end VDI, rendering & storage

When content creators at media and entertainment (M&E) organizations create videos and films, they’re also competing for attention. And today that requires a lot of technology.

Making a full-length animated film involves no fewer than 14 complex steps, including 3D modeling, texturing, animating, visual effects and rendering. The whole process can take years. And it requires a serious quantity of high-end compute, storage and software.

From an IT perspective, three of the most compute-intensive activities for M&E content creators are VDI, rendering and storage. Let’s take a look at each.

* Virtual desktop infrastructure (VDI): While content creators work on personal workstations, they need the kind of processing power and storage capacity available from a rackmount server. That’s what they get with VDI.

VDI separates the desktop and associated software from the physical client device by hosting the desktop environment and applications on a central server. These assets are then delivered to the desktop workstation over a network.

To power VDI setups, Supermicro offers a 4U GPU server with up to 8 PCIe GPUs. The Supermicro AS -4125GS-TNRT server packs a pair of AMD EPYC 9004 processors, Nvidia RTX 6000 GPUs, and 6TB of DDR5 memory.

* Rendering: The last stage of film production, rendering is where the individual 3D images created on a computer are transformed into the stream of 2D images ready to be shown to audiences. This process, conducted pixel by pixel, is time-consuming and resource-hungry. It requires powerful servers, lots of storage capacity and fast networking.

For rendering, Supermicro offers its 2U Hyper system, the AS -2125HS-TNR. It’s configured with dual AMD EPYC 9004 processors, up to 6TB of memory, and your choice of NVMe, SATA or SAS storage.

* Storage: Content creation involves creating, storing and manipulating huge volumes of data. So the first requirement is simply having a great deal of storage capacity. But it’s also important to be able to retrieve and access that data quickly.

For these kinds of storage challenges, Supermicro offers Petascale storage servers based on AMD EPYC processors. They can pack up to 16 hot-swappable E3.S (7.5mm) NVMe drive bays. And they’ve been designed to store, process and move vast amounts of data.

M&E content creators are always looking to attract more attention. They’re getting help from today’s most advanced technology.
