AMD intros CPUs, cache, AI accelerators for cloud, enterprise data centers


AMD strengthens its commitment to the cloud and enterprise data centers with new "Bergamo" CPUs, "Genoa-X" cache and Instinct accelerators.


This week AMD strengthened its already strong commitment to the cloud and enterprise markets. The company announced several new products and partnerships at its Data Center and AI Technology Premiere event, which was held in San Francisco and simultaneously broadcast online.

“We’re focused on pushing the envelope in high-performance and adaptive computing,” AMD CEO Lisa Su told the audience, “creating solutions to the world’s most important challenges.”

Here’s what’s new:

Bergamo: That's the former codename for the new 4th gen AMD EPYC 97X4 processors. AMD's first processors designed specifically for cloud-native workloads, they pack up to 128 cores per socket using AMD's new Zen 4c design to deliver high performance per watt. Each socket contains 8 chiplets, each with up to 16 Zen 4c cores; that's twice as many cores per chiplet as AMD's earlier "Genoa" processors offer (yet the two lines are socket-compatible). The entire lineup is available now.

Genoa-X: Another codename, this one for the new generation of 4th gen AMD EPYC processors featuring AMD 3D V-Cache technology. Designed specifically for technical computing such as engineering simulation, these processors support more than 1GB of L3 cache on a 96-core CPU. Built around the high-performing Zen 4 core, they deliver strong performance per core.

“A larger cache feeds the CPU faster with complex data sets, and enables a new dimension of processor and workload optimization,” said Dan McNamara, an AMD senior VP and GM of its server business.

In all, there are 4 new Genoa-X SKUs, ranging from 16 to 96 cores, all of them socket-compatible with AMD's Genoa processors.

Genoa: Technically, not new, as this family of data-center CPUs was introduced last November. What is new is AMD's focus for these processors: AI, data-center consolidation and energy efficiency.

AMD Instinct: Though AMD had already introduced its Instinct MI300 Series accelerator family, the company is now revealing more details.

This includes the introduction of the AMD Instinct MI300X, an advanced accelerator for generative AI based on AMD’s CDNA 3 accelerator architecture. It will support up to 192GB of HBM3 memory to provide the compute and memory efficiency needed for large language model (LLM) training and inference for generative AI workloads.

AMD also introduced the AMD Instinct Platform, which brings together eight MI300X accelerators in an industry-standard design to create the ultimate solution for AI inference and training. The MI300X is sampling to key customers starting in Q3.

Finally, AMD also announced that the AMD Instinct MI300A, an APU accelerator for HPC and AI workloads, is now sampling to customers.

Partner news: Mark your calendar for June 20. That's when Supermicro plans to explore key features and use cases for its H13 systems based on AMD EPYC 9004 series processors. These Supermicro systems will feature AMD's new Zen 4c architecture and 3D V-Cache tech.

This week Supermicro announced that its entire line of H13 AMD-based systems is now available with support for the 4th gen AMD EPYC processors with Zen 4c architecture and V-Cache technology.

That includes Supermicro’s new 1U and 2U Hyper-U servers designed for cloud-native workloads. Both are equipped with a single AMD EPYC processor with up to 128 cores.


Why your AI systems can benefit from having both a GPU and CPU


Sports teams win with a range of skills and strengths. A hockey side can’t win if everyone’s playing goalie. The team also needs a center and wings to advance the puck and score goals, as well as defensive players to block the opposing team’s shots.

The same is true for artificial intelligence systems. Like a hockey team with players in different positions, an AI system with both a GPU and CPU is a necessary and winning combo.

This mix of processors can bring you and your customers both the lower cost and greater energy efficiency of a CPU and the parallel processing power of a GPU. With this team approach, your customers should be able to handle any AI training and inference workloads that come their way.

In the beginning

One issue: Neither CPUs nor GPUs were originally designed for AI. In fact, both designs predate AI by many years. Their origins still define how they’re best used, even for AI.

GPUs were initially designed for computer graphics, virtual reality and video. Getting pixels to the screen is a task where high levels of parallelization speed things up. And GPUs are good at parallel processing. This has allowed them to be adapted for HPC and AI workloads, which analyze and learn from large volumes of data. What’s more, GPUs are often used to run HPC and AI workloads simultaneously.

GPUs are also relatively expensive. For example, Nvidia’s new H100 has an estimated retail price of around $25,000 per GPU. Your customers may incur additional costs from cooling—GPUs generate a lot of heat. GPUs also use a lot of power, which can further raise your customer’s operating costs.

CPUs, by contrast, were originally designed to handle general-purpose computing. A modern CPU can run just about any type of calculation, thanks to its encompassing instruction set.

A CPU processes data sequentially, rather than in parallel, and that's good for linear and complex calculations. A comparable CPU is generally less expensive than a GPU, needs less power and runs cooler.

In today's cost-conscious environment, every data center manager is trying to get the most performance per dollar. Here even a high-performing CPU holds a cost advantage over comparable GPUs, and that advantage can be extremely important for your customers.

Team players

Just as a hockey team doesn’t rely on its goalie to score points, smart AI practitioners know they can’t rely on their GPUs to do all types of processing. For some jobs, CPUs are still better.

Thanks to their larger memory capacity, CPUs are ideal for machine learning training and inference, as long as the scale is relatively small. They're also good for training small neural networks, data preparation and feature extraction.
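
To make that concrete, here's a minimal sketch in Python of the kind of small-scale training job a CPU handles comfortably. It uses scikit-learn; the dataset and model choices are illustrative only:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # A small neural network on a small dataset: a classic CPU-friendly job.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
    clf.fit(X_train, y_train)          # trains in seconds, no GPU required
    print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")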

CPUs offer other advantages, too. They’re generally less expensive than GPUs. In today’s cost-conscious environment, where every data center manager is trying to get the most performance per dollar, that’s extremely important. CPUs also run cooler than GPUs, requiring less (and less expensive) cooling.

GPUs excel in two main areas of AI: machine learning and deep learning (ML/DL). Both involve the analysis of gigabytes—or even terabytes—of data for image and video processing. For these jobs, the parallel processing capability of a GPU is a perfect match.

AI developers can also leverage a GPU’s parallel compute engines. They can do this by instructing the processor to partition complex problems into smaller, more manageable sub-problems. Then they can use libraries that are specially tuned to take advantage of high levels of parallelism.
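
As a rough illustration of that pattern, here's a short PyTorch sketch. The library call looks the same either way, but on a GPU the matrix multiply below is automatically split across thousands of cores (the matrix sizes here are arbitrary):

    import torch

    # Use an accelerator if one is visible; otherwise fall back to the CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # A big matrix multiply decomposes into many independent sub-problems,
    # which a GPU-tuned library spreads across its parallel compute engines.
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    c = a @ b

    print(f"Ran on {device}; result shape: {tuple(c.shape)}")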

Theory into practice

That’s the theory. Now let’s look at how some leading AI tech providers are putting the team approach of CPUs and GPUs into practice.

Supermicro offers its Universal GPU Systems, which combine Nvidia GPUs with CPUs from AMD, including the AMD EPYC 9004 Series.

An example is Supermicro's H13 GPU server, with one model being the AS 8215GS-TNHR. It packs an Nvidia HGX H100 multi-GPU board, dual AMD EPYC 9004 series CPUs, and up to 6TB of DDR5 memory.

For truly large-scale AI projects, Supermicro offers SuperBlade systems designed for distributed, midrange AI and ML training. Large AI and ML workloads can require coordination among multiple independent servers, and the Supermicro SuperBlades are designed to do just that. Supermicro also offers rack-scale, plug-and-play AI solutions powered by GPUs and turbocharged with liquid cooling.

The Supermicro SuperBlade is available with a single AMD EPYC 7003/7002 series processor with up to 64 cores. You also get AMD 3D V-Cache, up to 2TB of system memory per node, and a 200Gbps InfiniBand HDR switch. Within a single 8U enclosure, you can install up to 20 blades.

Looking ahead, AMD plans to soon ship its Instinct MI300A, an integrated data-center accelerator that combines three key components: AMD Zen 4 CPUs, AMD CDNA3 GPUs, and high-bandwidth memory (HBM) chiplets. This new system is designed specifically for HPC and AI workloads.

Also, the AMD Instinct MI300A’s high data throughput lets the CPU and GPU work on the same data in memory simultaneously. AMD says this CPU-GPU partnership will help users save power, boost performance and simplify programming.

Truly, a team effort.


A hospital’s diagnosis: Professional AI workloads require professional hardware


A Taiwanese hospital’s initial use of AI to interpret medical images with consumer graphics cards fell short. The prescription? Supermicro workstations powered by AMD components. 


A Taiwanese hospital has learned that professional AI workloads are too much to handle for consumer-level hardware—and that pro-level workloads require pro-level systems.

When Shuang-Ho Hospital first used AI to interpret medical images, it relied on consumer graphics cards installed in desktop PCs. But staff found that for diagnostic imaging, the graphics cards performed poorly. Plus, the memory capacity of the PCs was insufficient. The result: image resolution too low to be useful.

The hospital, affiliated with Taipei Medical University, offers a wide range of services, including reproductive medicine, a sleep center, and treatment for cancer, dementia and strokes. It opened in 2008 and is located in New Taipei City.

In its quest to use AI for healthcare, Shuang-Ho Hospital is far from alone. Last year, global sales of healthcare AI totaled $15.4 billion, estimates Grand View Research. Looking ahead, the market watcher expects healthcare AI sales through 2030 to enjoy a compound annual growth rate (CAGR) of nearly 38%.

A subset of that market, AI for diagnostic imaging, is a big and fast-growing field. The U.S. government has approved nearly 400 AI algorithms for radiology, according to the American Hospital Association. And the need is great. The World Economic Forum estimates that of all the data produced by hospitals each year—roughly 50 petabytes—97% goes unused.

‘Just right’

Shuang-Ho Hospital knew it needed an AI system that was more robust. But initially it wasn’t sure where to turn. A Supermicro demo changed all that. “The workstation presented by Supermicro was just right for our needs,” says Dr. Yen-Ting Chen, an attending physician in the hospital’s medical imaging department.

Supermicro’s solution for the hospital was its AS-5014-TT SuperWorkstation, powered by AMD’s Ryzen Threadripper Pro 3995WX processor and equipped with a pair of AMD Radeon Pro W6800 professional graphics cards. This tower workstation is optimized for applications that include AI and deep learning.

For the hospital, one especially appealing feature is the Supermicro workstation’s use of a multicore processor that can be paired with multiple GPU cards. The AMD Threadripper Pro has 64 cores, and each of the hospital’s Supermicro workstations was configured with two GPUs.

Another attractive feature had nothing to do with tech specs. “The price was very reasonable,” says Dr. Yen-Ting Chen. “It naturally became our best choice.”

Smart tech, healthier brains

Now that Shuang-Ho Hospital has the AMD-powered Supermicro workstations installed, the advantages of a professional system over consumer products have become even clearer. For one, AI training is much better than it was with the consumer cards.

Even more important, the images from brain tomography, which with the consumer cards had to be degraded, can now be used at full resolution. (Tomography is an approach to imaging that combines scans taken from different angles to create cross-sectional “slices.”)

For now, the hospital is using the Supermicro workstations to help interpret scans for cerebral thrombosis, a serious health condition involving a blood clot in a vein of the brain. Learnings from this first AI workload are being shared with other departments.

Long-term, the hospital plans to use AI wherever the technology can help. And this time, with strictly professional hardware.


How Generative AI is rocking the tech business—in a good way


With ChatGPT the newest star of tech, generative AI has emerged as a major market opportunity for traditional hardware and software suppliers. Here’s some of what you can expect from AMD and Supermicro.


The seemingly overnight adoption of generative AI systems such as ChatGPT is transforming the tech industry.

A year ago, AI tech suppliers focused mainly on providing systems for training. For good reason: AI training is technically demanding.

But now the focus has shifted onto large language model (LLM) inferencing and generative AI.

Take ChatGPT, the AI chatbot built on a large language model. In just the first week after its launch, ChatGPT gained over a million users. Since then, it has attracted more than 100 million users who now generate some 10 million queries a day. OpenAI, ChatGPT’s developer, says the system has thus far processed approximately 300 billion words from over a million conversations.

It's not all fun and games, either. In a new Gartner poll of 2,500 executive leaders, nearly half the respondents said all the publicity around ChatGPT has prompted their organizations to increase their AI spending.

In the same survey, nearly 1 in 5 respondents already have generative AI in either pilot or production mode. And 7 in 10 are experimenting with or otherwise exploring the technology.

Top priority

This virtual explosion has gotten the attention of mainstream tech providers such as AMD. During the company’s recent first-quarter earnings call, CEO Lisa Su said, “We’re very excited about our opportunity in AI. This is our No. 1 strategic priority.”

And AMD is doing a lot more than just talking about AI. For one, the company has consolidated all its disparate AI activities into a single group that will be led by Victor Peng. He was previously general manager of AMD’s adaptive and embedded products group, which recently reported record first-quarter revenue of $1.6 billion, a year-on-year increase of 163%.

This new AI group will focus mainly on strengthening AMD’s AI software ecosystem. That will include optimized libraries, models and frameworks spanning all of the company’s compute engines.

Hardware for AI

AMD is also offering a wide range of AI hardware products for everything from mobile devices to powerful servers.

For data center customers, AMD's most exciting hardware product is its Instinct MI300 accelerator. Designed for both HPC supercomputing and AI workloads, the device is unusual in that it contains both a CPU and a GPU. The MI300 is now being sampled with selected large customers, and general shipments are set to begin in this year's second half.

Other AMD hardware components for AI include its "Genoa" EPYC processors for servers, its Alveo accelerators for inference-optimized solutions, and its embedded Versal AI Core series.

Several of AMD's key partners are offering important AI products, too. That includes Supermicro, which now offers Universal GPU systems powered by AMD Instinct MI250 accelerators and optional AMD EPYC CPUs.

These systems include the Supermicro AS 4124GQ-TNMI server. It’s powered by dual AMD EPYC 7003 Series processors and up to four AMD Instinct MI250 accelerators.

Help for AI developers

AMD has also made important moves on the developer front. Also during its Q1 earnings call, AMD announced expanded capabilities for developers to build robust AI solutions leveraging its products.

The moves include new updates to PyTorch 2.0. This open-source framework now offers native support for AMD's ROCm software. There's also the latest TensorFlow-ZenDNN plug-in, which enables neural-network inferencing on AMD EPYC CPUs.

ROCm is an open software platform allowing researchers to tap the power of AMD Instinct accelerators to drive scientific discoveries. The latest version, ROCm 5.0, supports major machine learning (ML) frameworks, including TensorFlow and PyTorch. This helps users accelerate AI workloads.
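
As a quick illustration: on a ROCm build of PyTorch, AMD GPUs surface through the same torch.cuda API used for other accelerators, so checking that an AMD Instinct device is visible takes only a few lines. This is a sketch, assuming a ROCm-enabled PyTorch install:

    import torch

    if torch.cuda.is_available():
        # On ROCm builds, torch.version.hip is set (it's None on CUDA builds).
        print(f"Accelerator: {torch.cuda.get_device_name(0)}")
        print(f"ROCm/HIP version: {torch.version.hip}")
    else:
        print("No supported GPU visible; workloads will run on the CPU.")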

TensorFlow is an end-to-end platform designed to make it easy to build and deploy ML models. And ZenDNN is a deep neural network library that includes basic APIs optimized for AMD CPU architectures.

Just the start

Busy as AMD and Supermicro have been with AI products, you should expect even more. As Gartner VP Francis Karamouzis says, “The generative AI frenzy shows no sign of abating.”

That sentiment gained support from AMD’s Su during the company’s Q1 earnings call.

“It’s a multiyear journey,” Su said in response to an analyst’s question about AI. “This is the beginning for what we think is a significant market opportunity for the next 3 to 5 years.”


Research roundup: AI edition


AI is busting out all over. AI is getting prioritized over all other digital investments. The AI market is forecast to grow by over 20% a year through 2030. AI worries Americans about the potential impact on hiring. And AI needs to be safeguarded against the risk of misuse.

That’s some of the latest AI research from leading market watchers. And here’s your research roundup.

The AI priority

Nearly three-quarters (73%) of companies are prioritizing AI over all other digital investments, finds a new report from consultants Accenture. For these AI projects, the No. 1 focus area is improving operational resilience; it was cited by 90% of respondents.

Respondents to the Accenture survey also say the business benefits of AI are real. Only 9% of companies have achieved maturity across all 6 areas of AI operations, but those that have averaged 1.4x higher operating margins than the rest. (Those 6 areas, by the way, are AI, data, processes, talent, collaboration and stakeholder experiences.)

Compared with less-mature AI operations, these companies also drove 42% faster innovation, 34% better sustainability and 30% higher satisfaction scores.

Accenture’s report is based on its recent survey of 1,700 executives in 12 countries and 15 industries. About 7 in 10 respondents held C-suite-level job titles.

The AI market

It’s no surprise that the AI market is big and growing rapidly. But just how big and how rapidly might surprise you.

How big? The global market for all AI products and services, worth some $428 billion last year, is on track to top $515 billion this year, predicts market watcher Fortune Business Insights.

How fast? Looking ahead to 2030, Fortune Insights expects the global AI market that year to hit $2.03 trillion. If so, that would mark a compound annual growth rate (CAGR) of nearly 22%.
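
You can sanity-check that math yourself with the standard CAGR formula. Growing from roughly $428 billion in 2022 to $2.03 trillion in 2030 implies:

    # CAGR = (end / start) ** (1 / years) - 1
    start, end, years = 428e9, 2.03e12, 8   # 2022 -> 2030
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")       # about 21.5%, i.e. "nearly 22%"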

What’s driving this big, rapid growth? Several factors, says Fortune, including the surge in the number of applications, increased partnering and collaboration, a rise in small-scale providers, and demand for hyper-personalized services.

The AI impact

What, me worry? About six in 10 Americans (62%) believe AI will have a major impact on workers in general. But only 28% believe AI will have a major effect on them personally.

So finds a recent poll by Pew Research of more than 11,000 U.S. adults.

Digging a bit deeper, Pew found that nearly a third of respondents (32%) believe AI will hurt workers more than help; the same percentage believe AI will equally help and hurt; about 1 in 10 respondents (13%) believe AI will help more than hurt; and roughly 1 in 5 of those answering (22%) aren’t sure.

Respondents also widely oppose the use of AI to augment regular management duties. Nearly three-quarters of Pew’s respondents (71%) oppose the use of AI for making a final hiring decision. Six in 10 (61%) oppose the use of AI for tracking workers’ movements while they work. And nearly as many (56%) oppose the use of AI for monitoring workers at their desks.

Facial-recognition technology fared poorly in the survey, too. Fully 7 in 10 respondents were opposed to using the technology to analyze employees' facial expressions. And over half (52%) were opposed to using facial recognition to track how often workers take breaks. However, a plurality (45%) favored the use of facial recognition to track worker attendance; about a third (35%) were opposed and one in five (20%) were unsure.

The AI risk

Probably the hottest form of AI right now is generative AI, as exemplified by the ChatGPT chatbot. But given the technology’s risks around security, privacy, bias and misinformation, some experts have called for a pause or even a halt on its use.

Because that’s unlikely to happen, one industry watcher is calling for new safeguards. “Organizations need to act now to formulate an enterprisewide strategy for AI trust, risk and security management,” says Avivah Litan, a VP and analyst at Gartner.

What should you do? Two main things, Litan says.

First, monitor out-of-the-box usage of ChatGPT. Use your existing security controls and dashboards to catch policy violations. Also, use your firewalls to block unauthorized use, your event-management systems to monitor logs for violations, and your secure web gateways to monitor disallowed API calls.
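
To picture what that monitoring might look like, here's a hypothetical Python sketch that scans a web-gateway log for calls to the OpenAI API by users who aren't on an approved list. The log format, field positions and file path are assumptions for illustration, not any product's actual API:

    import re

    OPENAI_HOST = re.compile(r"api\.openai\.com")

    def flag_unsanctioned_calls(log_path: str, approved_users: set[str]) -> None:
        """Print log lines where a non-approved user called the OpenAI API."""
        with open(log_path) as log:
            for line in log:
                if OPENAI_HOST.search(line):
                    user = line.split()[0]  # assumes the user ID is the first field
                    if user not in approved_users:
                        print(f"Possible policy violation: {line.strip()}")

    # Example (hypothetical log file and user list):
    # flag_unsanctioned_calls("/var/log/gateway/access.log", {"alice", "bob"})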

Second, for prompt engineering usage—which uses tools to create, tune and evaluate prompt inputs and outputs—take steps to protect the sensitive data used to engineer prompts. A good start, Litan says, would be to store all engineered prompts as immutable assets.


How to help your customers invest in AI infrastructure


The right AI infrastructure can help your customers turn data into actionable information. But building and scaling that infrastructure can be challenging. Find out why—and how you can make it easier. 


Get smarter about helping your customers create an infrastructure for AI systems that turn their data into actionable information.

A new Supermicro white paper, Investing in AI Infrastructure, shows you how.

As the paper points out, creating an AI infrastructure is far from easy.

For one, there’s the risk of underinvesting. Market watcher IDC estimates that AI will soon represent 10% to 15% of the typical organization’s total IT infrastructure. Organizations that fall short here could also fall short on delivering critical information to the business.

Sure, your customers could use cloud-based AI to test and ramp up. But cloud costs can rise fast. As The Wall Street Journal recently reported, some CIOs have even established internal teams to oversee and control their cloud spending. That makes an on-prem AI data center a viable option.

“Every time you run a job on the cloud, you’re paying for it,” says Ashish Nadkarni, general manager of infrastructure systems, platforms and technologies at IDC. “Whereas on-premises, once you buy the infrastructure components, you can run applications multiple times.”

Some of those cloud costs come from data-transfer fees. First, data needs to be moved into a cloud-based AI system; this is known as ingress. And once the AI's work is done, you'll want to transfer the new data somewhere else for storage or additional processing, a process known as egress.

Cloud providers typically charge 5 to 20 cents per gigabyte of egress. For casual users, that may be no big deal. But for an enterprise using massive amounts of AI data, it can add up quickly.
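
The arithmetic is easy to run for your own customers. Here's a quick Python sketch using assumed figures: 50TB of monthly egress at a mid-range rate of 9 cents per gigabyte:

    # Back-of-the-envelope egress estimate; the inputs are assumptions.
    egress_tb_per_month = 50      # data moved out of the cloud each month
    rate_per_gb = 0.09            # within the 5- to 20-cent range cited above

    monthly_cost = egress_tb_per_month * 1_000 * rate_per_gb
    print(f"Estimated egress: ${monthly_cost:,.0f}/month")   # $4,500/month
    print(f"Over a year:      ${monthly_cost * 12:,.0f}")    # $54,000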

4 questions to get started

But before your customer can build an on-prem infrastructure, they’ll need to first determine their AI needs. You can help by gathering all stakeholders and asking 4 big questions:

  • What are the business challenges we’re trying to solve?
  • Which AI capabilities and capacities can deliver the solutions we’ll need?
  • What type of AI training will we need to deliver the right insights from our data?
  • What software will we need?

Keep your customer's context in mind, too. That might include their industry; after all, a retailer has different needs than a manufacturer. It could also include their current technology: a company with extensive edge computing has different data needs than one without edge devices.

“It’s a matter of finding the right configuration that delivers optimal performance for the workloads,” says Michael McNerney, VP of marketing and network security at Supermicro.

Help often needed

One example of an application-optimized system for AI training is the Supermicro AS-8125GS-TNHR, which is powered by dual AMD EPYC 9004 Series processors. Other options include the Supermicro Universal GPU systems, which support AMD's Instinct MI250 accelerators.

The modularized architecture of these systems helps standardize AI infrastructure design for scalability and power efficiency, even across the complex workload and workflow requirements enterprises face, including AI, data analytics, visualization, simulation and digital twins.

Accelerators work with traditional CPUs to enable greater computing power, yet without slowing the system. They can also shave milliseconds off AI computations. While that may not sound like much, over time those milliseconds “add up to seconds, minutes, hours and days,” says Matt Kimball, a senior analyst at Moor Insights & Strategy.

Roll with partner power

To scale AI across an enterprise, you and your customers will likely need partners. Scaling workloads for critical tasks isn’t easy.

For one, there’s the challenge of getting the right memory, storage and networking capabilities to meet the new high-performance demands. For another, there’s the challenge of finding enough physical space, then providing the necessary electric power and cooling.

Tech suppliers including Supermicro are standing by to offer you agile, customizable and scalable AI architectures.

Learn more from the new Supermicro white paper: Investing in AI Infrastructure.

 


Tech Explainer: How does generative AI generate?


Generative AI systems such as ChatGPT are grabbing the headlines. Find out how this super-smart technology actually works. 


Generative AI refers to a type of artificial intelligence that can create or generate new content, such as images, music, and text, based on patterns learned from large amounts of data. Generative AI models are designed to learn the underlying distribution of a dataset and then use this knowledge to generate new samples that are similar to those in the original dataset.

This emerging tech is well on its way to becoming a constant presence in everyday life. In fact, the preceding paragraph was generated by ChatGPT. Did you notice?

The growth of newly minted household names like ChatGPT may be novel, headline-grabbing news today. But soon they should be so commonplace, they’ll hardly garner a sidebar in Wired magazine.

So, if the AI bots are here to stay, what makes them tick?

Generating intelligence

Generative AI—the AI stands for artificial intelligence, but you knew that already—lets a user generate content quickly by providing various types of inputs. These inputs can include text, sounds, images, animations and 3D models. Those are also the possible forms of outputs.

Data scientists have been working on generative AI since the mid-1960s, when Joseph Weizenbaum created the ELIZA chatbot. A bot is a software application that runs automated tasks, usually in a way that simulates human activity.

ELIZA, considered the world's first generative AI, was programmed to respond to human statements almost like a therapist. However, the program did not actually understand what was being said.

Since then, we've come a long way. Today's generative AI feeds on large language models (LLMs) that bear only a faint resemblance to the relative simplicity of early chatbots. These LLMs contain billions, even trillions, of parameters, which together provide the limitless permutations that enable AI models to learn and grow.

AI image generators like the popular DALL-E and Fotor can produce images based on small amounts of text. Type "red tuba on a rowboat on Lake Michigan," and voila! An image appears in seconds.

Beneath the surface

The human interface of an AI bot such as ChatGPT may be simple, but the technical underpinnings are complex. The process of parsing, learning from and responding to our input is so resource-intensive, it requires powerful computers, often churning incredible amounts of data 24x7.

These computers use graphics processing units (GPUs) to power neural networks tasked with identifying patterns and structures within existing data, then using what they've learned to generate original content.

GPUs are particularly good at this task because they can contain thousands of cores. Each individual core can complete only one task at a time. But the core can work simultaneously with all the other cores in the GPU to collectively process huge data sets.

How generative AI generates...stuff

Today's data scientists rely on multiple generative AI models. These models can be deployed discretely or combined to create new models that are greater and more powerful than the sum of their parts.

Here are the three most common AI models in use today:

  • Diffusion models use a two-step process: forward diffusion and reverse diffusion. Forward diffusion adds noise to training data; reverse diffusion removes that noise to reconstruct data samples. This learning process allows the AI to generate new data that, while similar to the original data, also includes unique variations.
    • For instance, to create a realistic image, a diffusion model can take in a random set of pixels and gradually refine them. It’s similar to the way a photograph shot on film develops in the darkroom, becoming clearer and more defined over time. (A minimal sketch of forward diffusion follows this list.)
  • Variational autoencoders (VAEs) use two neural networks, the encoder and the decoder. The encoder creates new versions of the input data, keeping only the information necessary to perform the decoding process. Combining the two processes teaches the AI how to create simple, efficient data and generate novel output.
    • If you want to create, say, novel images of human faces, you could show the AI an original set of faces; then the VAE would learn their underlying patterns and structures. The VAE would then use that information to create new faces that look like they belong with the originals.
  • Generative adversarial networks (GANs) were the most commonly used model until diffusion models came along. A GAN plays two neural networks against each other. The first network, called the generator, creates data and tries to trick the second network, called the discriminator, into believing that data came from the real world. As this feedback loop continues, both networks learn from their experiences and get better at their jobs.
    • Over time, the generator can become so good at fooling the discriminator that it is finally able to create novel texts, audio, images, etc., that can also trick humans into believing they were created by another human.
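
To make the diffusion model's forward step concrete, here's a minimal sketch in Python/NumPy. It gradually mixes Gaussian noise into a sample, which is exactly the process the model later learns to reverse (the noise schedule and sample values are illustrative):

    import numpy as np

    def forward_diffusion(x0, betas, rng):
        """Progressively mix Gaussian noise into a sample x0."""
        x = np.asarray(x0, dtype=float)
        for beta in betas:
            noise = rng.standard_normal(x.shape)
            # Each step keeps sqrt(1 - beta) of the signal, adds sqrt(beta) noise.
            x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise
        return x

    rng = np.random.default_rng(0)
    sample = np.ones(8)                       # stand-in for image pixels
    betas = np.linspace(0.01, 0.2, 10)        # illustrative noise schedule
    print(forward_diffusion(sample, betas, rng))   # mostly noise by the end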

Words, words, words

It’s also important to understand how generative AI forms word relationships. In the case of a large language model such as ChatGPT, the AI includes a transformer. This is a mechanism that provides a larger context for each individual element of input and output, such as words, graphics and formulas.

The transformer does this by using an encoder to determine the semantics and position of, say, a word in a sentence. It then employs a decoder to derive the context of each word and generate the output.

This method allows generative AI to connect words, concepts and other types of input, even if the connections must be made between elements that are separated by large groups of unrelated data. In this way, the AI interprets and produces the familiar structure of human speech.
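
Here's a simplified NumPy sketch of the attention step at the heart of a transformer. Each word's vector is scored against every other word's vector, and those scores decide how much context each word contributes. (In a real model, the projection matrices are learned; here they're random for illustration.)

    import numpy as np

    def softmax(z, axis=-1):
        z = z - z.max(axis=axis, keepdims=True)   # for numerical stability
        e = np.exp(z)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(X, Wq, Wk, Wv):
        """X: (words, dims). Returns each word re-expressed via its context."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        # How strongly each word attends to every other word.
        scores = Q @ K.T / np.sqrt(K.shape[-1])
        return softmax(scores) @ V

    rng = np.random.default_rng(0)
    d = 16
    X = rng.standard_normal((5, d))               # 5 word embeddings (toy values)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 16)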

The future of generative AI

When discussing the future of these AI models and how they’ll impact our society, two words continually get mentioned: learning and disruption.

It’s important to remember that these AI systems spend every second of every day learning from their experiences, growing more intelligent and powerful. That’s where the term machine learning (ML) comes into play.

This type of learning has the potential to upend entire industries, catalyze wild economic fluctuations, and take on many jobs now done by humans.

On the bright side, AI may also become smart enough to help us cure cancer and reverse climate change. And if AI has to take our jobs, perhaps it can also figure out a way to provide income for all.

 


Tech Explainer: What is AI Training?


Although AI systems are smart, they still need to be trained. The process isn’t easy. But it’s pretty straightforward with just 3 main steps.


Artificial intelligence (AI) training is the process of teaching an AI system to perceive, interpret and learn from data. That way, the AI will later be capable of inferencing—making decisions based on information it’s provided.

This type of training requires 3 important components: a well-designed AI model; large amounts of high-quality and accurately annotated data; and a powerful computing platform.

Properly trained, an AI’s potential is nearly limitless. For example, AI models can help anticipate our wants and needs, autonomously navigate big cities, and produce scientific breakthroughs.

It’s already happening. You experience the power of well-trained AI when you use Netflix’s recommendation engine to help decide which TV show or movie you want to watch next.

Or you can ride with AI in downtown Phoenix, Ariz. It’s home to the robotaxi service operated by Waymo, the autonomous-vehicle developer owned by Google’s parent company, Alphabet.

And let’s not forget ChatGPT, the current belle of the AI ball. This year has seen its fair share of fascination and fawning over this new generative AI, which can hold a remarkably human conversation and regurgitate every shred of information the internet offers—regardless of its accuracy.

AI can also be used for nefarious purposes, such as creating weapons, methods of cybercrime and tools that some nation states use to surveil and control their citizens. As is true for most technologies, it’s the humans who wield AI who get to decide whether it’s used for good or evil.

3 steps to train AI

AI training is technically demanding. But years of research aided by the latest technology are helping even novice developers harness the power of original AI models to create new software like indie video games.

The process of training enterprise-level AI, on the other hand, is incredibly difficult. Data scientists may spend years creating a single new AI model and training it to perform complex tasks such as autonomous navigation, speech recognition and language translation.

Assuming you have the programming background, technology and financing to train your desired type of AI, the 3-step process is straightforward:

Step 1: Training. The AI model is fed massive amounts of data, then asked to make decisions based on the information. Data scientists analyze these decisions and make adjustments based on the AI output’s accuracy.

 

Step 2: Validation. Trainers validate their assumptions based on how the AI performs when given a new data set. The questions they ask include: Does the AI perform as expected? Does the AI need to account for additional variables? Does the AI suffer from overfitting, a problem that occurs when a machine learning model memorizes data rather than learning from it?

 

Step 3: Testing. The AI is given a novel dataset without the tags and targets initially used to help it learn. If the AI can make accurate decisions, it passes the test. If not, it’s back to step 1.
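
In code, the three steps map onto three separate slices of data. Here's a minimal sketch using scikit-learn, with synthetic data and an illustrative model:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Carve one dataset into training, validation and test splits.
    X, y = make_classification(n_samples=2000, random_state=0)
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

    model = RandomForestClassifier(random_state=0)
    model.fit(X_train, y_train)                                        # Step 1: training
    print(f"Validation accuracy: {model.score(X_val, y_val):.3f}")     # Step 2: validation
    print(f"Test accuracy:       {model.score(X_test, y_test):.3f}")   # Step 3: testing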

Future of AI Training

New AI training theories are coming online quickly. As the market heats up and AI continues to find its way out of the laboratory and onto our computing devices, Big Tech is working feverishly to make the most of the latest gold rush.

One new AI training technique coming to prominence is known as Reinforcement Learning (RL). Rather than teaching an AI model using a static dataset, RL trains the AI as though it were a puppy, rewarding the system for a job well done.

Instead of offering doggie treats, however, RL gives the AI a line of code known as a “reward function.” This is a dynamic and powerful training method that some AI experts believe will lead to scientific breakthroughs.
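
Here's a toy sketch of that idea in Python: a tabular Q-learning agent on a five-state corridor, where the reward function pays out only for reaching the right-hand end. Every detail here (states, rewards, hyperparameters) is illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))   # the agent's learned value table
    alpha, gamma, epsilon = 0.1, 0.9, 0.2

    def reward_function(state, action):
        """The 'doggie treat': +1 for stepping right at the final state."""
        return 1.0 if state == n_states - 1 and action == 1 else 0.0

    for episode in range(500):
        state = 0
        for _ in range(20):
            # Mostly exploit the best-known action, sometimes explore.
            if rng.random() < epsilon:
                action = int(rng.integers(n_actions))
            else:
                action = int(Q[state].argmax())
            reward = reward_function(state, action)
            next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
            # Nudge the estimate toward reward + discounted future value.
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state

    print(Q.round(2))   # higher values for "right" reflect the learned policy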

Advances in AI training, high-performance computing and data science will continue to make our sci-fi dreams a reality. For example, one AI can now teach other AI models. One day, this could make AI training just another autonomous process.

Will the next era of AI bring about the altruism of Star Trek or the evil of The Matrix? One thing’s likely: We won’t have to wait long to find out.

 


What is the AMD Instinct MI300A APU?


Accelerate HPC and AI workloads with the combined power of CPU and GPU compute. 


The AMD Instinct MI300A APU, set to ship in this year’s second half, combines the compute power of a CPU with the capabilities of a GPU. Your data-center customers should be interested if they run high-performance computing (HPC) or AI workloads.

More specifically, the AMD Instinct MI300A is an integrated data-center accelerator that combines AMD Zen 4 cores, AMD CDNA3 GPUs and high-bandwidth memory (HBM) chiplets. In all, it has more than 146 billion transistors.

This AMD component uses 3D die stacking to enable extremely high bandwidth among its parts: nine 5nm chiplets are 3D-stacked on top of four 6nm chiplets, with significant HBM surrounding them.

And it's coming soon. The AMD Instinct MI300A is currently in AMD's labs and will soon be sampled with customers. AMD says it's scheduled to ship in the second half of this year.

‘Most complex chip’

The AMD Instinct MI300A was publicly displayed for the first time earlier this year, when AMD CEO Lisa Su held up a sample of the component during her CES 2023 keynote. “This is actually the most complex chip we’ve ever built,” Su told the audience.

A few tech blogs have gotten their hands on early samples. One of them, Tom’s Hardware, was impressed by the “incredible data throughput” among the Instinct MI300A’s CPU, GPU and memory dies.

The Tom's Hardware reviewer added that this throughput will let the CPU and GPU work on the same data in memory simultaneously, saving power, boosting performance and simplifying programming.

Another blogger, Karl Freund, a former AMD engineer who now works as a market researcher, wrote in a recent Forbes blog post that the Instinct MI300 is a “monster device” (in a good way). He also congratulated AMD for “leading the entire industry in embracing chiplet-based architectures.”

Previous generation

The new AMD accelerator builds on a previous generation, the AMD Instinct MI200 Series. It’s now used in a variety of systems, including Supermicro’s A+ Server 4124GQ-TNMI. This completely assembled system supports the AMD Instinct MI250 OAM (OCP Acceleration Module) accelerator and AMD Infinity Fabric technology.

The AMD Instinct MI200 accelerators are designed with the company’s 2nd gen AMD CDNA Architecture, which encompasses the AMD Infinity Architecture and Infinity Fabric. Together, they offer an advanced platform for tightly connected GPU systems, empowering workloads to share data fast and efficiently.

The MI200 series offers P2P connectivity with up to 8 intelligent 3rd Gen AMD Infinity Fabric Links delivering up to 800 GB/sec. of peak total theoretical I/O bandwidth. That's 2.4x the GPU P2P theoretical bandwidth of the previous generation.

Supercomputing power

The same kind of performance now available to commercial users of the AMD-Supermicro system is also being applied to scientific supercomputers.

The AMD Instinct MI250X accelerator is now used in the Frontier supercomputer built by the U.S. Dept. of Energy. That system's peak performance is rated at 1.6 exaflops, or over a billion billion floating-point operations per second.

The MI250X provides Frontier with flexible, high-performance compute engines, high-bandwidth memory, and scalable fabric and communications technologies.

Looking ahead, the AMD Instinct MI300A APU will be used in Frontier’s successor, known as El Capitan. Scheduled for installation late this year, this supercomputer is expected to deliver at least 2 exaflops of peak performance.

 


Learn, Earn and Win with AMD Arena


Channel partners can learn about AMD products and technologies at the AMD Arena site. It’s your site for AMD partner training courses, redeemable points and much more.


Interested in learning more about AMD products while also earning points you can redeem for valuable merch? Then check out the AMD Arena site.

There, you can:

  • Stay current on the latest AMD products with training courses, sales tools, webinars and quizzes;
  • Earn points, unlock levels and secure your place on the leaderboard;
  • Redeem those points for valuable products, experiences and merchandise in the AMD Rewards store.

Registering for AMD Arena is quick, easy and free. Once you’re in, you’ll have an Arena Dashboard as your control center. It’s where you can control your profile, begin a mission, track your progress, and view your collection of badges.

Missions are made of learning objectives that take you through training courses, sales tools, webinars and quizzes. Complete a mission, and you can earn points, badges and chips; unlock levels; and climb the leaderboard.

The more missions you complete, the more rewards you’ll earn. These include points you can redeem for merchandise, experiences and more from the AMD Arena Rewards Store.

Courses galore

Training courses are at the heart of the AMD Arena site. Here are just a few of the many training courses waiting for you now:

  • AMD EPYC Processor Tool: Leverage the AMD processor-selector and total cost of ownership (TCO) tools to match your customers’ needs with the right AMD EPYC processor.
  • AMD EPYC Processor – Myth Busters: Get help fighting the myths and misconceptions around these powerful CPUs. Then show your data-center customers the way AMD EPYC delivers performance, security and scalability.

Get started

There's lots more training in AMD Arena, too. The site supports virtually all AMD products across all business segments. So you can learn about products you already sell as well as new products you'd like to cross-sell in the future.

To learn more, you can take this short training course: Introducing AMD Arena. In just 10 minutes, this course covers how to register for an AMD Arena account, use the Dashboard, complete missions and earn rewards.

Ready to learn, earn and win with AMD Arena? Visit AMD Arena now.

 

 
