Understanding the Rising Significance of FPGAs and GPUs in a CPU World

CPUs are getting help with applications that demand more of them. Complementary processors, such as GPUs and FPGAs, make a big difference on some workloads. Find out why.

  • October 25, 2022 | Author: David Strom

The PC has been around for more than 40 years, and for most of that time we have presumed that a computer’s heart is its central processing unit, or CPU. But as applications have become more demanding, the nature of the processor has evolved.

For many performance-intensive applications these days, you’ll find computers using multiple processors to complement the CPU, including field-programmable gate arrays (FPGAs) and graphical processing units (GPUs). Let’s look at this trend toward using complementary processors and see why they make sense for some types of applications.

Complementary processing is not a new idea. Among the first such chips for desktop computers was Intel’s 8087 math co-processor, available on some 8086- and 8088-based PCs beginning in the early 1980s. The CPU could offload math-intensive calculations to the 8087, leaving itself free to pursue other tasks. Of course, what was considered “math intensive” back then – on CPUs that ran at megahertz clock rates, using hardware that is now 40 years old – is vastly different from today’s processors, which run applications at exascale speeds and operate on terabytes of data.

Some of today’s applications, such as finite element analysis, image and natural language processing, and digital simulations, need a lot more horsepower. There are two types of workloads that ordinary CPUs – even those fitted to higher-end models – don’t handle well: parallel processing tasks and those requiring higher data throughput.

FPGAs and GPUs


An FPGA is a specialized integrated circuit whose logic can be reprogrammed after manufacturing. In the past, FPGAs were inexpensive chips designed for specific applications. Today’s chips are more flexible: a hardware description language lets engineers configure them for a wide range of specific applications.

GPUs consist of thousands of processor cores, so they are able to divide and conquer performance-intensive tasks. In contrast, the typical CPU has perhaps a dozen or so cores.
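
To make that divide-and-conquer idea concrete, here is a minimal sketch – an illustrative example, not taken from the article – of a CUDA program that adds two large vectors by assigning each GPU thread exactly one element, so the work is spread across thousands of cores at once instead of looping over a handful of CPU cores:

// Illustrative CUDA example: each GPU thread computes one element of c = a + b.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        c[i] = a[i] + b[i];  // each thread handles exactly one element
    }
}

int main() {
    const int n = 1 << 20;              // one million elements
    size_t bytes = n * sizeof(float);

    // Allocate and initialize host (CPU) data.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch enough threads to cover every element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check one value.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);      // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}

The kernel launch spreads a million additions across thousands of hardware threads in one step – the same pattern that underpins the larger data-science workloads described below.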

As one example, thanks to their strong parallel processing capabilities, GPUs can speed up data-science model development and training. As a result, models can be built – and refined – more quickly, with individual runs taking mere hours instead of multiple days.

Another reason that GPUs are popular is that the major cloud platform providers now offer GPU-based instances. These are very expensive machines to purchase, so “renting time” on them in the cloud is a boon to application developers, as discussed in Microsoft Azure’s More Capable Compute Instances Take Advantage of the Latest AMD EPYC™ Processors.
