Performance Intensive Computing

Capture the full potential of IT

For Greener Data Centers, Look to Energy-Efficient Components

Energy-efficient systems can help your customers lower their data-center costs while supporting a cleaner environment. 


Creating a more energy-efficient data center isn’t only good for the environment, but also a great way for your customers to lower their total cost of ownership (TCO).

In many organizations, the IT department is the single biggest consumer of power. Data centers are filled with power-hungry components, including servers, storage devices, air conditioning and cooling systems.

The average data center uses anywhere from 2 to 4 terawatt-hours (TWh) of electricity per year. Collectively, data centers account for nearly 3% of total global energy use, according to Supermicro. Looking ahead, that share is forecast to reach as high as 8% by 2030.

One important measure of data-center efficiency is Power Usage Effectiveness (PUE). It’s calculated by dividing a data center’s total electricity consumption by the electricity used by the center’s IT equipment alone. The difference is how much electricity goes to cooling, lighting and other non-IT loads.
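To make the arithmetic concrete, here’s a minimal sketch of the PUE calculation. The load figures are assumptions chosen for illustration, not measurements from any real facility:

```python
# Minimal PUE sketch. The load figures below are assumptions for illustration only.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

it_load_kw = 1_000.0   # assumed draw of servers, storage and network gear
overhead_kw = 550.0    # assumed draw of cooling, lighting and power distribution

print(f"PUE = {pue(it_load_kw + overhead_kw, it_load_kw):.2f}")  # prints PUE = 1.55
```

With these assumed numbers the result is 1.55, which happens to match the worldwide average cited below.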

The lower a data center’s PUE, the better. The most energy-efficient data centers approach the theoretical minimum of 1.0, where every watt goes to IT equipment. The worldwide average PUE in 2022 was 1.55, says the Uptime Institute, a benchmarking organization. That marked a slight improvement over 2021, when the average was 1.57.

Costly power

All that power is expensive, too. Among the short list of ways your customers can lower that cost, moving to energy-efficient server CPUs is especially effective.

For example, AMD says that 11 servers based on its 4th Gen AMD EPYC processors can use up to 29% less power per year than the 17 servers based on competing CPUs that would be required to handle the same workload volume. That consolidation can also help reduce an organization’s capital expenditures by up to 46%, according to AMD.

As that example shows, CPUs with more cores can also reduce power needs by handling the same workloads with fewer physical servers.
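To sanity-check consolidation math like the example above, here’s a small sketch. The per-server wattages are hypothetical values chosen purely for illustration, not AMD figures:

```python
# Hypothetical consolidation sketch; per-server wattages are illustrative assumptions.
HOURS_PER_YEAR = 24 * 365

legacy_servers, legacy_watts = 17, 850   # assumed legacy fleet and per-server draw
new_servers, new_watts = 11, 930         # assumed higher-core replacement servers

legacy_kwh = legacy_servers * legacy_watts * HOURS_PER_YEAR / 1000
new_kwh = new_servers * new_watts * HOURS_PER_YEAR / 1000

print(f"annual energy savings: {1 - new_kwh / legacy_kwh:.0%}")  # ~29% with these assumptions
```

Note that in this sketch each new server actually draws a bit more power than a legacy box; the savings come from needing far fewer of them.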

Yes, a high-core CPU typically consumes more power than one with fewer cores, especially when run at the same frequency. But by handling more workload volume, a high-core CPU lets your customer do the same or more work with fewer racks. That can also reduce the real estate footprint and lower the need for cooling.

Greener tactics

Other tactics can contribute to a greener data center, too.

One approach involves what Supermicro calls a “disaggregated” server architecture. Essentially, this means that a server’s subsystems—including its CPU, memory and storage—can be upgraded without having to replace the entire chassis. For a double benefit, this lowers TCO while reducing E-waste.

Another approach involves designing servers that can share certain resources, such as power supplies and fans. This can lower power needs by up to 10%, Supermicro says.

Yet another approach, also found in Supermicro designs, is engineering servers for maximum airflow. This allows CPUs to run reliably at higher ambient temperatures, reducing the need for air cooling.

It can also lower the load on a server’s fans. That’s a big deal, because a server’s fans can consume up to 15% of its total power.

Supermicro is also designing systems for liquid cooling. This allows a server’s fans to run at lower speeds, reducing its power needs. Liquid cooling can also lower the need for air conditioning, which in turn lowers PUE.

Liquid cooling functions much like a car’s radiator system. It’s basically a closed loop involving an external “chiller” that cools the liquid and a series of pipes. The liquid is pumped through one or more lines that pass over a server’s CPU and GPU. Heat from those components warms the liquid, and the now-hot liquid is sent back to the chiller to be cooled and recirculated.
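For a rough sense of the physics, the standard sensible-heat relationship (heat removed equals flow rate times specific heat times temperature rise) tells you how much coolant flow a given heat load needs. The figures in this sketch are assumptions for illustration only:

```python
# Back-of-the-envelope coolant-flow estimate; all numbers are illustrative assumptions.
heat_kw = 10.0        # assumed heat output of a densely packed server (kW)
c_p_water = 4.186     # specific heat of water, kJ/(kg*K)
delta_t = 10.0        # assumed temperature rise of the coolant across the loop (K)

flow_kg_s = heat_kw / (c_p_water * delta_t)            # m_dot = Q / (c_p * delta_T)
print(f"required flow: {flow_kg_s:.2f} kg/s (~{flow_kg_s * 60:.0f} liters/min of water)")
```

Even a modest flow of water, in other words, can carry away heat that would otherwise require a great deal of fan power and air conditioning.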

Green vendors

Leading suppliers can help you help your customers go green.

AMD, for one, has pledged to deliver a 30x increase in the energy efficiency of its processors and accelerators by 2025. Because a 30x gain means each computation needs only 1/30th as much energy, that works out to roughly a 97% reduction in energy use per computation.

Similarly, Supermicro is working hard to help customers create green data centers. The company participates in industry consortia focused on new cooling alternatives and is a leader in the Liquid Cooling Standing Working Group of The Green Grid, a membership organization that fosters energy-efficient data centers.

Supermicro also offers products based on its disaggregated rack-scale design approach, delivering higher efficiency and lower costs.

Learn, Earn and Win with AMD Arena

Channel partners can learn about AMD products and technologies at the AMD Arena site. It’s your site for AMD partner training courses, redeemable points and much more.


Interested in learning more about AMD products while also earning points you can redeem for valuable merch? Then check out the AMD Arena site.

There, you can:

  • Stay current on the latest AMD products with training courses, sales tools, webinars and quizzes;
  • Earn points, unlock levels and secure your place in the leaderboard;
  • Redeem those points for valuable products, experiences and merchandise in the AMD Rewards store.

Registering for AMD Arena is quick, easy and free. Once you’re in, you’ll have an Arena Dashboard as your control center. It’s where you can control your profile, begin a mission, track your progress, and view your collection of badges.

Missions are made of learning objectives that take you through training courses, sales tools, webinars and quizzes. Complete a mission, and you can earn points, badges and chips; unlock levels; and climb the leaderboard.

The more missions you complete, the more rewards you’ll earn. These include points you can redeem for merchandise, experiences and more from the AMD Arena Rewards Store.

Courses galore

Training courses are at the heart of the AMD Arena site. Here are two of the many training courses waiting for you now:

  • AMD EPYC Processor Tool: Leverage the AMD processor-selector and total cost of ownership (TCO) tools to match your customers’ needs with the right AMD EPYC processor.
  • AMD EPYC Processor – Myth Busters: Get help fighting the myths and misconceptions around these powerful CPUs. Then show your data-center customers the way AMD EPYC delivers performance, security and scalability.

Get started

There’s lots more training in AMD Arena, too. The site supports virtually all AMD products across all business segments, so you can learn about both the products you already sell and new products you’d like to cross-sell in the future.

To learn more, you can take this short training course: Introducing AMD Arena. In just 10 minutes, this course covers how to register for an AMD Arena account, use the Dashboard, complete missions and earn rewards.

Ready to learn, earn and win with AMD Arena? Visit AMD Arena now.

 

 

Supermicro H13 Servers Maximize Your High-Performance Data Center


The modern data center must be both highly performant and energy efficient. Massive amounts of data are generated at the edge and then analyzed in the data center. New CPU technologies are constantly being developed that can analyze that data, determine the best course of action, and shorten the time it takes to understand the world around us and make better decisions.

As digital transformation continues, a wide range of data acquisition, storage and computing systems keeps evolving with each CPU generation. The latest CPU generations continue to innovate both within their core computational units and in the technologies they use to communicate with memory, storage devices, networking and accelerators.

Servers, and by extension the CPUs within them, form a continuum of computing and I/O power. The combination of core counts, clock rates, memory access and data-path width determines which servers fit which workloads. In addition, the server that houses the CPUs may take different form factors to suit environments with airflow or power restrictions. The key for a server manufacturer that wants to address a wide range of applications is a building-block approach to designing new systems. That way, a range of systems can be released simultaneously in many form factors, each tailored to its operating environment.

The new H13 Supermicro product line, based on 4th Generation AMD EPYC™ CPUs, supports a broad spectrum of workloads and excels at helping a business achieve its goals.

Get speeds, feeds and other specs on Supermicro’s latest line-up of servers

Perspective: Looking Back on the Rise of Supercomputing


We’ve come a long way in the development of high-performance computing. Back in 2004, I attended an event held in the gym at the University of San Francisco. The goal was to crowd-source computing power by connecting the PCs of volunteers participating in the first “Flash Mob Computing” cluster-computing event. Several hundred PCs were networked together in the hope that they would create one of the largest supercomputers, albeit only for a few hours.

 

I brought two laptops for the cause. The participation rules stated that the data on our hard drives would remain intact. Each computer ran a specially crafted boot CD containing Linpack, a Linux-based benchmark built on a software library for numerical linear algebra, which was used to measure the collective computing power.

 

The event attracted people with water-cooled overclocked PCs, naked PCs (no cases, just the boards and other components) and custom-made rigs with fancy cases. After a few hours, we had roughly 650 PCs on the floor of the gym. Each PC was connected to a bunch of Foundry BigIron super-switches that were located around the room.

 

The 2004 experiment brought out several industry luminaries, such as Gordon Bell, who was the father of the Digital Equipment Corporation VAX minicomputer, and Jim Gray, who was one of the original designers behind the TPC benchmark while he was at Tandem. Both men at the time were Microsoft fellows. Bell was carrying his own laptop but had forgotten to bring his CD drive, so he couldn’t connect to the mob.

 

Network shortcomings

 

What was most interesting to me, and what gave rise to the mob’s eventual undoing, were the networking issues involved with assembling and running such a huge collection of gear. The mob used ordinary 100BaseT Ethernet, which was a double-edged sword. While easy to set up, it was difficult to debug when network problems arose. The Linpack benchmark requires all the component machines to be running concurrently during the test, and the organizers had trouble getting all 600-plus PCs to operate online flawlessly. The best benchmark accomplished was a peak rate of 180 gigaflops using 256 computers, but that wasn’t an official score as one node failed during the test.

 

To give you an idea of where this stood in terms of overall supercomputing prowess, it was better than the Cray supercomputers of the early 1990s, which delivered around 16 gigaflops.

 

At the website top500.org (which tracks the fastest supercomputers around the globe), you can see that all the current top 500 machines are measured in petaflops (1 million gigaflops). The Oak Ridge National Laboratory’s Frontier machine, which has occupied the number one spot this year, weighs in at more than 1,000 petaflops and uses more than 8 million cores. To make the fastest-500 list back in 2004, the mob would have had to achieve a benchmark of over 600 gigaflops. Because of the networking problems, we’ll never know for sure. Still, it was an impressive achievement, given the motley mix of machines. All of the world’s top 500 supercomputers are custom built and carefully curated and assembled to attain that level of computing performance.

 

Another historical note: back in 2004, one of the more interesting entries came in third on the top500.org list: a collection of several thousand Apple Macintoshes running at Virginia Tech. Today, as you might imagine, almost all of the fastest 500 supercomputers are based on a combination of CPU and GPU architectures.

 

Today, you can buy your own supercomputer on the retail market, such as the Supermicro SuperBlade® models. And of course, you can routinely run much faster networking protocols than 100-megabit Ethernet.

Supermicro SuperBlades®: Designed to Power Through Distributed AI/ML Training Models

Running heavy AI/ML workloads can be a challenge for any server. But the SuperBlade offers extremely fast networking options, upgradability, the ability to run two AMD EPYC™ 7000-series 64-core processors, and support for the Horovod open-source framework for scaling deep-learning training across multiple GPUs.


Running the largest artificial intelligence (AI) and machine learning (ML) workloads is a job for higher-performing systems; such loads are often tough even for otherwise capable machines. Supermicro’s SuperBlade combines blades built around AMD EPYC™ CPUs and GPUs in a single rack-mounted enclosure (such as the Supermicro SBE-820H-822), which provides an extremely fast networking architecture for demanding applications that need to communicate with other servers to complete a task.

 

The Supermicro SuperBlade fits everything into an 8U chassis that can host up to 20 individual servers. This means a single chassis can be divided between separate training and model-processing jobs. The components are key: servers can take advantage of the 200G HDR InfiniBand network switch without losing any performance. Think of this as a cloud-in-a-box, providing easier management of the cluster along with higher performance and lower latency.

 

The Supermicro SuperBlade is also designed as a disaggregated server, meaning that components can be upgraded with newer and more efficient CPUs or memory as technology progresses. This feature significantly reduces E-waste.


The SuperBlade line supports a wide range of configurations, including both CPU-only and mixed CPU/GPU models such as the SBA-4119SG, which comes with up to two AMD EPYC™ 7000-series 64-core CPUs. These components are delivered on blades that slide in easily, and slide out just as easily when a blade or enclosure needs replacing. The SuperBlade servers also support a wide selection of networking options, ranging from 10G to 200G Ethernet connections.

 

The SuperBlade employs Horovod, a distributed model-training framework built on a message-passing interface (MPI), to let multiple ML sessions run in parallel and maximize performance. In a sample test, two SuperBlade nodes processed 3,622 GoogleNet images per second, and eight nodes scaled up to 13,475 GoogleNet images per second.
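For readers who haven’t seen Horovod before, the sketch below shows the general training pattern it enables. It’s a minimal, hypothetical PyTorch-backed example with a placeholder model and random data, not Supermicro’s benchmark code:

```python
# Minimal Horovod training sketch (PyTorch backend); model and data are placeholders.
import torch
import horovod.torch as hvd

hvd.init()                                   # start the MPI-based communicator
torch.cuda.set_device(hvd.local_rank())      # pin each worker process to one GPU

model = torch.nn.Linear(1024, 10).cuda()     # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Average gradients across all workers on every step
optimizer = hvd.DistributedOptimizer(optimizer,
                                     named_parameters=model.named_parameters())
# Start every worker from identical weights
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

for step in range(100):
    x = torch.randn(64, 1024, device="cuda")        # stand-in for a real data shard
    y = torch.randint(0, 10, (64,), device="cuda")
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
```

Launched with a command along the lines of horovodrun -np 16 python train.py, the same script spreads across blades and GPUs with no code changes, which is the property the scaling figures above rely on.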


As you can see, Supermicro’s SuperBlade improves performance-intensive computing and boosts AI and ML use cases, enabling larger models and data workloads. The combined solution delivers higher operational efficiency: it helps streamline processes, monitor for potential breakdowns, apply fixes, keep accurate and actionable data flowing, and scale training across multiple nodes.

Red Hat’s OpenShift Runs More Efficiently with Supermicro’s SuperBlade® Servers

The Supermicro SuperBlade's advantage for the Red Hat OCP environment is that it supports a higher-density infrastructure and a lower-latency network configuration, along with reduced cabling, lower power draw and shared cooling. SuperBlades feature multiple AMD EPYC™ processors using fast DDR4-3200 memory modules.


Red Hat’s OpenShift Container Platform (OCP) provides enterprise Kubernetes bundled with DevOps pipelines. It automates builds and container deployments, letting developers focus on application logic while leveraging best-in-class enterprise infrastructure.

 

OpenShift supports a broad range of programming languages, web frameworks, databases, and connectors to mobile devices and external back ends. OCP supports both cloud-native, stateless applications and traditional applications. Because of its flexibility and utility in running advanced applications, OCP has become one of the go-to platforms for high-performance computing.

 

Red Hat’s OCP comes in several deployment packages, including as a managed service running on the major cloud platforms, as virtual machines, and on “bare metal” servers, meaning a user installs all the software needed for the platform and is the sole tenant of the server.

 

It’s that last use case in which Supermicro’s SuperBlade servers are especially useful. Their advantage is that they support a higher-density infrastructure and a lower-latency network configuration, along with reduced cabling, lower power draw and shared cooling.

 

The SuperBlade comes in an 8U chassis with room for up to 20 hot-pluggable nodes (processor, network and storage) across more than a dozen models that support serial-attached SCSI, ordinary SATA drives and GPU modules. It sports multiple AMD EPYC™ processors using fast DDR4-3200 memory modules.

A chief advantage of the SuperBlade is that it can support a variety of higher-capacity OCP workload configurations within a single server chassis. This is critical because OCP requires a variety of server roles to deliver its overall functionality, and having those roles run inside one chassis brings performance and latency benefits. For example, you could partition a SuperBlade’s 20 nodes into various OCP roles, such as administrative, management, storage, worker, infrastructure and load-balancer nodes, all operating within a single chassis; a rough sketch of such a partition follows below. For deeper detail about running OCP on the SuperBlade, check out this Supermicro white paper.
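The sketch below is one hypothetical way to express that kind of role split using the Kubernetes Python client. The blade names and the role plan are invented, and a production OpenShift cluster would normally assign roles through its installer and MachineConfigPools rather than ad hoc labels:

```python
# Hypothetical sketch: labeling SuperBlade nodes with OpenShift-style roles.
# Blade names and the role plan are invented for illustration.
from kubernetes import client, config

ROLE_PLAN = {
    "blade-01": "master",   # administrative / control-plane nodes
    "blade-02": "master",
    "blade-03": "master",
    "blade-04": "infra",    # routers, registry, monitoring
    "blade-05": "infra",
}
# Remaining blades in the chassis stay ordinary workers.

config.load_kube_config()   # assumes a working kubeconfig for the cluster
v1 = client.CoreV1Api()

for node, role in ROLE_PLAN.items():
    patch = {"metadata": {"labels": {f"node-role.kubernetes.io/{role}": ""}}}
    v1.patch_node(node, patch)
    print(f"labeled {node} as {role}")
```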

Offering Distinct Advantages: The AMD Instinct™ MI210 and MI250 Series GPU Accelerators and Supermicro SuperBlades

Using a 6-nanometer process and CDNA 2 graphics dies, AMD has created its third generation of GPU accelerators, which offer more than twice the performance of the previous generation and deliver 181 teraflops of peak mixed-precision computing power.


AMD and Supermicro have made it easier to exploit the most advanced combination of GPU and CPU technologies.

Derek Bouius, a senior product manager at AMD, said, “Using six nanometer processes and the CDNA2 graphics dies, we created the third generation of GPU chipsets that have more than twice the performance of previous GPU processors. They deliver 181 teraflops of mixed precision peak computing power.” Called the AMD Instinct MI210™ and AMD Instinct MI250™, the accelerators have twice the memory (64 GB) to work with and deliver data at a rate of 1.6 TB/sec. Both are packaged as PCIe Gen 4 expansion cards and include direct connectors for Infinity Fabric bridges, so traffic between GPU cards gets faster I/O throughput without passing over the standard PCIe bus.

The Instinct accelerators bring immediate performance benefits to the most complex computational applications, such as molecular dynamics, computer-aided engineering, weather modeling, and oil and gas modeling.

"We provided optimized containerized applications that are pre-built to support the accelerator and run them out of the box," Bouius said. “It is a very easy lift to go from existing solutions to the AMD accelerator,” he added. It’s accomplished by bringing together AMD’s ROCm™ support libraries and tools with its HIP programming language and device drivers – all of which are open source. They can unlock the GPU performance enhancements to make it easier for software developers to take advantage of its latest processors. AMD offers a catalog of dozens of currently available applications.

Supermicro’s SuperBlade product line combines the new AMD Instinct™ GPU accelerators and AMD EPYC™ processors to deliver higher performance with lower latency for its enterprise customers.

One packaging option is to combine six chassis with 20 blades each, delivering 120 servers that provide a total of more than 3,000 teraflops of combined processing power. This equipment delivers more power efficiency in less space with fewer cables, providing a lower cost of ownership. The blade servers are all hot-pluggable and come with two onboard front-mounted 25 gigabit and two 10 gigabit Ethernet connectors.

“Everything is faster now for running enterprise workloads,” says Shanthi Adloori, senior director of product management for Supermicro. “This is why our Supermicro servers have won the world record in performance from the Standard Performance Evaluation Corp. three years in a row.” Another popular design for the SuperBlade is to provide an entire “private cloud in a box” that combines administration and worker nodes and handles deploying the Red Hat OpenShift platform to run Kubernetes-based deployments with minimal provisioning.

AMD and Supermicro Work Together to Produce the Latest High-Performance Computers


Solving some of business’s bigger computing challenges requires a solid partnership among CPU vendor, system builder and channel partner. That is what AMD and Supermicro have brought to market: 3rd Generation AMD EPYC™ processors with AMD 3D V-Cache™ and AMD Instinct™ MI200 series GPU accelerators, wrapped up in SuperBlade servers built by Supermicro.

 

“This has immediate benefits for particular fields such as crash and digital circuit simulations and electronic design automation,” said David Weber, Senior Manager for AMD. “It means we can create virtual chips and track workflows and performance before we design and build the silicon." The same situation holds for computational fluid dynamics, he added, "in which we can determine the virtual air and water flows across wings and through water pumps and save a lot of time and money, and the AMD 3D V-Cache™ makes this process a lot faster.” Without any software coding changes, these applications are seeing 50% to 80% performance improvement, Weber said.

 

The chips are not just fast; they also come with several built-in security features, including shadow-stack support. They are built on Zen 3, the overall name for a series of improvements to AMD’s higher-end CPU line that deliver a 19% improvement in instructions per clock and lower cache latency, thanks to a doubled directly accessible cache, compared with the earlier Zen 2 architecture.

 

These processors also support Microsoft’s Hardware-enforced Stack Protection to help detect and thwart control-flow attacks by checking the normal program stack against a secured hardware-stored copy. This helps to boot securely, protect the computer from firmware vulnerabilities, shield the operating system from attacks, and prevent unauthorized access to devices and data with advanced access controls and authentication systems.

 

Supermicro’s SuperBlade servers take advantage of all these performance and security improvements. For more information, see this webcast.

Queensland Educational Foundation Boosts IT Security with Supermicro Computers Using AMD EPYC™ CPUs

In South Africa, the Queensland Education Foundation supports 11 schools covering the first 12 grades. In an effort to transform the region into a marquee digital environment, it has built a series of fully networked and online classrooms. The network is used both to supply connectivity and as a pedagogical tool to teach students enterprise IT concepts and provide hands-on instruction.


In South Africa, the Queensland Education Foundation’s legacy security infrastructure – including dedicated firewalls – was overloaded and operating close to maximum capacity.

 

The Queensland Education Foundation (QEF) supports 11 schools covering the first 12 grades. In an effort to transform the region into a marquee digital environment, it has built a series of fully networked and online classrooms. The network is used both to supply connectivity and as a pedagogical tool to teach students enterprise IT concepts and provide hands-on instruction in their use. Combine that with the increased demands COVID-19 placed on students learning from home, and the foundation needed to beef up its wide-area network with a higher-capacity fiber ring and better security software.

 

The foundation’s IT team went looking for a single-socket server solution to simplify support and reduce power and cooling requirements. The system would run the Arista Edge Threat Management software firewall and other security tools to protect the schools’ networks and help support student file sharing across the member schools.

 

The IT team experimented with an earlier Supermicro server to test the concept, "but it wasn’t powerful enough," said Johan Bester, one of the IT managers for the QEF. Eventually, the team selected the Supermicro A+ server powered by the AMD EPYC™ 7502 CPU with 128GB of RAM.

 

The server also contains four 10Gbps Ethernet ports to boost I/O performance. "With this server, we are able to offer our students a safe environment while encouraging collaborative projects among different schools," he said. The team was attracted to the A+ server because of its price/performance ratio. Plus, its specs met the foundation’s existing service-level agreements while delivering increased functionality. The Supermicro system can also serve as a template that can be easily replicated across other South African school networks.

For more detail on the Queensland Education Foundation's adoption of Supermicro and AMD computing technologies, see the QEF case study on the Supermicro website.
