
Do you know why 64 cores really matters?

In a recent test, Supermicro workstations and servers powered by 3rd gen AMD Ryzen Threadripper PRO processors ran engineering simulations nearly as fast as a dual-processor system, but needed only two-thirds as much power.


More cores per CPU sounds good, but what does it actually mean for your customers?

In the case of certain Supermicro workstations and servers powered by 3rd gen AMD Ryzen Threadripper PRO processors, it means running engineering simulations with dual-processor performance from a single-socket system, plus further cost savings from needing only about two-thirds as much power.

That’s according to tests recently conducted by MVConcept, a consulting firm that provides hardware and software optimizations. The firm tested two Supermicro systems, the AS-5014A-TT SuperWorkstation and AS-2114GT-DPNR server.

A solution brief based on MVConcept’s testing is now available from Supermicro.

Test setup

For these tests, the Supermicro server and workstation were both tested in two AMD configurations:

  • One with the AMD Ryzen Threadripper PRO 5995WX processor
  • The other with an older, 2nd gen AMD Ryzen Threadripper PRO 3995WX processor

In the tests, both AMD processors were used to run 32-core as well as 64-core operations.

The Supermicro systems were tested running Ansys Fluent, fluid simulation software from Ansys Inc. Fluent models fluid flow, heat, mass transfer and chemical reactions. The benchmarks included aircraft-wing, oil-rig and pump simulations.

The results

Among the results: The Supermicro systems delivered nearly dual-CPU performance with a single processor, while also consuming less electricity.

What’s more, the 3rd generation AMD 5995WX CPU delivered significantly better performance than the 2nd generation AMD 3995WX.

Systems with larger caches saw the biggest performance gains: a system with 256MB of L3 cache outperformed one with just 128MB.

BIOS settings proved especially important for getting optimal performance from the AMD Ryzen Threadripper PRO when running the tested applications. Specifically, Supermicro recommends NPS=4 and SMT=OFF when running Ansys Fluent on AMD Ryzen Threadripper PRO. (NPS = NUMA nodes per socket; SMT = simultaneous multithreading.)
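If you want to sanity-check those settings from the operating system before kicking off a run, here is a minimal sketch, assuming a Linux host with the standard sysfs paths (the expected values in the comments follow from the recommendation above):

```python
# Report NUMA node count and SMT state to confirm the BIOS is set to NPS=4 and SMT=OFF.
import glob
import pathlib

numa_nodes = len(glob.glob("/sys/devices/system/node/node[0-9]*"))
smt_file = pathlib.Path("/sys/devices/system/cpu/smt/control")
smt_state = smt_file.read_text().strip() if smt_file.exists() else "unknown"

print(f"NUMA nodes visible to the OS: {numa_nodes}")  # expect 4 with NPS=4
print(f"SMT state: {smt_state}")                      # expect 'off' with SMT disabled
```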

Another cool factor involves taking advantage of the Supermicro AS-2114GT-DPNR server’s two hot-pluggable nodes. One node can be used to pre-process the data, while the other runs Ansys Fluent.

Put it all together, and you get a powerful takeaway for your customers: These AMD-powered Supermicro systems offer data-center power on both the desktop and server rack, making them ideal for SMBs and enterprises alike.


Try before you buy with Supermicro’s H13 JumpStart remote access program

The Supermicro H13 JumpStart Remote Access program lets you and your customers test data-center workloads on Supermicro systems based on 4th Gen AMD EPYC 9004 Series processors. Even better, the program is free.


You and your customers can now try out systems based on 4th Gen AMD EPYC 9004 Series processors at no cost with the Supermicro remote access program.

Called H13 JumpStart, the free program offers remote access to Supermicro’s top-end H13 systems.

Supermicro’s H13 systems are designed for today’s advanced data-center workloads. They feature 4th Gen AMD EPYC 9004 Series processors with up to 96 Zen 4 cores per socket, DDR5 memory, PCIe 5.0, and support for Compute Express Link (CXL) 1.1+ peripherals.

The H13 JumpStart program lets you and your customers validate, test and benchmark workloads on either of two Supermicro systems:

  • Hyper AS-2025HS-TNR: Features dual AMD EPYC processors, 24 DIMMs, up to 3 accelerator cards, an AIOM network adapter, and 12 hot-swap NVMe/SAS/SATA drive bays.
  • CloudDC AS-2015CS-TNR: Features a single AMD EPYC processor, 12 DIMMs, 4 accelerator cards, dual AIOM network adapters, and a 240GB solid-state drive.

Simple startup

Getting started with Supermicro’s H13 JumpStart program is simple. Just sign up with your name, email and a brief description of what you plan to do with the system.

Next, Supermicro will verify your information and your request. Assuming you qualify, you’ll receive a welcome email from Supermicro, and you’ll be scheduled to gain access to the JumpStart server.

Next, you’ll be given a unique username, password and URL to access your JumpStart account.

Then run your test. Once you’re done, Supermicro will ask you to complete a quick survey to share your feedback on the program.

Other details

The JumpStart program does have a few limitations. One is the number of sessions you can have open at once. Currently, it’s limited to 1 VNC (virtual network computing), 1 SSH (secure shell), and 1 IPMI (intelligent platform management interface) session per user.

Also, the JumpStart test server is not directly addressable from the internet. However, the servers can reach out to the internet to retrieve files.

You should test with JumpStart using anonymized data only. That’s because the Supermicro server’s security policies may differ from those of your organization.

But rest assured, once you’re done with your JumpStart demo, the server storage is manually erased, the BIOS and firmware are reflashed, and the OS is re-installed with new credentials. So your data and personal information are completely removed.

Get started

Ready to get a jump-start with Supermicro’s H13 JumpStart Remote Access program? Apply now to secure access.

Want to learn more about Supermicro’s H13 system portfolio? Check out a 5-part video series featuring Linus Sebastian of Linus Tech Tips. He takes a deep dive into how these Supermicro systems run faster and greener. 

 


How rackscale integration can help your customers get productive faster

Supermicro’s rack integration and deployment service can help your customers get productive sooner.

 


How would your key data-center customers like to improve their server performance, speed their rate of innovation, and lower their organization’s environmental impact—all while getting productive sooner?

Those are among the key benefits of Supermicro’s rack integration and deployment service. It’s essentially a one-stop shop: a defined process, guided by experts, for designing and building an effective, efficient cloud or enterprise hardware solution.

Supermicro’s dedicated team can provide everything from early design to onsite integration. That includes design, assembly, configuration, testing and delivery.

Hardware covered by Supermicro’s rack integration service includes servers, storage, switches and rack products. That includes systems based on the latest 4th Generation AMD EPYC server processors. Supermicro’s experts can also work closely with your customer to design a test plan that includes application loading, performance tuning and testing.

All of this supports a wide range of optimized solutions, including AI and deep learning, big data and Hadoop refreshes, and vSAN.

Customers of Supermicro’s rackscale systems can also opt for liquid cooling, which can reduce operating expenses by more than 40%. And by lowering fan speeds, liquid cooling further reduces power needs, delivering a power usage effectiveness (PUE) of close to 1.0. All that typically provides an ROI in just one year, according to Supermicro.

Five-phase integration

When your customers work with Supermicro on rack integration, they’ll get support through 5 phases:

  • Design: Supermicro learns your customer’s business problems and requirements, develops a proof-of-concept to validate the solution, then selects the most suitable hardware and works with your customer on power requirements and budgets. Then it creates a bill of materials, followed by a detailed rack-level engineering diagram.
  • Assembly: Supermicro technicians familiar with the company’s servers assemble the system, either on your customer’s site or pre-shipment at a Supermicro facility. This includes all nodes, racks, cabling and third-party equipment.
  • Configuration: Each server’s BIOS is updated, optimized and tested. Firmware gets updated, too. OSes and custom images are pre-installed or deployed to specific nodes as needed.
  • Testing: This includes a performance analysis, a check for multi-vendor compatibility, and full rack burn-in testing for a standard 8 hours.
  • Logistics: Supermicro ships the complete system to your customer’s site, can install it, and provides ongoing customer service.

Big benefits

For your customers, the benefits of working with Supermicro and AMD can include better performance per watt and per dollar, faster time to market with IT innovation, a reduced environmental impact, and lower costs.

Further, once the system is installed, Supermicro’s support can significantly reduce lead times to fix system issues. The company keeps the whole process from L6 to L12 (board-level assembly through full rack-level integration) in-house, and it maintains a vast inventory of spare parts on campus.

Wherever your customers are located, Supermicro likely has an office nearby. With a global footprint, Supermicro operates across the U.S., EMEA and Taiwan. Supermicro has invested heavily in rack-integration testing facilities, too. These centers are now being expanded to test rack-level air and liquid cooling.

For your customers with cloud-based systems, there are additional benefits. These include optimizing the IT environment for their clouds, and meeting co-location requirements.

There’s business for channel partners, too. You can add specific software to the rack system. And you can work with your customer on training and more.


AMD-based servers support enterprise applications — and break OLTP records

AMD EPYC server processors are designed to help your data-center customers get their workloads done faster and with fewer computing resources.

 



AMD EPYC server processors offer a consistent set of features across a range of choices from 8 to 96 cores. This balanced set of resources found in AMD EPYC processors lets your customers right-size server configurations to fit their workloads.

What’s more, these AMD CPUs include models that offer high per-core performance optimized for frequency-sensitive and single-threaded workloads. This can help reduce the TCO for core-based software licenses.

AMD introduced the 4th Generation AMD EPYC processors in late 2022. The first of this generation are the AMD EPYC 9004 series CPUs. They’ve been designed to support performance and efficiency, help keep data secure, and use the latest industry features and architectures.

AMD continues to ship and support the previous 2nd and 3rd Generation AMD EPYC 7002 and 7003 series processors. These processors power servers that are now available from a long list of leading hardware suppliers, including Supermicro.

Record-breaking

Good as all that may sound, you and your customers still need hard evidence that AMD processors can truly speed up their enterprise applications. Well, a new independent test of AMD-based Supermicro servers has provided just that.

The test was performed by the Telecommunications Technology Association (TTA), an IT standardization association based in Seongnam, South Korea. The TTA tested several Supermicro database and web servers powered by 3rd Gen AMD EPYC 7343 processors.

The results: The Supermicro servers set a world record for performance by a non-cluster system of 507,802 transactions per minute (tpmC).

That test was conducted using the TPC-C benchmark, which measures a server’s online transaction processing (OLTP) performance. The tpmC metric measures how many new-order transactions a system can complete in a minute while executing business transactions under specific response-time requirements.
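To put that headline figure in more familiar units, here’s the simple arithmetic (just a conversion of the published number, not part of the benchmark itself):

```python
# Convert the TTA result from new-order transactions per minute to per second.
tpmC = 507_802
print(f"{tpmC:,} tpmC is roughly {tpmC / 60:,.0f} new-order transactions per second")
```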

What’s more, when compared with servers based on the previous 2nd Gen AMD EPYC processors, the newer Supermicro servers were 33% faster, as shown in the chart below:

DATA: Telecommunications Technology Association

All that leads the TTA to conclude that Supermicro servers powered by the latest AMD processors “empower organizations to create deployments that deliver data insights faster than ever before.”


Note:

1. https://www.tpc.org/1809

 


Protect Customer Data Centers with AMD Infinity Guard

AMD’s 4th Gen EPYC server processors can keep your customers safe with Infinity Guard, a set of innovative and powerful security features.


When AMD released its 4th generation EPYC server processors, the company also doubled down on its commitment to enterprise data-center security. AMD did so with a set of security features it calls AMD Infinity Guard.

The latest EPYC processors—previously code-named Genoa—include an array of silicon-level security assets designed to resist increasingly sophisticated cyberattacks.

CIOs and IT managers who deploy AMD’s latest security tech may sigh with relief as they sidestep mounting threats such as ransomware, malicious virtual machines (VMs) and hypervisor-based attacks like data replay and memory re-mapping.

Growing concerns

Hackers are relentless. Beguiled by the siren song of easy riches through cybercrime, they spend countless hours devising new ways to exploit even the slightest hardware vulnerability. The bigger the organization, the more money these cyber criminals can extort—which is why they often target enterprise data centers.

AMD took this into account when designing the EPYC server processor series. The company had three goals: to address hardware-level vulnerabilities, eliminate likely threat vectors, and deny hackers access to any surface they could exploit.

Perhaps just as vital, AMD set a goal of addressing security concerns without impacting system performance. This is especially important for modern application workloads that require both high performance and low latency.

For instance, organizations that offer streaming content and mass storage could be just as easily crushed by glitches and malfunctions as they could by a significant security breach.

Security tech within

AMD is taking a decidedly ain’t-messin’-around approach to its latest security tech. Rather than paying lip service to IT Ops’ concerns, AMD engineers went deep down into the heart of their processor architecture to identify and remedy threat vectors.

The impressive security portfolio includes 4 primary tools to guard against threats; a quick host-side check for SEV and SME support is sketched just after this list:

  • Secure Encrypted Virtualization: SEV provides individual encryption for every virtual machine on a given server. Each VM is assigned one of up to 509 unique encryption keys known only to the processor. This protects data confidentiality in the event that a malicious VM breaches a system’s memory, or a compromised hypervisor reaches into a guest VM.
  • Secure Memory Encryption: Full memory encryption protects against internal and physical attacks such as the dreaded cold boot attack. There, an attacker with physical access to a computer conducts a memory dump by performing a hard reset of the target machine. SME ensures that the data remains encrypted even if the main memory is physically removed from a server.
  • Secure Boot: To help mitigate the threat of malware, AMD EPYC processors employ an embedded security checkpoint called a “root of trust.” This validates the initial BIOS software boot without corruption.
  • Shadow Stack: It may sound like a Marvel superhero, but in fact this guards against threat vectors such as return-oriented programming (ROP) attacks. Shadow Stack does this by compiling a record of return addresses so a comparison can be made to help ensure software-code integrity.
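As a quick way to see whether a given Linux host exposes these capabilities, the sketch below checks the CPU feature flags and the KVM module parameter. It assumes a Linux system with an AMD EPYC processor; the paths and flag names are standard kernel interfaces, though exact output varies by kernel version:

```python
# Check whether the kernel reports SME/SEV support and whether KVM has SEV enabled.
import pathlib

flags = set()
for line in pathlib.Path("/proc/cpuinfo").read_text().splitlines():
    if line.startswith("flags"):
        flags = set(line.split(":", 1)[1].split())
        break

print("SME flag present:", "sme" in flags)
print("SEV flag present:", "sev" in flags)

sev_param = pathlib.Path("/sys/module/kvm_amd/parameters/sev")
if sev_param.exists():
    # Reads 'Y' (or '1' on older kernels) when SEV is enabled for KVM guests.
    print("KVM SEV enabled:", sev_param.read_text().strip())
```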

A well-rounded engine

A modern server processor serves many masters. While addressing security concerns is vitally important, so are ensuring high performance, impressive energy efficiency and a decent return on investment (ROI).

Your customers may appreciate knowing that AMD’s latest EPYC processor series addresses these factors. Rather than focusing solely on headline-grabbing tech like speeds & feeds, AMD took a more holistic approach, addressing many issues endemic to modern data-center operations.

EPYC CPUs also boast broad ecosystem support. For AMD, this means fostering collaboration with a network of solution providers. And for your customers, this means worry-free migration and seamless integration with their existing x86 infrastructures.

Your data-center customers are probably concerned about security. Who isn’t, these days? So talk to them about AMD Infinity Guard. After all, a secure customer is a happy customer.

 


For Greener Data Centers, Look to Energy-Efficient Components

Energy-efficient systems can help your customers lower their data-center costs while supporting a cleaner environment. 


Creating a more energy-efficient data center isn’t only good for the environment, but also a great way for your customers to lower their total cost of ownership (TCO).

In many organizations, the IT department is the single biggest consumer of power. Data centers are filled with power-hungry components, including servers, storage devices, air conditioning and cooling systems.

Data centers worldwide consume hundreds of terawatt-hours (TWh) of electricity per year, which works out to nearly 3% of total global energy use, according to Supermicro. Looking ahead, that’s forecast to reach as high as 8% by 2030.

One important measure of data-center efficiency is Power Usage Effectiveness (PUE). It’s calculated by dividing the total electricity used by a data center by the electricity used by the center’s IT components. The difference is how much electricity goes to cooling, lighting and other non-IT loads.

The lower a data center’s PUE, the better. The most energy-efficient data centers have a PUE approaching the ideal of 1.0. The average PUE worldwide last year was 1.55, says the Uptime Institute, a benchmarking organization. That marked a slight improvement over 2021, when the average was 1.57.
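In code, the calculation is a one-liner. Here’s a minimal sketch with illustrative numbers (not measured data) that lands on last year’s 1.55 average:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / energy used by IT equipment."""
    return total_facility_kwh / it_equipment_kwh

# A facility drawing 1,550 kWh to deliver 1,000 kWh to its IT gear has a PUE of 1.55.
print(round(pue(1_550, 1_000), 2))
```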

Costly power

All that power is expensive, too. Among the short list of ways your customers can lower that cost, moving to energy-efficient server CPUs is especially effective.

For example, AMD says that 11 servers based on its 4th gen AMD EPYC processors can use up to 29% less power a year than the 17 servers based on competitive CPUs required to handle the same workload volume. That can also help reduce an organization’s capital expenditures by up to 46%, according to AMD.

As that example shows, CPUs with more cores can also reduce power needs by handling the same workloads with fewer physical servers.

Yes, a high-core CPU typically consumes more power than one with fewer cores, especially when run at the same frequency. But by handling more workload volume, a high-core CPU lets your customer do the same or more work with fewer racks. That can also reduce the real estate footprint and lower the need for cooling.
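Here’s a rough sketch of that consolidation math. The per-server wattages are assumptions chosen for illustration to land near AMD’s published 29% figure; they are not the actual test configuration:

```python
old_servers, new_servers = 17, 11      # competitive CPUs vs. 4th Gen AMD EPYC
old_watts, new_watts = 850, 930        # assumed average draw per server, in watts

old_power = old_servers * old_watts
new_power = new_servers * new_watts
savings = 1 - new_power / old_power
print(f"Fleet power: {old_power:,} W -> {new_power:,} W ({savings:.0%} less)")
```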

Greener tactics

Other tactics can contribute to a greener data center, too.

One approach involves what Supermicro calls a “disaggregated” server architecture. Essentially, this means that a server’s subsystems—including its CPU, memory and storage—can be upgraded without having to replace the entire chassis. For a double benefit, this lowers TCO while reducing E-waste.

Another approach involves designing servers that can share certain resources, such as power supplies and fans. This can lower power needs by up to 10%, Supermicro says.

Yet another approach is designing servers for maximum airflow, another Supermicro feature. This allows the CPU to operate at higher temperatures, reducing the need for air cooling.

It can also lower the load on a server’s fans. That’s a big deal, because a server’s fans can consume up to 15% of its total power.

Supermicro is also designing systems for liquid cooling. This allows a server’s fan to run at a lower speed, reducing its power needs. Liquid cooling can also lower the need for air conditioning, which in turn lowers PUE.

Liquid cooling functions much like a car’s radiator system. It’s basically a closed loop involving an external “chiller” that cools the liquid and a series of pipes. The liquid is pumped through one or more pipes that run over a server’s CPU and GPU. Heat from those components warms the liquid, and the now-hot liquid returns to the chiller to be cooled and recirculated.
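For a feel of the physics, here’s a first-order sizing sketch. The heat load and coolant temperature rise are assumptions for illustration, not Supermicro specifications:

```python
# Heat carried away by the coolant: Q = flow_rate * specific_heat * temperature_rise
heat_load_w = 700        # assumed CPU + GPU heat to remove, in watts
c_p_water = 4186         # specific heat of water, J/(kg*K)
delta_t_k = 10           # assumed coolant temperature rise across the cold plates, K

flow_kg_s = heat_load_w / (c_p_water * delta_t_k)
print(f"Required coolant flow: {flow_kg_s:.3f} kg/s (about {flow_kg_s * 60:.1f} L/min of water)")
```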

Green vendors

Leading suppliers can help you help your customers go green.

AMD, for one, has pledged to deliver a 30x increase in energy efficiency for its processors and accelerators by 2025. That should translate into a 97% reduction in energy use per computation.

Similarly, Supermicro is working hard to help customers create green data centers. The company participates in industry consortia focused on new cooling alternatives and is a leader in the Liquid Cooling Standing Working Group of The Green Grid, a membership organization that fosters energy-efficient data centers.

Supermicro also offers products using its disaggregated rack-scale design approach to offer higher efficiency and lower costs.


What are Your Server Customers Looking For? It Depends on Who They Are

While hyperscalers and enterprises both buy servers powered by the latest CPUs, their purchase decisions are based on very different criteria. Knowing who you’re selling to, and what they’re looking for, can make all the difference.

Think all buyers of servers powered by the latest-generation CPUs are looking for the same thing? Think again.
 
It pays to think of these customers as falling into one of two major groups. On the one hand are the so-called hyperscalers, those large providers of public cloud services. On the other are CIOs and other IT executives at large enterprises who are looking to improve their on-premises data centers. 
 
Customers in both groups are serious buyers of the latest, greatest servers. But their buying criteria? Two very different things.
 
Hyperscalers: TCO, x86, VM
 
When it comes to cutting-edge servers, hyperscalers including Amazon Web Services (AWS), Microsoft Azure and Google Cloud are attracted to the cost advantage.
 
As Mark Papermaster, chief technology officer at AMD, explained in a recent technology conference sponsored by Morgan Stanley, “For the hyperscalers, new server processors are an easy transition. Because they’re massive buyers, hyperscalers see the TCO [total cost of ownership] advantage.”
 
Hyperscalers also like the fact that most if not all new server CPUs still adhere to the x86 family of instruction-set architectures. “For their workloads,” Papermaster said, “it lifts and shifts.”
 
Big hyperscalers are also big implementers of containers and virtual machines, workloads that map efficiently onto today’s high-density CPUs. The higher the CPU density, the more VMs can be supported on a single server.
 
For example, AMD’s 4th gen EPYC processors (formerly code-named Genoa) pack in 96 cores, or 50% more than the previous generation. That kind of density suits hyperscalers well, because they have such extensive inventories of VMs.
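The VM-density argument is simple arithmetic. Here’s a minimal sketch, where the VM size and vCPU-to-core ratio are hypothetical choices for illustration:

```python
cores_per_socket = 96      # 4th Gen AMD EPYC (Genoa)
sockets = 2
vcpus_per_vm = 4           # hypothetical VM shape
vcpus_per_core = 1         # conservative 1:1 mapping, no oversubscription

vms_per_server = (cores_per_socket * sockets * vcpus_per_core) // vcpus_per_vm
print(f"One dual-socket server can host about {vms_per_server} such VMs")
```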
 
Enterprise CIOs: different priorities
 
For CIOs and other enterprise IT executives, server priorities and buying criteria are quite different. These buyers are looking mainly for ease of migration, broad ecosystem support, robust security and energy efficiency (which can also be a component of TCO). 
 
CIOs also need to keep their CFOs and boards happy, so they’re also looking for a clear and easily explainable return on investment (ROI). They may also need to tie this calculation to their organization’s strategic goals. For example, if a company were looking to increase its market share, the CIO might want to explain how purchasing new servers could help achieve that goal. 
 
One relatively new and increasingly important priority is energy efficiency. Enterprises increasingly need to demonstrate their support for “green” initiatives. One way a company can do that is by showing how their computer technology gets more done with less electric power.
 
Also, many data centers are already receiving as much electric power as they’re configured for. In other words, they can’t add power to get more work done. But they can add energy-efficient servers able to get more work done with the same or even less power than the systems they replace.
 
A third group, too
 
During his recent Morgan Stanley presentation, Papermaster of AMD also discussed a third group of server buyers: organizations with hybrid IT environments, both cloud and on-premises, that want the ability to move workloads back and forth. Essentially, this means mimicking the cloud in an on-prem environment.
 
Looking ahead, Papermaster discussed a forthcoming EPYC processor, code-named Bergamo, which he said is “right on track” to ship in this year’s first half. 
 
The new CPU will be aimed at cloud-native applications that need high levels of both throughput and per-socket performance. As previously announced, Bergamo will have up to 128 “Zen 4c” cores, and will come with the same software and security features as Genoa. 
 
“We listen to our customers,” Papermaster said, “and we see where workloads are going.” That’s a good practice for channel partners, too.
 

What is the AMD Instinct MI300A APU?

Accelerate HPC and AI workloads with the combined power of CPU and GPU compute. 


The AMD Instinct MI300A APU, set to ship in this year’s second half, combines the compute power of a CPU with the capabilities of a GPU. Your data-center customers should be interested if they run high-performance computing (HPC) or AI workloads.

More specifically, the AMD Instinct MI300A is an integrated data-center accelerator that combines AMD Zen 4 cores, AMD CDNA3 GPUs and high-bandwidth memory (HBM) chiplets. In all, it has more than 146 billion transistors.

This AMD component uses 3D die stacking to enable extremely high bandwidth among its parts: nine 5nm chiplets are 3D-stacked on top of four 6nm chiplets, with significant HBM surrounding them.

And it’s coming soon. The AMD Instinct MI300A is currently in AMD’s labs. It will soon be sampled with customers. And AMD says it’s scheduled for shipments in the second half of this year. 

‘Most complex chip’

The AMD Instinct MI300A was publicly displayed for the first time earlier this year, when AMD CEO Lisa Su held up a sample of the component during her CES 2023 keynote. “This is actually the most complex chip we’ve ever built,” Su told the audience.

A few tech blogs have gotten their hands on early samples. One of them, Tom’s Hardware, was impressed by the “incredible data throughput” among the Instinct MI300A’s CPU, GPU and memory dies.

The Tom’s Hardware reviewer added that this will let the CPU and GPU work on the same data in memory simultaneously, saving power, boosting performance and simplifying programming.

Another blogger, Karl Freund, a former AMD engineer who now works as a market researcher, wrote in a recent Forbes blog post that the Instinct MI300 is a “monster device” (in a good way). He also congratulated AMD for “leading the entire industry in embracing chiplet-based architectures.”

Previous generation

The new AMD accelerator builds on a previous generation, the AMD Instinct MI200 Series. It’s now used in a variety of systems, including Supermicro’s A+ Server 4124GQ-TNMI. This completely assembled system supports the AMD Instinct MI250 OAM (OCP Acceleration Module) accelerator and AMD Infinity Fabric technology.

The AMD Instinct MI200 accelerators are designed with the company’s 2nd gen AMD CDNA Architecture, which encompasses the AMD Infinity Architecture and Infinity Fabric. Together, they offer an advanced platform for tightly connected GPU systems, empowering workloads to share data fast and efficiently.

The MI200 series offers P2P connectivity with up to 8 intelligent 3rd Gen AMD Infinity Fabric Links with up to 800 GB/sec. of peak total theoretical I/O bandwidth. That’s 2.4x the GPU P2P theoretical bandwidth of the previous generation.
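A quick bit of arithmetic on those published figures (an illustration, not a measurement):

```python
links = 8                  # 3rd Gen AMD Infinity Fabric links
total_peak_gb_s = 800      # peak total theoretical P2P I/O bandwidth, GB/s
print(f"That works out to about {total_peak_gb_s / links:.0f} GB/s per link")
```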

Supercomputing power

The same kind of performance now available to commercial users of the AMD-Supermicro system is also being applied to scientific supercomputers.

The AMD Instinct MI250X accelerator is now used in the U.S. Dept. of Energy’s Frontier supercomputer. That system’s peak performance is rated at 1.6 exaflops—or over a billion billion floating-point operations per second.

The AMD Instinct MI250X accelerator provides Frontier with flexible, high-performance compute engines, high-bandwidth memory, and scalable fabric and communications technologies.

Looking ahead, the AMD Instinct MI300A APU will be used in Frontier’s successor, known as El Capitan. Scheduled for installation late this year, this supercomputer is expected to deliver at least 2 exaflops of peak performance.

 


Learn, Earn and Win with AMD Arena

Channel partners can learn about AMD products and technologies at the AMD Arena site. It’s your site for AMD partner training courses, redeemable points and much more.


Interested in learning more about AMD products while also earning points you can redeem for valuable merch? Then check out the AMD Arena site.

There, you can:

  • Stay current on the latest AMD products with training courses, sales tools, webinars and quizzes;
  • Earn points, unlock levels and secure your place on the leaderboard;
  • Redeem those points for valuable products, experiences and merchandise in the AMD Rewards store.

Registering for AMD Arena is quick, easy and free. Once you’re in, you’ll have an Arena Dashboard as your control center. It’s where you can control your profile, begin a mission, track your progress, and view your collection of badges.

Missions are made of learning objectives that take you through training courses, sales tools, webinars and quizzes. Complete a mission, and you can earn points, badges and chips; unlock levels; and climb the leaderboard.

The more missions you complete, the more rewards you’ll earn. These include points you can redeem for merchandise, experiences and more from the AMD Arena Rewards Store.

Courses galore

Training courses are at the heart of the AMD Arena site. Here are two of the many training courses waiting for you now:

  • AMD EPYC Processor Tool: Leverage the AMD processor-selector and total cost of ownership (TCO) tools to match your customers’ needs with the right AMD EPYC processor.
  • AMD EPYC Processor – Myth Busters: Get help fighting the myths and misconceptions around these powerful CPUs. Then show your data-center customers the way AMD EPYC delivers performance, security and scalability.

Get started

There’s lots more training in AMD Arena, too. The site supports virtually all AMD products across all business segments. So you can learn about both the products you already sell and the new products you’d like to cross-sell in the future.

To learn more, you can take this short training course: Introducing AMD Arena. In just 10 minutes, this course covers how to register for an AMD Arena account, use the Dashboard, complete missions and earn rewards.

Ready to learn, earn and win with AMD Arena? Visit AMD Arena now

 

 


AMD and Supermicro Sponsor Two Fastest Linpack Scores at SC22’s Student Cluster Competition

The Student Cluster Competition made its 16th appearance at the Supercomputing 2022 (SC22) event in Dallas. The two student teams running AMD EPYC™ CPUs and AMD Instinct™ GPUs posted the two fastest scores on the Linpack benchmark, the test used to determine the TOP500 list of the world's most powerful supercomputers.


Last month, the annual Supercomputing Conference 2022 (SC22) was held in Dallas, and with it the Student Cluster Competition (SCC), which began in 2007. The SCC offers an immersive high-performance computing (HPC) experience to undergraduate and high school students.

 

According to the SC22 website: Student teams design and build small clusters, learn scientific applications, apply optimization techniques for their chosen architectures and compete in a non-stop, 48-hour challenge at the SC conference to complete real-world scientific workloads, showing off their HPC knowledge for conference attendees and judges.

 

Each team has six students, including a student team leader, plus at least one faculty advisor, and is associated with vendor sponsors, which provide the equipment. AMD and Supermicro jointly sponsored both the Massachusetts Green Team from MIT, Boston University and Northeastern University and the 2MuchCache team from UC San Diego (UCSD) and the San Diego Supercomputer Center (SDSC). Running AMD EPYC™ CPUs and AMD Instinct™ GPUs supplied by AMD and Supermicro, the two teams came in first and second in the SCC Linpack test.

 

The Linpack benchmarks measure a system's floating-point computing power, according to Wikipedia. The latest version of these benchmarks is used to determine the TOP500 list, which ranks the world's most powerful supercomputers.
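For context on what a Linpack score is measured against, here’s a ballpark peak-FLOPS sketch. The core count, clock speed and FLOPs-per-cycle figures are assumptions for illustration, not either team’s exact hardware:

```python
cores = 128                # e.g., two 64-core CPUs
clock_hz = 2.2e9           # assumed sustained clock speed
flops_per_cycle = 16       # assumed double-precision FLOPs per core per cycle

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"Theoretical CPU peak: {peak_tflops:.1f} TFLOPS; HPL reports how much of that is actually achieved")
```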

 

In addition to chasing high scores on benchmarks, the teams must operate their systems without exceeding a power limit. For 2022, the competition used a variable limit: the power available to each team for its competition hardware was capped as high as 4,000 watts at some points and as low as 1,500 watts at others.

 

The “2MuchCache” team offers a poster page with extensive detail about their competition hardware. They used two third-generation AMD EPYC™ 7773X CPUs, each with 64 cores, 128 threads and 768MB of stacked-die cache. Team 2MuchCache used one AS-4124GQ-TNMI system with four AMD Instinct™ MI250 GPUs with 53 simultaneous threads.

 

The Green Team’s poster page lists two third-generation AMD EPYC™ 7003-series processors and AMD Instinct™ MI210 GPUs connected via AMD Infinity Fabric. The Green Team utilized two Supermicro AS-4124GS-TNR GPU systems.

 

The Students of 2MuchCache:

  • Longtian Bao: Lead for Data Centric Python, Co-lead for HPCG
  • Stefanie Dao: Lead for PHASTA, Co-lead for HPL
  • Michael Granado: Lead for HPCG, Co-lead for PHASTA
  • Yuchen Jing: Lead for IO500, Co-lead for Data Centric Python
  • Davit Margarian: Lead for HPL, Co-lead for LAMMPS
  • Matthew Mikhailov Major: Team Lead, Lead for LAMMPS, Co-lead for IO500

 

The Students of Green Team:

  • Po Hao Chen: Team Leader, Theory & HPC, Benchmarks, Reproducibility
  • Carlton Knox: Computer Architecture, Benchmarks, Hardware
  • Andrew Nguyen: Compilers & OS, GPUs, LAMMPS, Hardware
  • Vance Raiti: Mathematics, Computer Architecture, PHASTA
  • Yida Wang: ML & HPC, Reproducibility
  • Yiran Yin: Mathematics, HPC, PHASTA

 

Congratulations to both teams!
