AMD and Supermicro: Pioneering AI Solutions


Bringing AMD Instinct to the Forefront

In the constantly evolving landscape of AI and machine learning, the synergy between hardware and software is paramount. Enter AMD and Supermicro, two industry titans who have joined forces to empower organizations in the new world of AI with cutting-edge solutions. Their shared vision? To enable organizations to unlock the full potential of AI workloads, from training massive language models to accelerating complex simulations.

The AMD Instinct MI300 Series: Changing the AI Acceleration Paradigm

At the heart of this collaboration lies the AMD Instinct MI300 Series, a family of accelerators designed to redefine performance boundaries. These accelerators combine high-performance AMD EPYC™ 9004 Series CPUs with the powerful AMD Instinct™ MI300X GPU accelerators and 192GB of HBM3 memory, creating a formidable force for AI, HPC and technical computing.

Supermicro’s H13 Generation of GPU Servers

Supermicro’s H13 generation of GPU Servers serves as the canvas for this technological masterpiece. Optimized for leading-edge performance and efficiency, these servers integrate seamlessly with the AMD Instinct MI300 Series. Let’s explore the highlights:

8-GPU Systems for Large-Scale AI Training:

  • Supermicro’s 8-GPU servers, equipped with the AMD Instinct MI300X OAM accelerator, offer raw acceleration power. The AMD Infinity Fabric™ Links enable up to 896GB/s of peak theoretical P2P I/O bandwidth, while 1.5TB of HBM3 GPU memory (eight GPUs at 192GB each) fuels large-scale AI models.
  • These servers are ideal for LLM inference and for training language models with trillions of parameters, minimizing training time and inference latency while lowering TCO and maximizing throughput.

Benchmarking Excellence

But what about real-world performance? Fear not! Supermicro’s ongoing testing and benchmarking efforts have yielded remarkable results. Continued engagement between the AMD and Supermicro performance teams has enabled Supermicro to test pre-release ROCm versions with the latest performance optimizations, as well as publicly released optimizations such as Flash Attention 2 and vLLM. The Supermicro AMD-based system AS-8125GS-TNMR2 showcases AI inference prowess, especially on models like Llama-2 70B, Llama-2 13B and Bloom 176B. The performance? Equal to or better than AMD’s published results from the Dec. 6 Advancing AI event.
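To make the inference side of this concrete, here is a minimal sketch of offline LLM inference with the open-source vLLM library. It is not Supermicro’s benchmark harness; the model name, tensor-parallel degree and sampling settings are illustrative assumptions for an 8-GPU system.

```python
# Minimal vLLM inference sketch (not Supermicro's benchmark setup).
# Assumptions: a Llama-2 70B checkpoint is available locally or via Hugging Face,
# and eight accelerators are visible to the runtime.
from vllm import LLM, SamplingParams

prompts = ["Explain seismic imaging in one paragraph."]
sampling = SamplingParams(temperature=0.7, max_tokens=256)

# tensor_parallel_size shards the model across all eight GPUs in the server.
llm = LLM(model="meta-llama/Llama-2-70b-hf", tensor_parallel_size=8)

for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```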


Charles Liang’s Vision

In the words of Charles Liang, President and CEO of Supermicro:

“We are very excited to expand our rack scale Total IT Solutions for AI training with the latest generation of AMD Instinct accelerators. Our proven architecture allows for fully integrated liquid cooling solutions, giving customers a competitive advantage.”

Conclusion

The AMD-Supermicro partnership isn’t just about hardware and software stacks; it’s about pushing boundaries, accelerating breakthroughs, and shaping the future of AI. So, as we raise our virtual glasses, let’s toast to innovation, collaboration, and the relentless pursuit of performance and excellence.


Where Are Blockchain and Web3 Taking Us? — Part 2: Delving Deeper into Blockchain


This is the second in a four-part series on blockchain’s many facets, including being the primary pillar of the emerging Web3.


Part 1: First There Was Blockchain  |  Part 3: Web3 Emerging  |  Part 4: The Web3 and Blockchain FAQ

To get a sound understanding of blockchain, you should be aware of some of the nagging issues and criticisms. For example, blockchain has no governance. It could really use the guidance of a small representative group of industry visionaries to help it chart a course, but that might lead to a more centralized orientation. You should also familiarize yourself with the related tools and technologies and what they do. NFTs, in particular, work hand in hand with blockchain and add protection for those who create.

 

Getting NFTs

 

It has been effectively open season on digital content on the internet from the get-go. DRM technology didn’t solve the problem. Will the non-fungible token (NFT) make inroads? Its long-term success, or lack thereof, will largely depend on the success of blockchain. Make no mistake, blockchain is here to stay. It’s too useful a tool to leave behind. But Web3’s premise, that blockchain-based servers might someday run the internet, is by no means certain. (Come back for Part 3, which explores Web3.)

 

What are NFTs? “NFTs facilitate non-fraudulent trade for digital asset producers and consumers or collectors,” said Eric Frazier, senior solutions manager, Supermicro.

 

An NFT is a digital asset authentication system located on a blockchain that gives the holder proof of ownership of digital creations. It does this via metadata that make each NFT unique. Plus, no two people can own the same NFT, which also can’t be changed or destroyed.

 

Applications include digital artwork, but an NFT (sometimes called a "nifty") has a wide variety of uses in music, gaming, entertainment, popular-culture items (such as sports merchandise), virtual real estate, prevention of counterfeit products, domain name provenance and more. Down the road, NFTs may have a significant effect on software licensing, intellectual property rights and copyright. Land registries, birth and death certificates, and many other types of records are also potential future beneficiaries of NFTs.

 

If you’re wondering whether NFTs can be traded for cryptocurrency, they can be. What they are not is interchangeable. You may have an NFT for a piece of art that was sold as multiple copies by its owner. But each of those NFTs has unique metadata, so they cannot be exchanged one for the other.
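A toy sketch can make that non-interchangeability concrete. This is not a real NFT standard (actual NFTs live on a blockchain, typically via standards such as ERC-721); the minting function and fields below are hypothetical.

```python
# Hypothetical sketch: each token carries unique metadata, so two tokens for
# the same artwork are still not interchangeable.
import hashlib, json

def mint(artwork_uri: str, edition: int, owner: str) -> dict:
    metadata = {"artwork": artwork_uri, "edition": edition, "owner": owner}
    token_id = hashlib.sha256(
        json.dumps(metadata, sort_keys=True).encode()).hexdigest()
    return {"token_id": token_id, "metadata": metadata}

a = mint("ipfs://artwork-123", edition=1, owner="alice")
b = mint("ipfs://artwork-123", edition=2, owner="bob")
print(a["token_id"] != b["token_id"])  # True: same artwork, distinct tokens
```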

 

Smart Contracts Execute

 

A smart contract is a blockchain-based, self-executing contract containing code that runs automatically when predetermined conditions, as set out in an agreement or transaction, are met. A hypothetical example: on January 15, transfer X value of cryptocurrency in payment for a specific NFT owned by a specific person. Smart contracts are autonomous, trustless, traceable, transparent and irreversible. Key hallmarks of smart contracts are that they exclude intermediaries and third parties such as lawyers and notaries. They also usually use simple language, require fewer steps and involve less paperwork.
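Here is a toy, off-chain simulation of that idea. Real smart contracts run as code on a blockchain (for example, written in Solidity on Ethereum); the function, dates, price and parties below are purely illustrative assumptions.

```python
# Toy simulation of a self-executing agreement: the transfer happens only when
# the predetermined conditions are met, with no intermediary deciding.
from datetime import date

def settle_nft_sale(today: date, due: date, buyer_balance: float, price: float):
    """Release payment and the NFT only when the agreed conditions are met."""
    if today >= due and buyer_balance >= price:
        # Conditions satisfied: deduct payment and hand the NFT to the buyer.
        return buyer_balance - price, "nft transferred to buyer"
    # Conditions not met: nothing changes.
    return buyer_balance, "no transfer"

print(settle_nft_sale(date(2023, 1, 15), date(2023, 1, 15),
                      buyer_balance=2.0, price=1.5))
# (0.5, 'nft transferred to buyer')
```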

 

Blockchain Power Consumption

 

Some blockchains gobble up electricity and are heavy users of compute and storage resources. But blockchains are not all created equal. Bitcoin is known to be resource hungry, while “Filecoin’s needs are materially less,” said Michael Fair, chief revenue officer and longtime channel expert at PiKNiK.

 

It’s also possible to make changes to some blockchains to make them less power hungry. For example, Ethereum switched from the Proof-of-Work (PoW) to the Proof-of-Stake (PoS) algorithm a few months ago, which reduced power consumption by over 99%. However, Ethereum is less decentralized as a result because it is now 80% hosted on AWS. (See the discussion on Understanding Decentralized in Part 1.)

 

“With the algorithm switch from PoW to PoS, Ethereum’s decentralization took a big hit because the majority of transactions and validations are running on Amazon’s cloud,” said Jörg Roskowetz, director of blockchain technology, AMD. “From my point of view, hybrid systems like Lightning on the Bitcoin network will keep all the parameters improving — scalability, latency and power-consumption challenges. This will likely take years to be developed and improved.”

 

Can Web3 Remain Decentralized?

 

Is the blockchain movement viable going forward? There are those who are skeptical, for example Scott Nover, writing in Quartz, and Moxie Marlinspike. Both pieces were published in January 2022, well before the change at Ethereum.

 

Nover writes: “Even if blockchains are decentralized, the Web3 services that interact with them are controlled by a very small number of privately held companies. In fact, the industry emerging to support the decentralized web is highly consolidated, potentially undermining the promise of Web3.”

 

These are real concerns. But no one expected Web3 to exist in a world free of potentially undermining factors, including the consolidation of Web3 blockchain companies as well as some interaction with Web 2.0 companies. If Web3 succeeds, it will need to support a good user experience and be resilient enough to develop additional ways of shielding itself from centralizing influences. It’s not going to exist in a vacuum.

 

 

Other Stories in this Series:

Part 1: First There Was Blockchain

Part 2: Delving Deeper into Blockchain

Part 3: Web3 Emerging

Part 4: The Web3 and Blockchain FAQ

 


Where Are Blockchain and Web3 Taking Us? — Part 1: First There Was Blockchain


This is the first story in a four-part series on blockchain’s many facets, including being the primary pillar of the emerging Web3. 


Part 2: Delving Deeper into Blockchain  |  Part 3: Web3 Emerging  |  Part 4: The Web3 and Blockchain FAQ

There has been a lot of buzz about blockchain over the past five years, and yet seemingly not much movement. Long, long ago I concluded that the amount of truth to the reported value of a new technology was inversely proportional to the din of its hype. But as with so much else about blockchain, it defies conventional wisdom. Blockchain is a bigger deal than is generally realized.

 

Basic Blockchain Definition and Introduction

 

(Source: Wikipedia): Blockchain is a peer-to-peer (P2P), publicly decentralized ledger (a shared, distributed database) that consists of blocks of data bound together with cryptography. Each block contains a cryptographic hash of the previous block, a timestamp and transaction data. Because each block contains information from the previous block, they effectively form a chain, hence the name blockchain.

 

Blockchain transactions resist being altered once they are recorded because the data in any given block cannot be altered retroactively without altering all subsequent blocks that duplicate that data. As a P2P publicly distributed ledger, nodes collectively adhere to a consensus algorithm protocol to add and validate new transaction blocks.
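A few lines of code illustrate why that is. This is a bare-bones sketch of the hash chaining described above, not a real blockchain (there is no network, consensus protocol or proof of work here); the block fields are illustrative.

```python
# Each block stores the hash of the previous block, so changing any earlier
# block invalidates the link recorded by every block after it.
import hashlib, json, time

def make_block(data: str, prev_hash: str) -> dict:
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block("genesis", prev_hash="0" * 64)
block2 = make_block("Alice pays Bob 5", prev_hash=genesis["hash"])

# Tamper with the first block, then recompute its hash from its fields.
genesis["data"] = "genesis (tampered)"
recomputed = hashlib.sha256(json.dumps(
    {k: genesis[k] for k in ("timestamp", "data", "prev_hash")},
    sort_keys=True).encode()).hexdigest()
print(recomputed == block2["prev_hash"])  # False: the tampering is detectable
```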

 

“A blockchain is a system of recording information in a way that makes it difficult or impossible to change, cheat or hack the system,” said Eric Frazier, senior solutions manager, Supermicro. “It is a digital ledger that is duplicated and distributed to a network of multiple nodes on the blockchain.”

 

Michael Fair, PiKNiK’s chief revenue officer and longtime channel expert, added: “In the blockchain, data is immutable. It’s actually sealed within the network, which is monitored by the blockchain 24 x 7 x 365 days a year.”

 

Blockchain was created in 2008 by a person (or group) working under the apparent pseudonym Satoshi Nakamoto. Its original use was to provide a public distributed ledger for the bitcoin cryptocurrency, also created by the same entity. But the true promise of blockchain goes way beyond cryptocurrency. The downside is that blockchain operations are computationally intensive and tend to use lots of power. This issue will be covered in more detail later in the series.

 

Understanding “Decentralized”

 

The term decentralized is probably the most important tenet of Web3 and it is at least partially delivered by blockchain. The word has a specific set of meanings, although it’s become something of a buzzword, which tends to blur its meaning.

 

Gavin Wood is an Ethereum cofounder, the founder of Polkadot and the person who coined the term Web3 in 2014. Based on comments made by Wood in a January 2022 YouTube video from CNBC International, as well as other sources, decentralized means that no one company’s servers exclusively own a crucial part of the internet. There are two related meanings of decentralized that sometimes get confused:

 

1. In its most basic form, decentralized is about keeping data safe from monopolization by using blockchain and other technologies to make data and content independent. Data in a blockchain is copied to servers all over the world, none of which can change that information unilaterally. There’s no one place where this data exists, and that protects it. Blockchain makes it immutable.

 

2. Decentralized also means what Wood called “political decentralization,” wherein “no one will have the power to turn off content,” the way top execs could (in theory) at companies like Google, Facebook, Amazon, Microsoft and Twitter. Decentralization could potentially kick these and other companies out of the “your data” business. A key phrase that relates to this meaning of the term is highly consolidated. How many companies have Google, Amazon, Microsoft and Facebook purchased over the past couple of decades? Google purchased YouTube. Facebook bought Instagram. Microsoft nabbed LinkedIn. But that’s just the tip of the iceberg. Where once there were many companies, now there are a few very large companies exerting control over the internet. That’s what highly consolidated refers to. It’s a term often used to describe the opposite of decentralized.

 

Blockchain Uses

 

Since 2019 or so, new ideas for blockchain applications have arrived fast and furiously. And while many remain plausible theories, others have been put into production. If your company’s sector of the marketplace happens to be one of the areas blockchain has been identified with, chances are good that blockchain is at least on your company’s radar.

 

Many organizations are looking to blockchain to rejuvenate their product pipelines. The future of blockchain will very likely be determined by technocrats and developers who harness it to chase profits. In other words, thousands of enterprises are developing blockchain products and services tailored to their own needs, and if they succeed, many others will likely follow.

 

Beyond supporting cryptocurrency, three early uses of blockchain have been:

  • Financial services
  • Government use of blockchain for voting
  • Helping to keep track of supply chains. There’s a synergy in the way they work that makes blockchain and supply chain ideal for one another.

Blockchain has quickly spread to several areas of financial services like tokenizing assets and fiat currencies, P2P lending backed by assets, decentralized finance (DeFi) and self-enforcing smart contracts to name a few.

 

Blockchain voting could help put a stop to the corruption surrounding elections. Countries like Sierra Leone and Russia were early to it. But several other countries have tried it – including the U.S.

 

In healthcare, a handful of companies are attempting to revolutionize e-records by developing them on blockchain-based decentralized ledgers instead of storing them away in some company’s database. The medical community is also looking at blockchain as a way to store DNA information.

 

Storage systems are an early and important blockchain application. Companies like PiKNiK offer decentralized blockchain storage on a B2B basis.

 

Other Stories in this Series:

Part 1: First There Was Blockchain

Part 2: Delving Deeper into Blockchain

Part 3: Web3 Emerging

Part 4: The Web3 and Blockchain FAQ

 


Locating Where to Drill for Oil in Deep Waters with Supermicro SuperServers® and AMD EPYC™ CPUs


Energy company Petrobras, based in Brazil, is using high-performance computing techniques to aid its oil and gas exploration, especially in deep-water situations. Petrobras used system integrator Atos to provide more than 250 Supermicro SuperServers. The cluster, named Pegaso, is ranked No. 33 on the current Top500 list.


Brazilian energy company Petrobras is using high-performance computing techniques to aid its oil and gas exploration, especially in deep-water situations. These techniques can help reduce costs and make finding and extracting new hydrocarbon deposits quicker. Petrobras' geoscientists and software engineers quickly modify algorithms to take advantage of new capabilities as new CPU and GPU technologies become available.

 

The energy company used system integrator Atos to provide more than 250 Supermicro SuperServer AS-4124GO-NART+ servers running dual AMD EPYC™ 7512 processors. The cluster goes by the name Pegaso (Portuguese for the mythological horse Pegasus) and is currently listed at number 33 on the Top500 list of the fastest computing systems. Atos is a global leader in digital transformation with 112,000 employees worldwide. The company has built other systems that appear on the Top500 list, and AMD powers 38 of them.

 

Petrobras has had three other systems listed on previous iterations of the Top500 list, using other processors. Pegaso is now the largest supercomputer in South America and is expected to become fully operational next month. Each of its servers runs CentOS and has 2TB of memory, for a total of 678TB. The cluster contains more than 230,000 processor cores, runs more than 2,000 GPUs and is connected via an InfiniBand HDR networking system running at 400Gb/s. To give you an idea of how much gear is involved, Pegaso took more than 30 truckloads to deliver and consists of over 30 tons of hardware.

 

The geophysics team has a series of applications that require all this computing power, including seismic acquisition apps whose collected data is then processed to deliver high-resolution subsurface imaging that precisely locates oil and gas deposits. Having GPU accelerators in the cluster helps reduce processing time, so the drilling teams can place their rigs more precisely.

 

For more information, see this case study about Pegaso.


Supermicro H13 Servers Maximize Your High-Performance Data Center



The modern data center must be both highly performant and energy efficient. Massive amounts of data are generated at the edge and then analyzed in the data center. New CPU technologies are constantly being developed that can analyze data, determine the best course of action, and speed up the time to understand the world around us and make better decisions.

With digital transformation continuing, a wide range of data acquisition, storage and computing systems continues to evolve with each CPU generation. The latest CPU generations continue to innovate within their core computational units and in the technology used to communicate with memory, storage devices, networking and accelerators.

Servers, and by extension the CPUs within them, form a continuum of computing and I/O power. The combination of cores, clock rates, memory access, path width and performance makes specific servers suit specific workloads. In addition, the server that houses the CPUs may take different form factors suited to environments with airflow or power restrictions. The key to a server manufacturer's ability to address a wide range of applications is a building-block approach to designing new systems. In this way, a range of systems can be released simultaneously in many form factors, each tailored to its operating environment.

The new H13 Supermicro product line, based on 4th Generation AMD EPYC™ CPUs, supports a broad spectrum of workloads and excels at helping a business achieve its goals.

Get speeds, feeds and other specs on Supermicro’s latest line-up of servers


Manage Your HPC Resources with Supermicro's SuperCloud Composer



Today’s data center has numerous challenges: provisioning hardware and cloud workloads, balancing the needs of performance-intensive applications across compute, storage and network resources, and having a consistent monitoring and analytics framework to feed intelligent systems management. Plus, you may have the need to deploy or re-deploy all these resources as needs shift, moment to moment.

Supermicro has created its own tool to assist with these decisions and to monitor and manage this broad IT portfolio: SuperCloud Composer (SCC). It combines a standardized web-based interface, built on an Open Distributed Infrastructure Management interface, with a unified dashboard based on the Redfish message bus and service agents.

SCC can track the various resources and assign them to different pools with its own predictive analytics and telemetry. It delivers a single intelligent management solution that covers both existing on-premises IT equipment as well as a more software-defined cloud collection. Additional details can be found in this SuperCloud Composer white paper.
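For a sense of the raw data such a dashboard aggregates, here is a hedged sketch of a standard DMTF Redfish query against a server's baseboard management controller (BMC), using the Python requests library. This is not the SuperCloud Composer API itself; the BMC address and credentials are placeholders.

```python
# Query a BMC's standard Redfish service for basic system health and power state.
import requests

BMC = "https://10.0.0.42"     # placeholder BMC address
AUTH = ("admin", "password")   # placeholder credentials

# Enumerate the systems exposed by the BMC, then read each system's status.
# verify=False is for lab use only (self-signed BMC certificates).
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()
for member in systems.get("Members", []):
    system = requests.get(f"{BMC}{member['@odata.id']}",
                          auth=AUTH, verify=False).json()
    print(system.get("Name"), system.get("PowerState"),
          system.get("Status", {}).get("Health"))
```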

SuperCloud Composer makes use of a cluster-level PCIe network using FabreX software from GigaIO Networks. It can flexibly scale storage systems up and out while using the lowest-latency paths available.

It also supports Weka.IO cluster members, which can be deployed across multiple systems simultaneously. See our story The Perfect Combination: The Weka Next-Gen File System, Supermicro A+ Servers and AMD EPYC™ CPUs.

SCC can create automated installation playbooks in Ansible, including a software boot image repository that can quickly deploy new images across the server infrastructure. It has a fast-deploy feature that allows a new image to be deployed within seconds.

SuperCloud Composer offers a robust analytics engine that collects historical and up-to-date analytics stored in an indexed database within its framework. This data can produce a variety of charts, graphs and tables so that users can better visualize what is happening with their server resources. Each end user is provided with analytics-capable charting covering IOPS, network, telemetry, thermal, power, composed-node status, storage allocation and system status.

Last but not least, SCC also has both network provisioning and storage-fabric provisioning features. Build plans are pushed to data or fabric switches as either single-threaded or multithreaded operations, so multiple switches can be updated simultaneously from shared or unique build-plan templates.

For more information, watch this short SCC explainer video. Or schedule an online demo of SCC and request a free 90-day trial of the software.


Supermicro Debuts New H13 Server Solutions Using AMD’s 4th-Gen EPYC™ CPUs



Last week, Supermicro announced its new H13 A+ server solutions, featuring the latest fourth-generation AMD EPYC™ processors. The new AMD “Genoa”-class Supermicro A+ configurations can handle up to 96 Zen 4 CPU cores and up to 6TB of 12-channel DDR5 memory, using a separate channel for each stick of memory.
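As a quick back-of-the-envelope illustration of what 12-channel DDR5 means for throughput, the sketch below computes the peak theoretical memory bandwidth of one socket, assuming DDR5-4800 DIMMs (the speed the EPYC 9004 series launched with); actual sustained bandwidth will be lower.

```python
# Peak theoretical memory bandwidth for one 12-channel DDR5-4800 socket.
transfers_per_sec = 4800e6   # DDR5-4800: 4.8 billion transfers/s per channel
bytes_per_transfer = 8       # 64-bit data bus per channel
channels = 12

peak_gb_s = transfers_per_sec * bytes_per_transfer * channels / 1e9
print(f"{peak_gb_s:.1f} GB/s")  # ~460.8 GB/s theoretical peak per socket
```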

The various systems are designed to support the highest performance-intensive computing workloads over a wide range of storage, networking and I/O configuration options. They also feature tool-less chassis and hot-swappable modules for easier access to internal parts, as well as I/O drive trays on both front and rear panels. All the new equipment can handle a range of power conditions, including 120 to 480 V AC operation and 48 V DC power attachments.

The new H13 systems have been optimized for AI, machine learning and complex calculation tasks for data analytics and other kinds of HPC applications. Supermicro’s 4th-Gen AMD EPYC™ systems employ the latest PCIe 5.0 connectivity throughout their layouts to speed data flows and provide high network and cluster internetworking performance. At the heart of these systems is the AMD EPYC™ 9004 series CPUs, which were also announced last week.

The Supermicro H13 GrandTwin® systems can handle up to six hot-pluggable SATA3 or NVMe drive bays. The H13 CloudDC systems come in 1U and 2U chassis designed for cloud-based workloads and data centers, handle up to 12 hot-swappable drive bays and support Open Compute Platform I/O modules. Supermicro has also announced its H13 Hyper configuration for dual-socket systems. All of the twin-socket server configurations support 160 PCIe 5.0 data lanes.

There are several GPU-intensive configurations in another series of 4U- and 8U-sized servers that can support up to 10 PCIe GPU accelerator cards, including the latest graphics processors from AMD and NVIDIA. The 4U family of servers supports both AMD Infinity Fabric Link and NVIDIA NVLink Bridge technologies, so users can choose the right balance of computation, acceleration, I/O and local storage.

To get a deep dive on H13 products, including speeds, feeds and specs, download this whitepaper from the Supermicro site: Supermicro H13 Servers Enable High-Performance Data Centers.


Perspective: Looking Back on the Rise of Supercomputing



We’ve come a long way in the development of high-performance computing. Back in 2004, I attended an event held in the gym at the University of San Francisco. The goal was to crowdsource computing power by connecting the PCs of volunteers participating in the first “Flash Mob Computing” cluster-computing event. Several hundred PCs were networked together in the hope that they would create one of the largest supercomputers, albeit for a few hours.

 

I brought two laptops for the cause. The participation rules stated that the data on our hard drives would remain intact. Each computer ran a specially crafted boot CD with a benchmark called Linpack, a software library for performing numerical linear algebra, running on Linux. It was used to measure the collective computing power.

 

The event attracted people with water-cooled overclocked PCs, naked PCs (no cases, just the boards and other components) and custom-made rigs with fancy cases. After a few hours, we had roughly 650 PCs on the floor of the gym. Each PC was connected to a bunch of Foundry BigIron super-switches that were located around the room.

 

The 2004 experiment brought out several industry luminaries, such as Gordon Bell, who was the father of the Digital Equipment Corporation VAX minicomputer, and Jim Gray, who was one of the original designers behind the TPC benchmark while he was at Tandem. Both men at the time were Microsoft fellows. Bell was carrying his own laptop but had forgotten to bring his CD drive, so he couldn’t connect to the mob.

 

Network shortcomings

 

What was most interesting to me, and what gave rise to the mob’s eventual undoing, were the networking issues involved with assembling and running such a huge collection of gear. The mob used ordinary 100BaseT Ethernet, which was a double-edged sword. While easy to set up, it was difficult to debug when network problems arose. The Linpack benchmark requires all the component machines to be running concurrently during the test, and the organizers had trouble getting all 600-plus PCs to operate online flawlessly. The best benchmark accomplished was a peak rate of 180 gigaflops using 256 computers, but that wasn’t an official score as one node failed during the test.

 

To give you an idea of where this stood in terms of overall supercomputing prowess, it was better than the Cray supercomputers of the early 1990s, which delivered around 16 gigaflops.

 

At the website top500.org (which tracks the fastest supercomputers around the globe), you can see that all the current top 500 machines are measured in petaflops (1 million gigaflops). The Oak Ridge National Laboratory’s Frontier machine, which has occupied the number one spot this year, weighs in at more than 1,000 petaflops and uses 8 million cores. To make the fastest 500 list back in 2004, the mob would have had to achieve a benchmark of over 600 gigaflops. Because of the networking problems, we’ll never know for sure. Still, it was an impressive achievement, given the motley mix of machines. All of the world’s top 500 supercomputers are custom built and carefully curated and assembled to attain that level of computing performance.

 

Another historical note: back in 2004, one of the more interesting entries came in third on the top500.org list: a collection of several thousand Apple Macintoshes running at Virginia Tech. Back in the present, as you might imagine, almost all of the fastest 500 supercomputers are based on a combination of CPU and GPU chip architectures.

 

Today, you can buy your own supercomputer on the retail market, such as the Supermicro SuperBlade® models. And of course, you can routinely run much faster networking protocols than 100-megabit Ethernet.


Unlocking the Value of the Cloud for Mid-size Enterprises



Organizations around the world are requiring new options for their next-generation computing environments. Mid-size organizations, in particular, face increasing pressure to deliver cost-effective, high-performance solutions within their hyperconverged infrastructures (HCI). A recent collaboration between Supermicro, Microsoft Azure and AMD, leveraging their collective technologies, has created a fresh approach that lets enterprises maintain performance at a lower operational cost while helping to reduce the organization’s carbon footprint in support of sustainability initiatives. This cost-effective 1U system (a 2U version is available) offers power, flexibility and modularity in large-scale GPU deployments.

The results of the collaboration combine the latest technologies, supporting multiple CPU, GPU, storage and networking options optimized to deliver uniquely configured and highly scalable systems. The product can be optimized for SQL and Oracle databases, VDI, productivity applications and database analytics. This white paper explores why this universal GPU architecture is an intriguing and cost-effective option for CTOs and IT administrators who are planning to rapidly implement hybrid cloud, data center modernization, branch office/edge networking or Kubernetes deployments at scale.

Get the 7-page white paper that provides the detail to assess the solution for yourself, including the new Azure Stack HCI certified system, specifications, cost justification and more.

 


Register to Watch Supermicro's Sweeping A+ Launch Event on Nov. 10


Join Supermicro online Nov. 10 to watch the unveiling of the company’s new A+ systems, featuring next-generation AMD EPYC™ processors. They can't tell us any more right now. But you can register for a link to the event by scrolling down and signing up on this page.
