Sponsored by AMD and Supermicro

Performance Intensive Computing

Capture the full potential of IT

Need help turning your customers’ data into actionable insights?


Your customers already have plenty of data. What they need now are insights. Supermicro, AMD and Cloudera are here to help.



Data just sits there, taking up costly storage and real estate. But actionable insights can help your customers strengthen their overall business, improve their business processes, and create new products and services.

Increasingly, these insights are based on data captured at the edge. For example, a retailer might collect customer and sales data using the point-of-sale terminals in its stores.

Supermicro is here to help. Its edge systems, including the latest WIO and short-depth servers powered by AMD processors, have been designed to collect data at the business edge.

These servers are powered by AMD’s EPYC 8004 Series processors. Introduced in September, these CPUs extend the company’s ‘Zen4c’ architecture into lower-core-count processors designed for edge servers and compact form factors.

GrandTwin too

For more insights, tell your customers to check out Supermicro’s GrandTwin servers. They’re powered by AMD EPYC 9004 processors and can run Cloudera Data Flow (CDF), a scalable, real-time streaming analytics platform.

The Supermicro GrandTwin systems provide a multi-node rackmount platform for cloud data centers. They come in a 2U form factor with four nodes, packing more compute into less rack space.

These systems offer AMD’s 4th Gen EPYC 9004 Series of general-purpose processors, which support DDR5-4800 memory and PCIe Gen 5 I/O.

Distributed yet united

If you’re unfamiliar with Cloudera, the company’s approach is based on a simple idea: single clouds are passé. Instead, Cloudera supports a hybrid data platform, one that can be used with any cloud, any analytics and any data.

The company’s idea is that data-management components should be physically distributed, but treated as a cohesive whole with AI and automation.

Cloudera’s CDF solution ingests, curates and analyzes data for key insights and immediate actionable information. That can include issues or defects that need remediating. And AI and machine learning systems can use the data to suggest real-time improvements.

More specifically, CDF delivers flow management, edge management, streams processing, streams management, and streaming analytics.
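Those capabilities can be pictured as stages in a streaming pipeline. Here is a deliberately generic sketch of the ingest, curate and analyze pattern; the function names are illustrative only and are not Cloudera CDF APIs:

```python
# Generic sketch of the ingest -> curate -> analyze streaming pattern.
# Function names are illustrative; they are not Cloudera CDF APIs.
def ingest(events):
    yield from events                    # flow management: accept raw events

def curate(stream):
    for e in stream:                     # drop malformed records
        if "sensor" in e and "value" in e:
            yield e

def analyze(stream, limit=100.0):
    for e in stream:                     # flag readings that need remediating
        if e["value"] > limit:
            yield {"sensor": e["sensor"], "action": "remediate"}

raw = [{"sensor": "s1", "value": 42.0},
       {"bad": True},
       {"sensor": "s2", "value": 130.0}]
alerts = list(analyze(curate(ingest(raw))))
```

In a real deployment each stage would be a managed, scalable service rather than a generator, but the flow of data is the same.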

The upshot: Your customers need actionable insights, not more data. And to get those insights, they can check out the powerful combination of Supermicro servers, AMD processors and Cloudera solutions.


Supermicro celebrates 30 years of business


Supermicro Inc. is celebrating its 30th year of research, development and manufacturing.

At the company, formed in 1993, some things remain the same. Founder Charles Liang remains Supermicro’s president and CEO. And the company is still based in California’s Silicon Valley.

Of course, in 30 years a lot has changed, too. For one, AI is now a critical component. And Supermicro, with help from component makers including AMD, is offering a range of solutions designed with AI in mind. Also, Supermicro has stated its intention to be a leader in the newer field of generative AI.

Another recent change is the industry’s focus on “green computing” and sustainability. Here, too, Supermicro has had a vision. The company’s Green IT initiative helps customers lower data-center TCO, take advantage of recyclable materials, and do more work with lower power requirements.

Another change is just how big Supermicro has grown. Revenue for its most recent fiscal year totaled $7.12 billion, a year-on-year increase of 37%. Looking ahead, Supermicro has told investors it expects an even steeper 47% revenue growth in the current fiscal year, for total revenue of $9.5 billion to $10.5 billion. 

All that growth has also led Supermicro to expand its manufacturing facilities. The company now runs factories in Silicon Valley, Taiwan and the Netherlands, and it has a new facility coming online in Malaysia. All that capacity, the company says, means Supermicro can now deliver more than 4,000 racks a month.

Top voices

Industry leaders are joining the celebration.

“Supermicro has been and continues to be my dream work,” CEO Liang wrote in an open letter commemorating the company’s 30th anniversary.

Looking ahead, Liang writes that the company’s latest initiative, dubbed “Supermicro 4.0,” will focus on AI, energy saving, and time to market.

AMD CEO Lisa Su adds, “AMD and Supermicro have a long-standing history of delivering leadership computing solutions. I am extremely proud of the expansive portfolio of data center, edge and AI solutions we have built together, our leadership high-performance computing solutions and our shared commitment to sustainability.”

Happy 30th anniversary, Supermicro!


Tech Explainer: What is the intelligent edge? Part 2


The intelligent edge has emerged as an essential component of the internet of things. By moving compute and storage close to where data is generated, the intelligent edge provides greater control, flexibility, speed and even security.


The Internet of Things (IoT) is all around us. It’s in the digital fabric of a big city, the brain of a modern factory, the way your smart home can be controlled from a tablet, and even the tech telling your fridge it’s time to order a quart of milk.

As these examples show, IoT is fast becoming a must-have. Organizations and individuals alike turn to the IoT to gain greater control and flexibility over the technologies they regularly use. Increasingly, they’re doing it with the intelligent edge.

The intelligent edge moves command and control from the core to the edge, closer to where today’s smart devices and sensors are actually installed. That’s needed because so many IoT devices and connections are now active, with more coming online every day.

Communicating with millions of connected devices via a few centralized data centers is the old way of doing things. The new method is a vast network of local nodes capable of collecting, processing, analyzing and acting on IoT information as close to its origin as possible.
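That local-first pattern can be sketched in a few lines. The code below is illustrative only (the readings and threshold are made up); it shows an edge node keeping routine data local and forwarding only the outliers to the core:

```python
# Illustrative sketch (not any vendor's API): an edge node keeps the
# routine readings local and forwards only the outliers to the core.
from statistics import mean

def process_locally(readings, threshold=2.0):
    """Return readings that deviate from the local mean by more than
    `threshold`; everything else never leaves the edge."""
    baseline = mean(readings)
    return [r for r in readings if abs(r - baseline) > threshold]

# A temperature sensor streaming mostly steady values plus one spike:
anomalies = process_locally([20.1, 20.3, 19.9, 27.5, 20.0])
```

Only the spike would be sent upstream; the steady-state readings are summarized or discarded at the edge.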

Controlling IoT

To better understand the relationship between IoT and intelligent edge, let’s look at two use cases: manufacturing and gaming.

Modern auto manufacturers like Tesla and Rivian use IoT to control their industrial robots. Each robot is fitted with multiple sensors and actuators. The sensors report their current position and condition, and the actuators control the robot’s movements.

In this application, the intelligent edge acts as a small data center in or near the factory where the robots work. This way, instead of waiting for data to transfer to a faraway data center, factory managers can use the intelligent edge to quickly capture, analyze and process data—and then act just as quickly.

Acting on that data may include performing preventative or reactive maintenance, adjusting schedules to conserve power, or retasking robots based on product configuration changes. 

The benefits of a hyper-localized setup like this can prove invaluable for manufacturers. Using the intelligent edge can save them time, money and labor by speeding both analysis and decision-making.

For manufacturers, the intelligent edge can also add new layers of security. That’s because data is significantly more vulnerable when in transit. Cut the distance the data travels and the use of external networks, and you also eliminate many cybercrime threat vectors.

Gaming is another marquee use case for the intelligent edge. Resource-intensive games such as “Fortnite” and “World of Warcraft” demand high-speed access to the data generated by the game itself and a massive online gaming community of players. With speed at such a high premium, waiting for that data to travel to and from the core isn’t an option.

Instead, the intelligent edge lets game providers collect and process data near their players. The closer proximity lowers latency by limiting the distance the data travels. It also improves reliability. The resulting enhanced data flow makes gameplay faster and more responsive.
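A quick back-of-the-envelope calculation shows why distance matters. Assuming signals travel through fiber at roughly 200,000 km/s (about two-thirds the speed of light in a vacuum), round-trip propagation delay scales directly with distance:

```python
# Propagation-delay estimate. Assumption: signals travel through fiber
# at roughly 200,000 km/s, about two-thirds the speed of light.
FIBER_KM_PER_S = 200_000.0

def round_trip_ms(distance_km):
    """Round-trip propagation delay in milliseconds for a one-way distance."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

core_rtt = round_trip_ms(2000)   # player to a distant core data center
edge_rtt = round_trip_ms(20)     # player to a nearby edge node
```

Even before adding routing and processing overhead, moving the node from 2,000 km to 20 km away cuts raw propagation delay from about 20 ms to about 0.2 ms.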

Tech at the edge

The intelligent edge is sometimes described as a network of localized data centers. That’s true as far as it goes, but it’s not the whole story. In fact, the intelligent edge infrastructure’s size, function and location come with specific technological requirements.

Unlike a traditional data center architecture, the edge is often better served by rugged form factors housing low-cost, high-efficiency components. These components, including the recently released AMD EPYC 8004 Series processors, feature fewer cores, less heat and lower prices.

The AMD EPYC 8004 Series processors share the same 5nm ‘Zen4c’ core complex die (CCD) chiplets and 6nm AMD EPYC I/O Die (IOD) as the more powerful AMD EPYC 9004 Series.

However, the AMD EPYC 8004s offers a more efficiency-minded approach than its data center-focused cousins. Nowhere is this better illustrated than the entry-level AMD EPYC 8042 processor, which provides a scant 8 cores and a thermal design power (TDP) of just 80 watts. AMD says this can potentially save customers thousands of dollars in energy costs over a five-year period.

To deploy the AMD silicon, IT engineers can choose from an array of intelligent edge systems from suppliers, including Supermicro. The selection includes expertly designed form factors for industrial, intelligent retail and smart-city deployments.

High-performance rack mount servers like the Supermicro H13 WIO are designed for enterprise-edge deployments that require data-center-class performance. The capacity to house multiple GPUs and other hardware accelerators makes the Supermicro H13 an excellent choice for deploying AI and machine learning applications at the edge.

The future of the edge

The intelligent edge is another link in a chain of data capture and analysis that gets longer every day. As more individuals and organizations deploy IoT-based solutions, an intelligent edge infrastructure helps them store and mine that information faster and more efficiently.

The insights provided by an intelligent edge can help us improve medical diagnoses, better control equipment, and more accurately predict human behavior.

As the intelligent edge architecture advances, more businesses will be able to deploy solutions that enable them to cut costs and improve customer satisfaction simultaneously. That kind of deal makes the journey to the edge worthwhile.

Part 1 of this two-part blog series on the intelligent edge looked at the broad strokes of this emerging technology and how organizations use it to increase efficiency and reliability. Read Part 1 now.


Tech Explainer: What is the intelligent edge? Part 1


The intelligent edge moves compute, storage and networking capabilities close to end devices, where the data is being generated. Organizations gain the ability to process and act on that data in real time, without having to first transfer it to a centralized data center.


The term intelligent edge refers to remote server infrastructures that can collect, process and act on data autonomously. In effect, it’s a small, remote data center.

Compared with a more traditional data center, the intelligent edge offers one big advantage: It locates compute, storage and networking capabilities close to the organization’s data collection endpoints. This architecture speeds data transactions. It also makes them more secure.

The approach is not entirely new. Deploying an edge infrastructure has long been an effective way to gather data in remote locations. What’s new with an intelligent edge is that you gain the ability to process and act on that data (if necessary) in real time—without having to first transfer that data to the cloud.

The intelligent edge can also save an organization money. Leveraging the intelligent edge makes sense for organizations that spend a sizable chunk of their operating budget moving data from the edge to public or private data centers or cloud infrastructure (often referred to as “the core”). Reducing two-way bandwidth and storage charges helps them control costs.
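A rough cost model illustrates the point. Every figure below (egress rate, data volume, summarization ratio) is an assumption for illustration, not a quoted price:

```python
# Illustrative cost model; every figure here is an assumption, not a
# quoted price. Egress is billed per GB; the edge forwards only a summary.
EGRESS_USD_PER_GB = 0.09     # assumed per-GB transfer rate
RAW_GB_PER_MONTH = 50_000    # assumed data generated at the edge
SUMMARY_FRACTION = 0.05      # assumed share still sent to the core

baseline_cost = RAW_GB_PER_MONTH * EGRESS_USD_PER_GB                 # $4,500
edge_cost = RAW_GB_PER_MONTH * SUMMARY_FRACTION * EGRESS_USD_PER_GB  # $225
monthly_savings = baseline_cost - edge_cost                          # $4,275
```

The exact numbers will vary widely by provider and workload, but the shape of the math is the same: the less raw data crosses the wire, the smaller the bill.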

3 steps to the edge

Today, an intelligent edge typically gets applied in one of three areas:

  • Operational Technology (OT): Hardware and software used to monitor and control industrial equipment, processes and events.
  • Information Technology (IT): Digital infrastructure—including servers, storage, networking and other devices—used to create, process, store, secure and transfer data.
  • Internet of Things (IoT): A network of smart devices that communicate and can be controlled via the internet. Examples include smart speakers, wearables, autonomous vehicles and smart-city infrastructure.

The highly efficient edge

There’s yet another benefit to deploying intelligent edge tech: It can help an organization become more efficient.

One way the intelligent edge does this is by obviating the need to transfer large amounts of data. Instead, data is stored and processed close to where it’s collected.

For example, a smart lightbulb or fridge can communicate with the intelligent edge instead of contacting a data center. Staying in constant contact with the core is unnecessary for devices that don’t change much from minute to minute.

Another way the intelligent edge boosts efficiency is by reducing the time needed to analyze and act on vital information. This, in turn, can lead to enhanced business intelligence that informs and empowers stakeholders. It all gets done faster and more efficiently than with traditional IT architectures and operations.

For instance, imagine that an organization serves a large customer base from several locations. By deploying an intelligent edge infrastructure, the organization could collect and analyze customer data in real time.

Businesses that gain insights from the edge instead of from the core can also respond quickly to market changes. For example, an energy company could analyze power consumption and weather conditions at the edge (down to the neighborhood), then predict whether a power outage is likely.

Similarly, a retailer could use the intelligent edge to support inventory management and analyze customers’ shopping habits. Using that data, the retailer could then offer customized promotions to particular customers, or groups of customers, all in real time.

The intelligent edge can also be used to enhance public infrastructure. For instance, smart cities can gather data that helps inform lighting, public safety, maintenance and other vital services, which could then be used for preventive maintenance or the allocation of city resources and services as needed.

Edge intelligence

As artificial intelligence (AI) becomes increasingly ubiquitous, many organizations are deploying machine learning (ML) models at the edge to help analyze data and deliver insights in real time.

In one use case, running AI and ML systems at the edge can help an organization reduce the service interruptions that often come with transferring large data sets to and from the cloud. The intelligent edge can keep things running locally, giving distant data centers a chance to catch up. This, in turn, can help the organization provide a better experience for the employees and customers who rely on that data.
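That “keep running locally, catch up later” behavior is essentially the store-and-forward pattern. Here is a minimal sketch (not any vendor’s implementation): records queue up at the edge while the core is unreachable, then drain once it is back:

```python
# Minimal store-and-forward sketch (not any vendor's implementation):
# records queue up locally while the core is unreachable, then drain.
from collections import deque

class EdgeBuffer:
    def __init__(self):
        self.pending = deque()

    def submit(self, record, core_up, send):
        """Forward `record` via `send` if the core is up, draining any
        backlog first; otherwise buffer it locally at the edge."""
        if core_up:
            while self.pending:
                send(self.pending.popleft())
            send(record)
        else:
            self.pending.append(record)

sent = []
buf = EdgeBuffer()
buf.submit("a", core_up=False, send=sent.append)   # core down: buffered
buf.submit("b", core_up=False, send=sent.append)   # still down: buffered
buf.submit("c", core_up=True, send=sent.append)    # core back: all three arrive
```

From the user’s point of view nothing stops working; the backlog is reconciled in order once connectivity returns.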

Deploying AI at the edge can also help with privacy, security and compliance issues. Transferring data to and from the core presents an opportunity for hackers to intercept data in transit. Eliminating this data transfer deprives cyber criminals of a threat vector they could otherwise exploit.

Part 2 of this two-part blog series dives deep into the biggest, most popular use of the intelligent edge today—namely, the internet of things (IoT). We also look at the technology that powers the intelligent edge, as well as what the future may hold for this emerging technology.


Supermicro introduces edge, telco servers powered by new AMD EPYC 8004 processors


Supermicro has introduced five Supermicro H13 WIO and short-depth servers powered by the new AMD EPYC 8004 Series processors. These servers are designed for intelligent edge and telco applications.


Supermicro is supporting the new AMD EPYC 8004 Series processors (previously code-named Siena) on five Supermicro H13 WIO and short-depth telco servers. Taking advantage of the new AMD processor, these new single-socket servers are designed for use with intelligent edge and telco applications.

The new AMD EPYC 8004 processors enjoy a broad range of operating temperatures and can run at lower DC power levels, thanks to their energy-efficient ‘Zen4c’ cores. Each processor features from 8 to 64 simultaneous multithreading (SMT) capable ‘Zen4c’ cores.

The new AMD processors also run quietly. With a TDP as low as 80W, the CPUs don’t need much in the way of high-speed cooling fans.

Compact yet capacious

Supermicro’s new 1U short-depth version is designed with I/O in the front and a form factor that’s compact yet still offers enough room for three PCIe 5.0 slots. It also has the option of running on either AC or DC power.

The short-depth systems also feature a NEBS-compliant design for telco operations. NEBS, short for Network Equipment Building System, is an industry requirement for the performance levels of telecom equipment.

The new WIO servers use Titanium power supplies for increased energy efficiency, and Supermicro says that will deliver higher performance/watt for the entire system.

Supermicro WIO systems offer a wide range of I/O options to deliver optimized systems for specific requirements. Users can optimize the storage and networking alternatives to accelerate performance, increase efficiency and find the perfect fit for their applications.

Here are Supermicro’s five new models:

  • AS -1015SV-TNRT: Supermicro H13 WIO system in a 1U format
  • AS -1115SV-TNRT: Supermicro H13 WIO system in a 1U format
  • AS -2015SV-TNRT: Supermicro H13 WIO system in a 2U format
  • AS -1115S-FWTRT: Supermicro H13 telco/edge short-depth system in a 1U format, running on AC power and including system-management features
  • AS -1115S-FDWTRT: Supermicro H13 telco/edge short-depth system in a 1U format, this one running on DC power

Shipments of the new Supermicro servers supporting AMD EPYC 8004 processors start now.


Meet the new AMD EPYC 8004 family of CPUs


The new 4th gen AMD EPYC 8004 family extends the ‘Zen4c’ core architecture into lower-core-count processors with TDPs as low as 80W. The processors are designed especially for edge-server deployments and compact form factors.


AMD has introduced a family of EPYC processors for space- and power-constrained deployments: the 4th Generation AMD EPYC 8004 processor family. Formerly code-named Siena, these lower core-count CPUs can be used in traditional data centers as well as for edge compute, retail point-of-sale and running a telco network.

The new AMD processors have been designed to run at the edge with better energy efficiency and lower operating costs. The CPUs enjoy a broad range of operating temperatures and can run at lower DC power levels, thanks to their energy-efficient ‘Zen4c’ cores. These new CPUs also run quietly. With a TDP as low as 80W, the CPUs don’t need much in the way of high-speed cooling fans.

The AMD EPYC 8004 processors are purpose-built to deliver high performance and energy efficiency in an optimized, single-socket package. They use the new SP6 socket. Each processor features from 8 to 64 simultaneous multithreading (SMT) capable ‘Zen4c’ cores.

AMD says these features, along with a streamlined memory and I/O feature set, let servers based on this new processor family deliver compelling system cost/performance metrics.

Heat-tolerant

The AMD EPYC 8004 family is also designed to run in environments with fluctuating and at times high ambient temperatures. That includes outdoor “smart city” settings and NEBS-compliant communications network sites. (NEBS, short for Network Equipment Building System, is an industry requirement for the performance levels of telecom equipment.) What AMD is calling “NEBS-friendly” models have an operating range of -5 C (23 F) to 85 C (185 F).

The new AMD processors can also run in deployments where both the power levels and available physical space are limited. That can include smaller data centers, retail stores, telco installations, and the intelligent edge.

The performance gains are impressive. Using the SPECpower benchmark, which measures power efficiency, the AMD EPYC 8004 CPUs deliver more than 2x the energy efficiency of the top competitive product for telco. This can result in 34% lower energy costs over five years, saving organizations thousands of dollars.
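As a rough per-CPU illustration of how TDP translates into energy cost, assume a CPU draws its full 80W TDP around the clock and electricity costs $0.15/kWh (both are assumptions for illustration, not AMD figures):

```python
# Rough per-CPU energy estimate. Assumptions (not AMD figures): the CPU
# draws its full 80 W TDP around the clock; electricity costs $0.15/kWh.
TDP_KW = 0.080
USD_PER_KWH = 0.15
HOURS_5_YEARS = 24 * 365 * 5            # 43,800 hours

kwh_5_years = TDP_KW * HOURS_5_YEARS    # 3,504 kWh per CPU
cost_80w = kwh_5_years * USD_PER_KWH    # ~$526 per CPU over five years

# If that bill is 34% lower than the alternative, the alternative costs:
competitor_cost = cost_80w / (1 - 0.34)
savings = competitor_cost - cost_80w    # multiplied across a whole fleet
```

Per CPU the difference looks modest; across racks of servers, plus the matching reduction in cooling load, it adds up to the thousands of dollars AMD cites.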

Multiple models

In all, the AMD EPYC 8004 family currently offers 12 SKUs. Those ending with the letter “P” support single-CPU designs. Those ending in “PN” support NEBS-friendly designs and offer broader operating temperature ranges.

The various models offer a choice of 8, 16, 24, 48 or 64 ‘Zen4c’ cores; from 16 to 128 threads; and L3 cache sizes ranging from 32MB to 128MB. All the SKUs offer 6 channels of DDR memory with a maximum capacity of 1.152TB; a maximum DDR5 frequency of 4800 MHz; and 96 lanes of PCIe Gen 5 connectivity. Security features are offered by AMD Infinity Guard.

Selected AMD partners have already announced support for the new EPYC 8004 family. This includes Supermicro, which introduced new WIO servers based on the new AMD processors for diverse data center and edge deployments.


Tech Explainer: What’s the difference between Machine Learning and Deep Learning? Part 1


What’s the difference between machine learning and deep learning? That’s the subject of this 2-part Tech Explainer. Here, in Part 1, learn more about ML. 


As the names imply, machine learning and deep learning are types of smart software that can learn. Perhaps not the way a human does. But close enough.

What’s the difference between machine and deep learning? That’s the subject of this 2-part Tech Explainer. Here in Part 1, we’ll look in depth at machine learning. Then in Part 2, we’ll look more closely at deep learning.

Both, of course, are subsets of artificial intelligence (AI). To understand their differences, it helps to first understand something of the AI hierarchy.

At the very top is overarching AI technology. It powers both popular generative AI models such as ChatGPT and less famous but equally helpful systems such as the suggestion engine that tells you which show to watch next on Netflix.

Machine learning is a subset of AI. It can perform specific tasks without first needing explicit instructions.

As for deep learning, it’s actually a subset of machine learning. DL is powered by so-called neural networks, multiple node layers that form a system inspired by the structure of the human brain.

Machine learning for smarties

Machine learning is defined as the use and development of computer systems designed to learn and adapt without following explicit instructions.

Instead of requiring human input, ML systems use algorithms and statistical models to analyze and draw inferences from patterns they find in large data sets.

This form of AI is especially good at identifying patterns in structured data. It can then analyze those patterns to make predictions that are usually reliable.

For example, let’s say an organization wants to predict when a particular customer will unsubscribe from its service. The organization could use ML to make an educated guess based on previous data about customer churn.
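Here is a toy version of that churn idea: a hand-rolled logistic regression trained on a synthetic data set. Both the features and the numbers are invented for illustration; a production model would use far more data and a proper ML library:

```python
# Toy churn predictor: a hand-rolled logistic regression trained on
# synthetic data. All numbers are invented for illustration.
import math

# Features: (months of tenure, logins last month) -> 1 means churned.
data = [((2, 1), 1), ((3, 0), 1), ((1, 2), 1), ((4, 1), 1),
        ((24, 30), 0), ((36, 25), 0), ((18, 40), 0), ((30, 22), 0)]

w = [0.0, 0.0]
b = 0.0
for _ in range(2000):                    # plain stochastic gradient descent
    for (x1, x2), y in data:
        p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = p - y                      # gradient of the log loss
        w[0] -= 0.01 * err * x1
        w[1] -= 0.01 * err * x2
        b -= 0.01 * err

def churn_probability(tenure, logins):
    return 1 / (1 + math.exp(-(w[0] * tenure + w[1] * logins + b)))
```

After training, a short-tenure, low-activity customer scores near 1 (likely to churn) while a long-tenure, active customer scores near 0. The principle is the same at scale: learn the pattern from historical churn data, then score current customers.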

The machinery of ML

Like all forms of AI, machine learning uses lots of compute and storage resources. Enterprise-scale ML models are powered by data centers packed to the gills with cutting-edge tech. The most vital of these components are GPUs and AI data-center accelerators.

GPUs, though initially designed to process graphics, have become the preferred tool for AI development. They offer high core counts, sometimes numbering in the thousands, as well as massively parallel processing. That makes them ideally suited to performing vast numbers of simple calculations simultaneously.

As AI gained acceptance, IT managers sought ever more powerful GPUs. The logical conclusion was the advent of new technologies like AMD’s Instinct MI200 Series accelerators. These purpose-built GPUs have been designed to power discoveries in mainstream servers and supercomputers, including some of the largest exascale systems in use today.

AMD’s forthcoming Instinct MI300A will go one step further, combining GPU compute and AMD EPYC CPU cores in a single component. It’s set to ship later this year.

State-of-the-art CPUs are important for ML-optimized systems. The CPUs need as many cores as possible, running at high frequencies to keep the GPU busy. AMD’s EPYC 9004 Series processors excel at this.

In addition, the CPUs need to run the application’s other tasks and threads. When looking at a full system, PCIe 5.0 connectivity and DDR5 memory are important, too.

The GPUs that power AI are often installed in integrated servers that have the capacity to house their constituent components, including processors, flash storage, networking tech and cooling systems.

One such monster server is the Supermicro AS -4125GS-TNRT. It brings together eight direct-attached, double-width, full-length GPUs; up to 6TB of RAM; and two dozen 2.5-inch solid-state drives (SSDs). This server also supports the AMD Instinct MI210 accelerator.

ML vs. DL

The difference between machine learning and deep learning begins with their all-important training methods. ML is trained using four primary methods: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning.

Deep learning, on the other hand, requires more complex training methods. These include convolutional neural networks, recurrent neural networks, generative adversarial networks and autoencoders.

When it comes to performing real-world tasks, ML and DL offer different core competencies. For instance, ML is the type of AI behind the most effective spam filters, like those used by Google and Yahoo. Its ability to adapt to varying conditions allows ML to generate new rules based on previous operations. This functionality helps it keep pace with highly motivated spammers and cybercriminals.

More complex inferencing tasks like medical imaging recognition are powered by deep learning. DL models can capture intricate relationships within medical images, even when those relationships are nonlinear or difficult to define. In other words, deep learning can quickly and accurately identify abnormalities not visible to the human eye.

Up next: a Deep Learning deep dive

In Part 2, we’ll explore more about deep learning. You’ll find out how data scientists develop new models, how various verticals leverage DL, and what the future holds for this emerging technology.


What’s inside Supermicro’s new Petascale storage servers?



Supermicro has introduced a new class of storage servers that support E3.S Gen 5 NVMe drives. These storage servers offer up to 256TB of high-throughput, low-latency storage in a 1U enclosure, and up to half a petabyte in a 2U.

Supermicro has designed these storage servers to be used with large AI training and HPC clusters. Those workloads require that unstructured data, often in extremely large quantities, be delivered quickly to the system’s CPUs and GPUs.

To do this, Supermicro has developed a symmetrical architecture that reduces latency. It does so in 2 ways. One, by ensuring that data travels the shortest possible signal path. And two, by providing the maximum airflow over critical components, allowing them to run as fast and cool as possible.

1U and 2U for you 

Supermicro’s new lineup of optimized storage systems includes 1U servers that support up to 16 hot-swap E3.S drives. An alternate configuration could be up to eight E3.S drives, plus four E3.S 2T 16.8mm bays for CMM and other emerging modular devices.

(CMM is short for Chassis Management Module. These devices provide management and control of the chassis, including basic system health, inventory information and basic recovery operations.)

The E3.S form factor calls for a short and thin NVMe SSD drive that is 76mm high, 112.75mm long, and 7.5mm thick.

In the 2U configuration, Supermicro’s server supports up to 32 hot-swap E3.S drives. A single-processor system, it supports the latest 4th Gen AMD EPYC processors.

Put it all together, and you can have a standard rack that stores up to an impressive 20 petabytes of data for high-throughput NVMe over fabrics (NVMe-oF) configurations.

30TB drives coming

When new 30TB drives become available—a move expected later this year—the new Supermicro storage servers will be able to handle them. Those drives will bring the storage total to 1 petabyte in a compact 2U server.
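The capacity claims are easy to sanity-check. Assuming 30.72TB E3.S drives, decimal units, and a standard 42U rack fully populated with 2U systems:

```python
# Sanity check of the storage-density claims. Assumptions: 30.72 TB
# E3.S drives, decimal units, and a 42U rack filled with 2U systems.
DRIVE_TB = 30.72
DRIVES_PER_2U = 32

tb_per_2u = DRIVE_TB * DRIVES_PER_2U              # ~983 TB, i.e. ~1 PB per 2U
servers_per_rack = 42 // 2                        # 21 two-rack-unit systems
rack_pb = tb_per_2u * servers_per_rack / 1000     # ~20.6 PB per rack
```

That lines up with both figures in the article: roughly 1 petabyte per 2U server, and 20-plus petabytes per rack.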

Two storage-drive vendors working closely with Supermicro are Kioxia America and Solidigm, both of which make E3.S solid-state drives (SSDs). Kioxia has announced a 30.72TB SSD called the Kioxia CD8P Series. And Solidigm says its D5-P5336 SSD will ship in an E3.S form factor with up to 30.72TB in the first half of 2024.

The new Supermicro Petascale storage servers are shipping now in volume worldwide.

Learn more about the Supermicro E3.S Petascale All-Flash NVMe Storage Systems.

 


Can liquid-cooled servers help your customers?


Liquid cooling can offer big advantages over air cooling. According to a new Supermicro solution guide, these benefits include up to 92% lower electricity costs for a server’s cooling infrastructure, and up to 51% lower electricity costs for an entire data center.


The previous thinking was that liquid cooling was only for supercomputers and high-end gaming PCs. No more.

Today, many large-scale cloud, HPC, analytics and AI servers combine CPUs and GPUs in a single enclosure, generating a lot of heat. Liquid cooling can carry that heat away, often at lower cost and with greater efficiency than air cooling.

According to a new Supermicro solution guide, liquid’s advantages over air cooling include:

  • Up to 92% lower electricity costs for a server’s cooling infrastructure
  • Up to 51% lower electricity costs for the entire data center
  • Up to 55% less data center server noise
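To make those percentages concrete, here is a minimal sketch. The dollar baselines are hypothetical placeholders; only the percentage reductions come from the solution guide:

```python
# Illustrative electricity savings using the solution guide's percentages.
# The $100,000 annual baselines below are hypothetical placeholders.
COOLING_INFRA_SAVINGS = 0.92  # up to 92% lower server-cooling electricity cost
DATA_CENTER_SAVINGS = 0.51    # up to 51% lower data-center electricity cost

baseline_cooling = 100_000  # hypothetical annual cooling electricity spend ($)
baseline_dc = 100_000       # hypothetical annual data-center electricity spend ($)

cooling_after = baseline_cooling * (1 - COOLING_INFRA_SAVINGS)  # ~$8,000
dc_after = baseline_dc * (1 - DATA_CENTER_SAVINGS)              # ~$49,000

print(f"Cooling infrastructure spend: ${cooling_after:,.0f}")
print(f"Data-center spend:            ${dc_after:,.0f}")
```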

What’s more, the latest liquid cooling systems are turnkey solutions that support the highest GPU and CPU densities. They’re also fully validated and tested by Supermicro under demanding workloads that stress the server. And unlike some other components, they’re ready to ship to you and your customers quickly, often in mere weeks.

What are the liquid-cooling components?

Liquid cooling starts with a cooling distribution unit (CDU). It incorporates two modules: a pump that circulates the liquid coolant, and a power supply.

Liquid coolant travels from the CDU through flexible hoses to the cooling system’s next major component, the coolant distribution manifold (CDM). It’s a unit with distribution hoses to each of the servers.

There are two types of CDMs. A vertical manifold is placed on the rear of the rack, is directly connected via hoses to the CDU, and delivers coolant to another important component, the cold plates. The second type, a horizontal manifold, is placed on the front of the rack between two servers; it’s used with systems that have inlet hoses on the front.

The cold plates, mentioned above, are placed on top of the CPUs and GPUs in place of their typical heat sinks. With coolant flowing through their channels, they keep these components cool.

Two valuable CDU features are offered by Supermicro. First, the company’s CDU has a cooling capacity of 100kW, which enables very high rack compute densities. Second, Supermicro’s CDU features a touchscreen for monitoring and controlling the rack operation via a web interface. It’s also integrated with the company’s Super Cloud Composer data-center management software.
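Because the CDU’s 100kW rating bounds how much server heat one rack can reject, a quick feasibility check is possible. The per-server wattages below are hypothetical examples, not Supermicro figures:

```python
# Sanity check: does a rack's total heat load fit within the CDU's
# 100 kW cooling capacity? (Capacity figure from the text; the server
# wattages used below are hypothetical examples.)
CDU_CAPACITY_W = 100_000

def rack_fits_cdu(server_heat_w: float, servers: int) -> bool:
    """Return True if the rack's total heat load is within CDU capacity."""
    return server_heat_w * servers <= CDU_CAPACITY_W

print(rack_fits_cdu(4_000, 20))  # twenty 4 kW servers -> 80 kW, fits
print(rack_fits_cdu(6_000, 20))  # 120 kW exceeds 100 kW -> does not fit
```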

What does it work on?

Supermicro offers several liquid-cooling configurations to support different numbers of servers in different size racks.

Among the Supermicro servers available for liquid cooling are the company’s GPU systems, which can combine up to eight Nvidia GPUs with AMD EPYC 9004 Series CPUs. Direct-to-chip (D2C) coolers are mounted on each processor, then routed through the manifolds to the CDU.

D2C cooling is also a feature of the Supermicro SuperBlade. This system supports up to 20 blade servers, which can be powered by the latest AMD EPYC CPUs in an 8U chassis. In addition, the Supermicro Liquid Cooling solution is ideal for high-end AI servers such as the company’s 8-GPU 8125GS-TNHR.

To manage it all, Supermicro also offers its SuperCloud Composer’s Liquid Cooling Consult Module (LCCM). This tool collects information on the physical assets and sensor data from the CDU, including pressure, humidity, and pump and valve status.

This data is presented in real time, enabling users to monitor the operating efficiency of their liquid-cooled racks. Users can also employ SuperCloud Composer to set up alerts, manage firmware updates, and more.
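SuperCloud Composer’s actual interfaces aren’t documented here; as a generic illustration of the kind of threshold alerting the LCCM’s sensor data enables, consider this sketch, in which every field name and limit is hypothetical:

```python
# Generic threshold-alert sketch over CDU telemetry (pressure, pump
# status, humidity). All field names and limits are hypothetical
# illustrations, not the SuperCloud Composer API.
def check_cdu_alerts(reading: dict) -> list[str]:
    """Return alert messages for out-of-range CDU sensor readings."""
    alerts = []
    if reading.get("pressure_kpa", 0) > 300:  # hypothetical upper limit
        alerts.append("coolant pressure high")
    if reading.get("pump_status") != "running":
        alerts.append("pump not running")
    if reading.get("humidity_pct", 0) > 60:   # hypothetical upper limit
        alerts.append("humidity high")
    return alerts

sample = {"pressure_kpa": 310, "pump_status": "running", "humidity_pct": 45}
print(check_cdu_alerts(sample))  # ['coolant pressure high']
```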


Meet Supermicro’s Petascale Storage, a compact rackmount system powered by the latest AMD EPYC processors

Supermicro’s H13 Petascale Storage System is a compact 1U rackmount system powered by the AMD EPYC 97X4 processor (formerly codenamed Bergamo) with up to 128 cores.

Your customers can now implement Supermicro Petascale Storage, an all-flash NVMe storage system powered by the latest 4th Gen AMD EPYC 9004 Series processors.

The Supermicro system has been specifically designed for AI, HPC, private and hybrid cloud, in-memory computing and software-defined storage.

Now Supermicro is offering the first of these systems. It's the Supermicro H13 Petascale Storage System. This compact 1U rackmount system is powered by an AMD EPYC 97X4 processor (formerly codenamed Bergamo) with up to 128 cores.

For organizations with data-storage requirements approaching petascale capacity, the Supermicro system was designed with a new chassis and motherboard that support a single AMD EPYC processor, 24 DIMM slots for up to 6TB of main memory, and 16 hot-swap E3.S slots. E3.S is part of the Enterprise and Datacenter Standard Form Factor (EDSFF), the E3 family of SSD form factors designed for specific use cases. E3.S drives are short and thin: 7.5mm-thick storage media with a power envelope of up to 25W and a PCIe 5.0 interface.

The Supermicro Petascale Storage system can deliver more than 200 GB/sec of bandwidth and over 25 million input/output operations per second (IOPS) from a half-petabyte of storage.
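Spread evenly across the 16 drive slots (an idealized assumption; real workloads won’t divide perfectly), those headline numbers imply roughly what each PCIe 5.0 E3.S SSD must contribute:

```python
# Rough per-drive contribution behind the 1U system's headline numbers,
# assuming the load divides evenly across all 16 E3.S drives.
TOTAL_BW_GBPS = 200      # > 200 GB/sec system bandwidth
TOTAL_IOPS = 25_000_000  # > 25 million IOPS
DRIVES = 16              # hot-swap E3.S slots in the 1U chassis

bw_per_drive = TOTAL_BW_GBPS / DRIVES  # 12.5 GB/sec per drive
iops_per_drive = TOTAL_IOPS / DRIVES   # 1,562,500 IOPS per drive

print(f"Per drive: ~{bw_per_drive:.1f} GB/s, ~{iops_per_drive / 1e6:.2f}M IOPS")
```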

Here's why 

Why might your customers need such a storage system? Several reasons, depending on what sorts of workloads they run:

  • Training AI/ML applications requires massive amounts of data to create reliable models.
  • HPC projects use and generate immense amounts of data, too. That's needed for real-world simulations, such as predicting the weather or simulating a car crash.
  • Big-data environments need substantial datasets. These gain intelligence from real-world observations ranging from sensor inputs to business transactions.
  • Enterprise applications need to locate large amounts of data close to compute resources, accessed at NVMe over Fabrics (NVMe-oF) speeds.

Also, the Supermicro H13 Petascale Storage System offers significant performance, capacity, throughput and endurance, all while maintaining excellent power efficiency.
