Performance Intensive Computing

Capture the full potential of IT

Tech Explainer: What is the intelligent edge? Part 1


The intelligent edge moves compute, storage and networking capabilities close to end devices, where the data is being generated. Organizations gain the ability to process and act on that data in real time, without having to first transfer it to a centralized data center.


The term intelligent edge refers to remote server infrastructures that can collect, process and act on data autonomously. In effect, it’s a small, remote data center.

Compared with a more traditional data center, the intelligent edge offers one big advantage: It locates compute, storage and networking capabilities close to the organization’s data collection endpoints. This architecture speeds data transactions. It also makes them more secure.

The approach is not entirely new. Deploying an edge infrastructure has long been an effective way to gather data in remote locations. What’s new with an intelligent edge is that you gain the ability to process and act on that data (if necessary) in real time—without having to first transfer that data to the cloud.

The intelligent edge can also save an organization money. It makes particular sense for organizations that spend a sizable chunk of their operating budget transferring data from the edge to public or private data centers, including cloud infrastructure (often referred to as “the core”). Trimming bandwidth charges in both directions, along with storage charges, helps them control costs.

3 steps to the edge

Today, an intelligent edge typically gets applied in one of three areas:

  • Operational Technology (OT): Hardware and software used to monitor and control industrial equipment, processes and events.
  • Information Technology (IT): Digital infrastructure—including servers, storage, networking and other devices—used to create, process, store, secure and transfer data.
  • Internet of Things (IoT): A network of smart devices that communicate and can be controlled via the internet. Examples include smart speakers, wearables, autonomous vehicles and smart-city infrastructure.

The highly efficient edge

There’s yet another benefit to deploying intelligent edge tech: It can help an organization become more efficient.

One way the intelligent edge does this is by obviating the need to transfer large amounts of data. Instead, data is stored and processed close to where it’s collected.

For example, a smart lightbulb or fridge can communicate with the intelligent edge instead of contacting a data center. Staying in constant contact with the core is unnecessary for devices that don’t change much from minute to minute.
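This local-first pattern can be sketched in a few lines of Python. (A hypothetical illustration only: the device names, threshold and forwarding function are invented for this sketch, not part of any real product.)

```python
# Hypothetical sketch: an edge node processes device readings locally
# and forwards only significant changes to the central data center.
def forward_to_core(reading):
    """Stand-in for the (rare) transfer back to the core."""
    print(f"sent to core: {reading}")

class EdgeNode:
    def __init__(self, threshold=0.05):
        self.threshold = threshold  # minimum relative change worth forwarding
        self.last_sent = {}         # device_id -> last value sent to the core

    def ingest(self, device_id, value):
        """Store and process locally; contact the core only on meaningful change."""
        last = self.last_sent.get(device_id)
        if last is None or abs(value - last) > self.threshold * abs(last):
            self.last_sent[device_id] = value
            forward_to_core((device_id, value))
            return True   # data left the edge
        return False      # handled entirely at the edge

node = EdgeNode()
node.ingest("bulb-1", 60.0)   # first reading: forwarded
node.ingest("bulb-1", 60.1)   # tiny change: stays at the edge
```

A smart bulb that barely changes from minute to minute generates almost no core traffic under this scheme, which is the bandwidth saving the article describes.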

Another way the intelligent edge boosts efficiency is by reducing the time needed to analyze and act on vital information. This, in turn, can lead to enhanced business intelligence that informs and empowers stakeholders. It all gets done faster and more efficiently than with traditional IT architectures and operations.

For instance, imagine that an organization serves a large customer base from several locations. By deploying an intelligent edge infrastructure, the organization could collect and analyze customer data in real time.

Businesses that gain insights from the edge instead of from the core can also respond quickly to market changes. For example, an energy company could analyze power consumption and weather conditions at the edge (down to the neighborhood), then predict whether a power outage is likely.

Similarly, a retailer could use the intelligent edge to support inventory management and analyze customers’ shopping habits. Using that data, the retailer could then offer customized promotions to particular customers, or groups of customers, all in real time.

The intelligent edge can also be used to enhance public infrastructure. For instance, smart cities can gather data that helps inform lighting, public safety, maintenance and other vital services. That data can then drive preventive maintenance and the allocation of city resources and services as needed.

Edge intelligence

As artificial intelligence (AI) becomes increasingly ubiquitous, many organizations are deploying machine learning (ML) models at the edge to help analyze data and deliver insights in real time.

In one use case, running AI and ML systems at the edge can help an organization reduce the service interruptions that often come with transferring large data sets to and from the cloud. The intelligent edge keeps things running locally, giving distant data centers a chance to catch up. This, in turn, can help the organization provide a better experience for the employees and customers who rely on that data.

Deploying AI at the edge can also help with privacy, security and compliance issues. Transferring data to and from the core presents an opportunity for hackers to intercept data in transit. Eliminating this data transfer deprives cyber criminals of a threat vector they could otherwise exploit.

Part 2 of this two-part blog series dives deep into the biggest, most popular use of the intelligent edge today—namely, the internet of things (IoT). We also look at the technology that powers the intelligent edge, as well as what the future may hold for this emerging technology.


Featured videos


Events


Find AMD & Supermicro Elsewhere

Supermicro introduces edge, telco servers powered by new AMD EPYC 8004 processors


Supermicro has introduced five Supermicro H13 WIO and short-depth servers powered by the new AMD EPYC 8004 Series processors. These servers are designed for intelligent edge and telco applications.


Supermicro is supporting the new AMD EPYC 8004 Series processors (previously code-named Siena) on five Supermicro H13 WIO and short-depth telco servers. Taking advantage of the new AMD processor, these new single-socket servers are designed for use with intelligent edge and telco applications.

The new AMD EPYC 8004 processors enjoy a broad range of operating temperatures and can run at lower DC power levels, thanks to their energy-efficient ‘Zen4c’ cores. Each processor features from 8 to 64 simultaneous multithreading (SMT) capable ‘Zen4c’ cores.

The new AMD processors also run quietly. With a TDP as low as 80W, the CPUs don’t need much in the way of high-speed cooling fans.

Compact yet capacious

Supermicro’s new 1U short-depth version is designed with I/O in the front and a form factor that’s compact yet still offers enough room for three PCIe 5.0 slots. It also has the option of running on either AC or DC power.

The short-depth systems also feature a NEBS-compliant design for telco operations. NEBS, short for Network Equipment Building System, is an industry requirement for the performance levels of telecom equipment.

The new WIO servers use Titanium power supplies for increased energy efficiency, and Supermicro says that will deliver higher performance/watt for the entire system.

Supermicro WIO systems offer a wide range of I/O options to deliver optimized systems for specific requirements. Users can optimize the storage and networking alternatives to accelerate performance, increase efficiency and find the perfect fit for their applications.

Here are Supermicro’s five new models:

  • AS -1015SV-TNRT: Supermicro H13 WIO system in a 1U format
  • AS -1115SV-TNRT: Supermicro H13 WIO system in a 1U format
  • AS -2015SV-TNRT: Supermicro H13 WIO system in a 2U format
  • AS -1115S-FWTRT: Supermicro H13 telco/edge short-depth system in a 1U format, running on AC power and including system-management features
  • AS -1115S-FDWTRT: Supermicro H13 telco/edge short-depth system in a 1U format, this one running on DC power

Shipments of the new Supermicro servers supporting AMD EPYC 8004 processors start now.



Meet the new AMD EPYC 8004 family of CPUs


The new 4th gen AMD EPYC 8004 family extends the ‘Zen4c’ core architecture into lower-count processors with TDP ranges as low as 80W. The processors are designed especially for edge-server deployments and form factors.


AMD has introduced a family of EPYC processors for space- and power-constrained deployments: the 4th Generation AMD EPYC 8004 processor family. Formerly code-named Siena, these lower core-count CPUs can be used in traditional data centers as well as for edge compute, retail point-of-sale and running a telco network.

The new AMD processors have been designed to run at the edge with better energy efficiency and lower operating costs. The CPUs enjoy a broad range of operating temperatures and can run at lower DC power levels, thanks to their energy-efficient ‘Zen4c’ cores. These new CPUs also run quietly. With a TDP as low as 80W, the CPUs don’t need much in the way of high-speed cooling fans.

The AMD EPYC 8004 processors are purpose-built to deliver high performance and are energy-efficient in an optimized, single-socket package. They use the new SP6 socket. Each processor features from 8 to 64 simultaneous multithreading (SMT) capable ‘Zen4c’ cores.

AMD says these features, along with a streamlined memory and I/O feature set, let servers based on this new processor family deliver compelling system cost/performance metrics.

Heat-tolerant

The AMD EPYC 8004 family is also designed to run in environments with fluctuating and at times high ambient temperatures. That includes outdoor “smart city” settings and NEBS-compliant communications network sites. (NEBS, short for Network Equipment Building System, is an industry requirement for the performance levels of telecom equipment.) What AMD is calling “NEBS-friendly” models have an operating range of -5 C (23 F) to 85 C (185 F).

The new AMD processors can also run in deployments where both the power levels and available physical space are limited. That can include smaller data centers, retail stores, telco installations, and the intelligent edge.

The performance gains are impressive. Using the SPECpower benchmark, which measures power efficiency, the AMD EPYC 8004 CPUs deliver more than 2x the energy efficiency of the top competing product for telco workloads. AMD says this can result in 34% lower energy costs over five years, saving organizations thousands of dollars.

Multiple models

In all, the AMD EPYC 8004 family currently offers 12 SKUs. Those ending with the letter “P” support single-CPU designs. Those ending in “PN” support NEBS-friendly designs and offer broader operating temperature ranges.

The various models offer a choice of 8, 16, 24, 48 or 64 ‘Zen4c’ cores; from 16 to 128 threads; and L3 cache sizes ranging from 32MB to 128MB. All the SKUs offer 6 channels of DDR5 memory with a maximum capacity of 1.152TB; a maximum DDR5 speed of 4800 MT/s; and 96 lanes of PCIe Gen 5 connectivity. Security features are provided by AMD Infinity Guard.
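The suffix convention above can be decoded mechanically. Here's a small sketch (the model number used in the example is illustrative, not a catalog reference):

```python
def epyc_8004_variant(sku: str) -> str:
    """Decode the AMD EPYC 8004 SKU suffix convention described above:
    'PN' = single-CPU design, NEBS-friendly (wider operating temperatures);
    'P'  = single-CPU design."""
    if sku.endswith("PN"):
        return "single-socket, NEBS-friendly"
    if sku.endswith("P"):
        return "single-socket"
    return "unknown suffix"

# Illustrative model numbers only
print(epyc_8004_variant("8324P"))    # single-socket
print(epyc_8004_variant("8324PN"))   # single-socket, NEBS-friendly
```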

Selected AMD partners have already announced support for the new EPYC 8004 family. This includes Supermicro, which introduced new WIO servers based on the new AMD processors for diverse data center and edge deployments.



Research Roundup: AI chip sales, AI data centers, sustainability services, manufacturing clouds, tech-savvy or not



Sales of AI semiconductors are poised for big growth. AI is transforming the data center. Sustainability services are hot. Manufacturers are saving big money with cloud. And Americans are surprisingly lacking in tech understanding.

That’s some of the latest IT market research. And here’s your Performance Intensive Computing roundup.

AI chip sales to rise 21% this year

Sales of semiconductors designed to execute AI workloads will rise this calendar year by 20.9% over last year, reaching a worldwide total of $53.4 billion, predicts research firm Gartner.

Looking further ahead, Gartner expects worldwide sales of AI chips in 2024 to reach $67.1 billion, a 25% increase over the projected figure for this year.

And by 2027, Gartner forecasts, those sales will top $119 billion, or more than double this year’s market size.

What’s behind the rapid rise? Two main factors, says Gartner: Generative AI, and the spread of AI-based applications in data centers, edge infrastructure and endpoint devices.

AI transforming data centers

Generative AI is transforming the data center, says Lucas Beran, a market analyst with Dell’Oro Group. Last month, his research group predicted that AI infrastructure spending will propel data center capex to over a half-trillion dollars by 2027, a compound annual growth rate of 15%. (That figure is larger than Gartner’s because it includes more than just chips.) Now Dell’Oro says AI is ushering in a new era for data center physical infrastructure.

Here’s some of what Beran of Dell’Oro expects:

  • Due to the substantial power consumption of AI systems, end users will adopt intelligent rack power distribution units (PDUs) that can remotely monitor and manage power consumption and environmental factors.
  • Liquid cooling will come into its own. Some users will retrofit existing cooling systems with closed-loop assisted liquid cooling systems. These use liquid to capture heat generated inside the rack or server, then blow it into a hot aisle. By 2025, global sales of liquid cooling systems will approach $2 billion.
  • A lack of power availability could slow AI adoption. Data centers need more energy than utilities can supply. One possible solution: BYOP – bring your own power.

Sustainability services: $65B by 2027

Speaking of power and liquid cooling, a new forecast from market researcher IDC has total sales of environmental, social and governance (ESG) services rising from $37.7 billion this year to nearly $65 billion by 2027, for a compound annual growth rate (CAGR) of nearly 15%.

For its forecast, IDC looked at ESG services that include consulting, implementation, engineering and IT services.

These services include ESG strategy development and implementation, sustainable operations consulting, reporting services, circularity consulting, green IT implementation services, and managed sustainability performance services. What they all share is the common goal of driving sustainability-related outcomes.

Last year, nearly two-thirds of respondents surveyed by IDC said they planned to allocate more than half their professional-services spending to sustainability services. Looking ahead, IDC expects that share to rise to 60% by 2027.

"Pressure for [ESG] change is more prescient than ever,” says IDC research analyst Dan Versace. “Businesses that fail to act face risk to their brand image, financial performance, and even their infrastructure due to the ever-present threat of extreme weather events and resource shortages caused by climate change.”

Manufacturers finally see the cloud

For manufacturers, IT is especially complicated. Unlike banks and other purely digital businesses, manufacturers have to tie IT systems and networks to physical plants and supply chains.

That’s one reason why manufacturers have been comparatively slow to adopt cloud computing. Now that’s changing, in part because manufacturers that switch to cloud-based systems can enjoy up to 60% reductions in overhead costs related to data storage, according to a new report from ABI Research industry analyst James Iversen.

Iversen predicts that industrial cloud platform revenue in manufacturing will enjoy a nearly 23% CAGR for the coming decade.

Another benefit for manufacturers: The cloud can eliminate the data fragmentation common with external data warehouses. “Cloud manufacturing providers are eliminating these concerns by interconnecting applications bi-directionally,” Iversen says, “leading to sharing and communication between applications and their data.”

How tech-savvy are your customers?

If they’re like most Americans, not very.

A Pew Research Center poll of about 5,100 U.S. adults, conducted this past spring and just made public, found that fewer than a third (32%) knew that large language models such as ChatGPT produce answers from data already published on the internet.

Similarly, only about one in five (21%) knew that U.S. websites are prohibited from collecting data on minors under the age of 13.

Fewer than half of those polled (42%) knew what a deepfake is. And only a similar minority (48%) could identify an example of two-factor authentication.

What tech info do they know? Well, 80% of respondents correctly identified Elon Musk as the boss of Tesla and Twitter (now X). And nearly as many (77%) knew that Facebook had changed its name to Meta.

 


Can liquid-cooled servers help your customers?


Liquid cooling can offer big advantages over air cooling. According to a new Supermicro solution guide, these benefits include up to 92% lower electricity costs for a server’s cooling infrastructure, and up to 51% lower electricity costs for an entire data center.


Liquid cooling was once thought to be only for supercomputers and high-end gaming PCs. No more.

Today, many large-scale cloud, HPC, analytics and AI servers combine CPUs and GPUs in a single enclosure, generating a lot of heat. Liquid cooling can carry away that heat, often at lower overall cost and with greater efficiency than air.

According to a new Supermicro solution guide, liquid’s advantages over air cooling include:

  • Up to 92% lower electricity costs for a server’s cooling infrastructure
  • Up to 51% lower electricity costs for the entire data center
  • Up to 55% less data center server noise

What’s more, the latest liquid cooling systems are turnkey solutions that support the highest GPU and CPU densities. They’re also fully validated and tested by Supermicro under demanding workloads that stress the server. And unlike some other components, they’re ready to ship to you and your customers quickly, often in mere weeks.

What are the liquid-cooling components?

Liquid cooling starts with a cooling distribution unit (CDU). It incorporates two modules: a pump that circulates the liquid coolant, and a power supply.

Liquid coolant travels from the CDU through flexible hoses to the cooling system’s next major component, the coolant distribution manifold (CDM). It’s a unit with distribution hoses to each of the servers.

There are 2 types of CDMs. A vertical manifold is placed on the rear of the rack, is directly connected via hoses to the CDU, and delivers coolant to another important component, the cold plates. The second type, a horizontal manifold, is placed on the front of the rack, between two servers; it’s used with systems that have inlet hoses on the front.

The cold plates, mentioned above, are placed on top of the CPUs and GPUs in place of their typical heat sinks. With coolant flowing through their channels, they keep these components cool.

Two valuable CDU features are offered by Supermicro. First, the company’s CDU has a cooling capacity of 100kW, which enables very high rack compute densities. Second, Supermicro’s CDU features a touchscreen for monitoring and controlling the rack operation via a web interface. It’s also integrated with the company’s Super Cloud Composer data-center management software.

What does it work on?

Supermicro offers several liquid-cooling configurations to support different numbers of servers in different size racks.

Among the Supermicro servers available for liquid cooling are the company’s GPU systems, which can combine up to eight Nvidia GPUs and AMD EPYC 9004 series CPUs. Direct-to-chip (D2C) coolers are mounted on each processor, then routed through the manifolds to the CDU.

D2C cooling is also a feature of the Supermicro SuperBlade. This system supports up to 20 blade servers, which can be powered by the latest AMD EPYC CPUs in an 8U chassis. In addition, the Supermicro Liquid Cooling solution is ideal for high-end AI servers such as the company’s 8-GPU 8125GS-TNHR.

To manage it all, Supermicro also offers its SuperCloud Composer’s Liquid Cooling Consult Module (LCCM). This tool collects information on the physical assets and sensor data from the CDU, including pressure, humidity, and pump and valve status.

This data is presented in real time, enabling users to monitor the operating efficiency of their liquid-cooled racks. Users can also employ SuperCloud Composer to set up alerts, manage firmware updates, and more.
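A minimal sketch of the kind of alerting loop such a monitoring tool performs follows. (The sensor names, units and limits here are hypothetical illustrations, not SuperCloud Composer's actual interface.)

```python
# Hypothetical CDU telemetry check; field names and limits are illustrative,
# not SuperCloud Composer's real API.
ALERT_LIMITS = {
    "coolant_pressure_bar": (1.0, 3.5),   # (min, max) acceptable range
    "humidity_pct": (10.0, 80.0),
}

def check_cdu(sample: dict) -> list:
    """Return a list of alert strings for out-of-range sensor readings."""
    alerts = []
    for sensor, (lo, hi) in ALERT_LIMITS.items():
        value = sample.get(sensor)
        if value is not None and not (lo <= value <= hi):
            alerts.append(f"{sensor}={value} outside [{lo}, {hi}]")
    if sample.get("pump_status") == "fault":
        alerts.append("pump reporting fault")
    return alerts

sample = {"coolant_pressure_bar": 4.2, "humidity_pct": 45.0, "pump_status": "ok"}
print(check_cdu(sample))   # pressure is out of range
```

In practice, a management console would poll readings like these continuously and raise the resulting alerts through its web interface.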



Research Roundup: spending rises on global IT, public cloud and cybersec; 8 in 10 finance firms breached


Catch up on the latest market research on IT spending, public cloud and cybersecurity.


The worldwide IT market this year will top $4.7 trillion. Spending on public cloud rose nearly 23% last year. And although security spending rose nearly 12% earlier this year, nearly 8 in 10 financial-services firms have suffered a cyber breach.

That’s some of the latest tech market research. And here’s your Performance Intensive Computing roundup.

Worldwide IT market

How big is the worldwide IT market? Big indeed—about $4.7 trillion. That’s the forecast for this year from advisory firm Gartner.

Assuming Gartner’s right, that would mark a 4.3% increase over last year’s spending.

Some sectors are growing faster than others. Take software. Gartner expects the global software spend will rise 13.5% this year over last, for a worldwide total of $911 billion. Looking ahead to next year, Gartner expects more of the same: software spending in 2024 will rise 14%, exceeding $1 trillion.

The second-fastest growing sector is IT services. For this sector, Gartner predicts spending will rise nearly 9% this year over last, for a 2023 global total of $1.4 trillion. And next year, Gartner expects, services spending will rise by an even higher 11%, totaling $1.58 trillion worldwide.

How about spending on the new hot technology, generative AI? Surprisingly, Gartner says it has not yet made a significant impact. Instead, says Gartner analyst John-David Lovelock, “most enterprises will incorporate generative AI in a slow and controlled manner through upgrades to tools already built into their IT budgets.”

Public cloud

Public-cloud spending is on a tear. Last year, according to market watcher IDC, worldwide revenue for public-cloud services rose nearly 23% over 2021’s level, for a total of $545.8 billion.

The largest segment by revenue was SaaS applications, accounting for more than 45% of the total, or about $246 billion. It was followed by IaaS (21% market share), PaaS (17%) and SaaS system infrastructure software (16%), IDC says.

By vendor, just 5 suppliers—Microsoft, AWS, Salesforce, Google and Oracle—collectively captured more than 40% of the 2022 global public-cloud market. The No. 1 spot was held by Microsoft, with a market share of nearly 17%.

Being on top is important. “Most organizations,” says Lara Greden, an IDC researcher, “rank their public-cloud provider as their most strategic technology partner.”

Finance cyber breaches

Cyber breaches used to be rare events. No more. A new report finds that nearly 8 in 10 financial-services organizations (78%) have experienced a cyber breach, cyber threat or data theft.

The report was compiled by Skyhigh Security, a cloud-native security vendor that worked with market researcher Vanson Bourne to poll nearly 125 IT decision-makers in 9 countries, including the U.S. and Canada. Respondents all worked for large financial-services organizations with at least 500 employees.

Why is financial services such a big target for cybercrooks? Because, as Willie Sutton reportedly quipped when asked why he robbed banks, “that’s where the money is.”

Skyhigh’s survey also found that about 6 in 10 financial-services firms store sensitive data in the public cloud, although Skyhigh didn’t correlate that with the high percentage of companies that have been cybercrime targets. But one way to secure cloud data, using a cloud access security broker, is employed by fewer than half the respondents (44%).

Also, more than 8 in 10 survey respondents believe that “shadow IT”—the practice of non-IT business units acquiring tech hardware, software and services without the IT department’s approval or knowledge—impairs their ability to keep data secure.

Cyber spending

All those attacks are certainly not due to a lack of spending. Indeed, global spending on cybersecurity products and services rose by 12.5% year-on-year in this year’s first quarter, according to market watcher Canalys.

Spending growth was fastest among midsize organizations, those with 100 to 499 employees, Canalys finds. Within this group, cybersec spending in Q1 rose 13.5% year-on-year.

Spending rose almost as fast for large organizations, those with 500 or more employees: an increase of 13.3%. For small businesses, those with 10 to 99 employees, cybersec spending in Q1 rose just 7.5%, Canalys says.

Market concentration is evident here, too. Nearly half of all cybersec spending (48.6%) went to just 12 vendors, Canalys finds. Three in particular dominated during Q1: Palo Alto Networks (8.7% market share), Fortinet (7%) and Cisco (6.1%).

By region, Canalys finds, North America was the largest market for cybersecurity products and services in Q1, at $9.7 billion. But both EMEA and Latin America saw faster sales growth: 13.4% for EMEA and 15.2% for LatAm, compared with 12.3% for North America.

 


Tech Explainer: Green Computing, Part 3 – Why you should reduce, reuse & recycle


The new 3Rs of green computing are reduce, reuse and recycle. 


To help your customers meet their environmental, social and governance (ESG) goals, it pays to focus on the 3 Rs of green computing—reduce, reuse and recycle.

Sure, pursuing these goals can require some additional R&D and reorganization. But tech titans such as AMD and Supermicro are helping.

AMD, Supermicro and their vast supply chains are working to create a new virtuous circle. More efficient tech is being created using recycled materials, reused where possible, and then once again turned into recycled material.

For you and your customers, the path to green computing can lead to better corporate citizenship as well as higher efficiencies and lower costs.

Green server design

New disaggregated server technology is now available from manufacturers like Supermicro. This tech makes it possible for organizations of every size to increase their energy efficiency, better utilize data-center space, and reduce capital expenditures.

Supermicro’s SuperBlade, BigTwin and EDSFF SuperStorage are exemplars of disaggregated server design. The SuperBlade multi-node server, for instance, can house up to 20 server blades and 40 CPUs. And it’s available in 4U, 6U and 8U rack enclosures.

These efficient designs allow for larger, more efficient shared fans and power supplies. And along with the chassis itself, many elements can remain in service long past the lifespans of the silicon components they facilitate. In some cases, an updated server blade can be used in an existing chassis.

Remote reprogramming

Innovative technologies like adaptive computing enable organizations to adopt a holistic approach to green computing at the core, the edge and in end-user devices.

For instance, AMD’s adaptive computing initiative offers the ability to optimize hardware based on applications. Then your customers can get continuous updates after production deployment, adapting to new requirements without needing new hardware.

The key to adaptive computing is the Field Programmable Gate Array (FPGA). It’s essentially a blank canvas of hardware, capable of being configured into a multitude of different functions. Even after an FPGA has been deployed, engineers can remotely access the component to reprogram various hardware elements.

The FPGA reprogramming process can be as simple as applying security patches and bug fixes—or as complex as a wholesale change in core functionality. Either way, the green computing bona fides of adaptive computing are the same.
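That remote update flow can be sketched conceptually as follows. (Every function and field below is a stand-in invented for illustration, not a real FPGA vendor API.)

```python
# Hypothetical sketch of a remote FPGA update workflow; all names here
# are stand-ins, not a real vendor toolchain.
import hashlib

def load_bitstream(path: str) -> bytes:
    """Read a compiled FPGA configuration ('bitstream') from disk."""
    with open(path, "rb") as f:
        return f.read()

def reprogram(device: dict, bitstream: bytes) -> None:
    """Push a new configuration to a deployed FPGA.

    In a real deployment this would go through the vendor's remote
    management interface; here it just records the new image's checksum."""
    device["image_sha256"] = hashlib.sha256(bitstream).hexdigest()
    device["status"] = "reconfigured"

# The same deployed board takes on new logic without new hardware:
device = {"id": "edge-fpga-01", "status": "running"}
reprogram(device, b"...new logic: security patch or new core function...")
print(device["status"])   # reconfigured
```

The green-computing point is in the last two lines: the hardware in the field never moves; only the configuration does.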

What’s more, adaptive tech like FPGAs significantly reduces e-waste. This helps to lower an organization’s overall carbon footprint by obviating the manufacturing and transportation necessary to replace hardware already deployed.

Adaptive computing also enables organizations to increase energy efficiency. Deploying cutting-edge tech like the AMD Instinct MI250X Accelerator to complete AI training or inferencing can significantly reduce the overall electricity needed to complete a task.

Radical recycling

Even in organizations with the best green computing initiatives, elements of the hardware infrastructure will eventually be ready for retirement. When the time comes, these organizations have yet another opportunity to go green—by properly recycling.

Some servers can be repurposed for other, less-demanding tasks, extending their lifespan. For example, a system once used for HPC applications, but no longer delivering the required FP64 performance, could be repurposed to host a database or email application.

Quite a lot of today’s computer hardware can be recycled. This includes glass from monitors; plastic and aluminum from cases; copper in power supplies; precious metals used in circuitry; even the cardboard, wood and other materials used in packaging.

If that seems like too much work, there are now third-party organizations that will oversee your customers’ recycling efforts for a fee. Later, if all goes according to plan, these recycled materials will find their way back into the manufacturing supply chain.

Tech suppliers are working to make recycling even easier. For example, AMD is one of the many tech leaders whose commitment to environmental sustainability extends across its entire value chain. For AMD, that includes using environmentally preferable packaging materials, such as recycled materials and non-toxic dyes.

Are you 3R?

Your customers understand that establishing and adhering to ESG goals is more than just a good idea. In fact, it’s vital to the survival of humanity.

Efforts like those of AMD and Supermicro are helping to establish a green computing revolution—and not a moment too soon.

In other words, pursuing green computing’s 3 Rs will be well worth the effort.



Meet Supermicro’s Petascale Storage, a compact rackmount system powered by the latest AMD EPYC processors


Supermicro’s H13 Petascale Storage System is a compact 1U rackmount system powered by the AMD EPYC 97X4 processor (formerly codenamed Bergamo) with up to 128 cores.

 

 

Learn More about this topic
  • Applications:
  • Featured Technologies:

Your customers can now implement Supermicro Petascale Storage, an all-Flash NVMe storage system powered by the latest 4th gen AMD EPYC 9004 series processors.

The Supermicro system has been specifically designed for AI, HPC, private and hybrid cloud, in-memory computing and software-defined storage.

Now Supermicro is offering the first of these systems. It's the Supermicro H13 Petascale Storage System. This compact 1U rackmount system is powered by an AMD EPYC 97X4 processor (formerly codenamed Bergamo) with up to 128 cores.

For organizations with data-storage requirements approaching petascale capacity, the Supermicro system was designed with a new chassis and motherboard that support a single AMD EPYC processor, 24 DIMM slots for up to 6TB of main memory, and 16 hot-swap E3.S slots. E3.S is part of the Enterprise and Datacenter Standard Form Factor (EDSFF) E3 family of SSD form factors designed for specific use cases. It's a short, thin (7.5mm) format that supports drives of up to 25W with a PCIe 5.0 interface.

The Supermicro Petascale Storage system can deliver more than 200 GB/sec of bandwidth and over 25 million input-output operations per second (IOPS) from a half-petabyte of storage.
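Those headline numbers hang together. Here's a back-of-envelope check; the per-drive capacity is an assumption (a common E3.S NVMe capacity point), while the 16 bays and 200 GB/sec aggregate figure come from the spec above:

```python
# Sanity-check the Petascale Storage claims.
drives = 16                          # hot-swap E3.S bays, per the spec above
tb_per_drive = 30.72                 # assumed per-drive capacity (not from the spec)

capacity_pb = drives * tb_per_drive / 1000
print(f"{capacity_pb:.2f} PB")       # ~0.49 PB, i.e. roughly half a petabyte

# Splitting the 200 GB/sec aggregate evenly across 16 drives stays within
# what a PCIe 5.0 x4 link can deliver (~15.7 GB/sec per drive).
per_drive_gb_s = 200 / drives
print(per_drive_gb_s)                # 12.5 GB/sec per drive
```
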

Here's why 

Why might your customers need such a storage system? Several reasons, depending on what sorts of workloads they run:

  •  Training AI/ML applications requires massive amounts of data for creating reliable models.
  • HPC projects use and generate immense amounts of data, too. That's needed for real-world simulations, such as predicting the weather or simulating a car crash.
  • Big-data environments need substantial datasets. These gain intelligence from real-world observations ranging from sensor inputs to business transactions.
  • Enterprise applications need large amounts of data located close to compute, accessible at NVMe-over-Fabrics (NVMe-oF) speeds.

Also, the Supermicro H13 Petascale Storage System offers significant performance, capacity, throughput and endurance, all while maintaining excellent power efficiency.


How AMD and Supermicro are working together to help you deliver AI


AMD and Supermicro are jointly offering high-performance AI alternatives with superior price and performance.


When it comes to building AI systems for your customers, a certain GPU provider with a trillion-dollar valuation isn’t the only game in town. You should also consider the dynamic duo of AMD and Supermicro, which are jointly offering high-performance AI alternatives with superior price and performance.

Supermicro’s Universal GPU systems are designed specifically for large-scale AI and high-performance computing (HPC) applications. Some of these modular designs come equipped with AMD’s Instinct MI250 Accelerator and have the option of being powered by dual AMD EPYC processors.

AMD, with a newly formed AI group led by Victor Peng, is working hard to enable AI across many environments. The company has developed an open software stack for AI, and it has also expanded its partnerships with AI software and framework suppliers that now include the PyTorch Foundation and Hugging Face.

AI accelerators

In addition, AMD’s Instinct MI300A data-center accelerator is due to ship in this year’s fourth quarter. It’s the successor to AMD’s MI200 series, the company’s first multi-die GPU, which is based on the CDNA 2 architecture and powers some of today’s fastest supercomputers.

The forthcoming Instinct MI300A is based on AMD’s CDNA 3 architecture for AI and HPC workloads, which uses 5nm and 6nm process tech and advanced chiplet packaging. Under the MI300A’s hood, you’ll find 24 processor cores with Zen 4 tech, as well as 128GB of HBM3 memory that’s shared by the CPU and GPU. And it supports AMD ROCm 5, a production-ready, open source HPC and AI software stack.

Earlier this month, AMD introduced another member of the series, the AMD Instinct MI300X. It replaces three Zen 4 CPU chiplets with two CDNA 3 chiplets to create a GPU-only system. Announced at AMD’s recent Data Center and AI Technology Premiere event, the MI300X is optimized for large language models (LLMs) and other forms of AI.

To accommodate the demanding memory needs of generative AI workloads, the new AMD Instinct MI300X also adds 64GB of HBM3 memory, for a new total of 192GB. This means the system can run large models directly in memory, reducing the number of GPUs needed, speeding performance, and reducing the user’s total cost of ownership (TCO).
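A rough rule of thumb shows why the extra memory matters: a model stored in 16-bit precision needs about 2 bytes per parameter. A quick sketch (the model sizes here are illustrative, not from the article):

```python
# Approximate memory footprint of a model's weights in 16-bit precision.
def model_mem_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """GB needed to hold the weights alone (ignores activations and KV cache)."""
    return params_billion * bytes_per_param

print(model_mem_gb(70))          # 140 GB of weights
print(model_mem_gb(70) <= 192)   # True: fits in a single 192GB MI300X
print(model_mem_gb(70) <= 128)   # False: would not fit in the MI300A's 128GB
```

Keeping the whole model resident on one accelerator is what lets the MI300X reduce the number of GPUs needed for a given model.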

AMD also recently introduced the AMD Instinct Platform, which puts eight MI300X systems and 1.5TB of memory in a standard Open Compute Project (OCP) infrastructure. It’s designed to drop into an end user’s current IT infrastructure with only minimal changes.

All this is coming soon. The AMD MI300A started sampling with select customers earlier this quarter. The MI300X and Instinct Platform are both set to begin sampling in the third quarter. Production of the hardware products is expected to ramp in the fourth quarter.

KT’s cloud

All that may sound good in theory, but how does the AMD + Supermicro combination work in the real world of AI?

Just ask KT Cloud, a South Korea-based provider of cloud services that include infrastructure, platform and software as a service (IaaS, PaaS, SaaS). With the rise of customer interest in AI, KT Cloud set out to develop new XaaS customer offerings around AI, while also developing its own in-house AI models.

However, as KT embarked on this AI journey, the company quickly encountered three major challenges:

  • The high cost of AI GPU accelerators: KT Cloud would need hundreds of thousands of new GPU servers.
  • Inefficient use of GPU resources in the cloud: Few cloud providers offer GPU virtualization due to its overhead. As a result, most cloud-based GPUs are visible to only one virtual machine, meaning they cannot be shared by multiple users.
  • Difficulty using large GPU clusters: KT is training Korean-language models with billions of parameters, requiring more than 1,000 GPUs. But this is complex: Users would need to manually apply parallelization strategies and optimization techniques.

The solution: KT worked with Moreh Inc., a South Korean developer of AI software, and AMD to design a novel platform architecture powered by AMD’s Instinct MI250 Accelerators and Moreh’s software.

Moreh developed the entire AI software stack, from the PyTorch and TensorFlow APIs down to GPU-accelerated primitive operations. This overcomes the limitations of cloud services for training large AI models.

Users do not need to insert or modify even a single line of existing source code to use the MoAI platform. Nor do they need to change how they run a PyTorch/TensorFlow program.

Did it work?

In a word, yes. To test the setup, KT developed a Korean language model with 11 billion parameters. Training was then done on two machines: one using Nvidia GPUs, the other being the AMD/Moreh cluster equipped with AMD Instinct MI250 accelerators, Supermicro Universal GPU systems, and the Moreh AI platform software.

Compared with the Nvidia system, the Moreh solution with AMD Instinct accelerators delivered 116% of the throughput (as measured by tokens trained per second) and 2.05x higher cost-effectiveness (measured as throughput per dollar).
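Taken together, those two ratios imply a substantial price advantage. A quick check of the arithmetic (only the 116% and 2.05x figures come from the test above; the implied cost is derived):

```python
# Relative cost implied by the published ratios.
throughput_ratio = 1.16       # Moreh/AMD vs. Nvidia baseline (tokens/sec)
cost_effectiveness = 2.05     # relative throughput per dollar

# cost-effectiveness = throughput / cost, so cost = throughput / cost-effectiveness
implied_cost_ratio = throughput_ratio / cost_effectiveness
print(round(implied_cost_ratio, 2))  # 0.57: the same work at roughly 57% of the cost
```
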

Other gains are expected, too. “With cost-effective AMD Instinct accelerators and a pay-as-you-go pricing model, KT Cloud expects to be able to reduce the effective price of its GPU cloud service by 70%,” says JooSung Kim, VP of KT Cloud.

Based on this test, KT built a larger AMD/Moreh cluster of 300 nodes—with a total of 1,200 AMD MI250 GPUs—to train the next version of the Korean language model with 200 billion parameters.

It delivers a theoretical peak performance of 434.5 petaflops for fp16/bf16 (native 16-bit formats used in mixed-precision training) matrix operations. That should make it one of the top-tier GPU supercomputers in the world.
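The peak-performance figure checks out against public per-GPU specs. A sketch, assuming AMD's published MI250 peak fp16 matrix throughput of about 362.1 TFLOPS per GPU (that number is not from the article):

```python
# Aggregate theoretical peak of the 1,200-GPU cluster.
gpus = 1200
tflops_per_gpu = 362.1            # assumed from AMD's published MI250 peak spec

peak_pflops = gpus * tflops_per_gpu / 1000
print(round(peak_pflops, 1))      # 434.5 petaflops, matching the figure above
```
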


Tech Explainer: Green Computing, Part 2 — Holistic strategies


Holistic green computing strategies can help both corporate and individual users make changes for the better.


Green computing allows us to align the technology that powers our lives with the sustainability goals necessary to battle the climate crisis.

In Part 1 of our Tech Explainer on green computing, we looked at data-center architecture best practices and component-level green engineering. Now we’ll investigate holistic green computing strategies that can help both corporate and individual users change for the better.

Green manufacturing and supply chain

The manufacturing process can account for up to 70% of the natural resources used in the lifecycle of a PC, server or other digital device. And an estimated 76% of all global trade passes through a supply chain. So it’s more important than ever to reform processes that could harm the environment.

AMD’s efforts to advance environmental sustainability in partnership with its suppliers are a step in the right direction. AMD’s supply chain is currently on track toward two important 2025 goals: having 80% of its suppliers source renewable energy, and having 100% publicly disclose their emissions-reduction goals.

To reduce the environmental impact of IT manufacturing, tech providers are replacing the toxic chemicals used in computer manufacturing with alternatives that are more environmentally friendly.

Materials such as the brominated flame retardants found in plastic casings are giving way to eco-friendly, non-toxic silicone compounds. Traditional non-recyclable plastic parts are being replaced by parts made from both bamboo and recyclable plastics, such as polycarbonate resins. And green manufacturers are working to eliminate other toxic chemicals, including lead in solder and cadmium and selenium in circuit boards.

Innovation in green manufacturing can identify and improve hundreds, if not thousands, of industry-standard practices. Even a small improvement, when applied across millions of devices, can make a big difference.

Green enterprise

Today’s enterprise data-center managers are working to maximize server performance while also minimizing their environmental impact. Leading-edge green methodologies include two important moves: reducing power usage at the server level and extending hardware lifecycles to create less waste.

Supermicro, an authority on energy-efficient data center design, is empowering this movement by creating new servers engineered for green computing.

One such server is Supermicro’s 4-node BigTwin. The BigTwin features a disaggregated server architecture that reduces e-waste by enabling subsystem upgrades.

As technology improves, IT managers can replace components like the CPU, GPU and memory. This extends the life of the chassis, power supplies and cooling systems that might otherwise end up in a landfill.

Twin and Blade server architectures are more efficient because they share power supplies and fans. This can significantly lower their power usage, making them a better choice for green data centers.

The upgraded components that go into these servers now include high-efficiency processors like the AMD EPYC 9654. The infographic below, courtesy of AMD, shows how 4th Gen AMD EPYC processors can power 2,000 virtual machines using up to 35% fewer servers than the competition:

EPYC green infographic

As shown, the potential result is up to 29% less energy consumed annually. That kind of efficiency can save an estimated 35 tons of carbon dioxide a year, roughly the amount sequestered annually by 38 acres of U.S. forest.

Green data centers also employ advanced cooling systems. For instance, Supermicro’s servers include optional liquid cooling. Using fluid to carry heat away from critical components allows IT managers to lower fan speeds inside each server and reduce HVAC usage in data centers.

Deploying efficient cooling systems like these lowers a data center’s Power Usage Effectiveness (PUE), thus reducing carbon emissions from power generation.
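PUE itself is a simple ratio: total facility power (including cooling, lighting and power conversion) divided by the power that actually reaches IT equipment. A minimal sketch with illustrative numbers:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: facility power drawn per unit of IT load."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,500 kW to run 1,000 kW of IT gear:
print(pue(1500, 1000))   # 1.5

# Better cooling cuts the overhead; the closer to 1.0, the greener the data center.
print(pue(1200, 1000))   # 1.2
```
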

Changing for the better, together

No single person, corporation or government can stave off the worst effects of the climate crisis. If we are to win this battle, we must work together.

Engineers, industrial designers and data scientists have their work cut out for them. By fueling the evolution of green computing, they—and their corporate managers—can provide us with the tools we need to go green and safeguard our environment for generations to come.

