
Tech Explainer: How does design simulation work? Part 2


Cutting-edge technology powers the virtual design process.


The market for simulation software is hot, growing at a compound annual growth rate (CAGR) of 13.2%, according to research firm MarketsandMarkets. The firm predicts that the global market for simulation software, worth an estimated $18.1 billion this year, will rise to $33.5 billion by 2027.

No surprise, then, that tech titans AMD and Supermicro would design an advanced hardware platform to meet the demands of this burgeoning software market.

AMD and Supermicro have teamed up with Ansys Inc., a U.S.-based designer of engineering simulation software. One result of this three-way collaboration is the Supermicro SuperBlade.

Shanthi Adloori, senior director of product management at Supermicro, calls the SuperBlade “one of the fastest simulation-in-a-box solutions.”

Adloori adds: “With a high core count, large memory capacity and faster memory bandwidth, you can reduce the time it takes to complete a simulation.”

One very super blade

Adloori isn’t overstating the case.

Supermicro’s SuperBlade can house up to 20 hot-swappable nodes in its 8U chassis. Each node can be equipped with AMD EPYC CPUs and AMD Instinct GPUs. In fact, SuperBlade is the only platform of its kind designed to support both GPU and non-GPU nodes in the same enclosure.

Supermicro SuperBlade’s other tech specs may be less glamorous, but they’re no less impressive. When it comes to memory, each blade can address up to 8TB or 16TB of DDR5-4800 memory, depending on the model.

Each node can also house two NVMe/SAS/SATA drives, and the chassis supports as many as eight 3000W Titanium Level power supplies.

Because networking is an essential element of enterprise-grade design simulation, SuperBlade includes redundant 25Gb/10Gb/1Gb Ethernet switches and up to 200Gbps/100Gbps InfiniBand networking for HPC applications.

For smaller operations, the SuperBlade is also available in more compact 6U and 4U configurations. These versions pack fewer nodes, which ultimately means they’re able to bring less power to bear. But, hey, not every design team makes passenger jets for a living.

It’s all about the silicon

If Supermicro’s SuperBlade is the tractor-trailer of design simulation technology, then AMD CPUs and GPUs are the engines under the hood.

The differing designs of these chips lend themselves to specific core competencies. CPUs can focus tremendous power on a few tasks at a time. Sure, they can multitask. But there’s a limit to how many simultaneous operations they can address.

AMD bills its EPYC 7003 Series CPUs as the world’s highest-performing server processors for technical computing. The addition of AMD 3D V-Cache technology delivers an expanded L3 cache to help accelerate simulations.

GPUs, on the other hand, shine when a simulation calls for massive numbers of operations to run in parallel. The AMD Instinct MI250X Accelerator contains 220 compute units with 14,080 stream processors.

Instead of throwing a ton of processing power at a small number of operations, the AMD Instinct can address thousands of less resource-intensive operations simultaneously. It’s that capability that makes GPUs ideal for HPC and AI-enabled operations, an increasingly essential element of modern design simulation.
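The CPU-versus-GPU trade-off described above can be illustrated with a toy sketch. The snippet below uses NumPy's vectorized operations as a stand-in for GPU-style data parallelism (it still runs on the CPU; the numbers and the "simulation step" are invented for illustration):

```python
import numpy as np

# A toy "simulation step": a damped update applied to many cells at once.
cells = np.random.rand(1_000_000)

def step_serial(values):
    """CPU-loop style: handle one cell at a time."""
    out = values.copy()
    for i in range(len(out)):
        out[i] = out[i] * 0.99 + 0.01
    return out

def step_parallel(values):
    """GPU style: express the same update as one data-parallel operation."""
    return values * 0.99 + 0.01

# Both produce identical results; the data-parallel form lets the
# hardware apply the update to thousands of cells simultaneously.
a = step_serial(cells[:1000])
b = step_parallel(cells[:1000])
print("results match:", np.allclose(a, b))
```

The point isn't the arithmetic, it's the shape of the work: the serial version funnels a million tiny operations through one execution stream, while the parallel version hands the whole array to hardware built to process many elements at once.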

The future of design simulation

The development of advanced hardware like SuperBlade and the AMD CPUs and GPUs that power it will continue to progress as more organizations adopt design simulation as their go-to product development platform.

That progression will continue to manifest in global companies like Boeing and Volkswagen. But it will also find its way into small startups and single users.

Also, as the required hardware becomes more accessible, simulation software should become more efficient.

This confluence of market trends could empower millions of independent designers with the ability to perform complex design, testing and validation functions.

The result could be nothing short of a design revolution.

Part 1 of this two-part Tech Explainer explores the many ways design simulation is used to create new products, from tiny heart valves to massive passenger aircraft. Read Part 1 now.


Tech Explainer: How does design simulation work? Part 1


Design simulation lets designers and engineers create, test and improve designs of real-world airplanes, cars, medical devices and more while working safely and quickly in virtual environments. This workflow also reduces the need for physical tests and allows designers to investigate more alternatives and optimize their products.


Design simulation is a type of computer-aided engineering used to create new products, reducing the need for physical prototypes. The result is a faster, more efficient design process in which complex physics and math do much of the heavy lifting.

Rapid advances in the CPUs, GPUs and software used to perform simulation have made it possible to shift product design from the physical world to a virtual one.

In this virtual space, engineers can create and test new designs as quickly as their servers can calculate the results and then render them with visualization software.

Getting better all the time

Designing via AI-powered virtual simulation offers significant improvements over older methods.

Back in the day, it might have taken a small army of automotive engineers years to produce a single new model. Prototypes were often sculpted from clay and carted into a wind tunnel to test aerodynamics.

Each new model went through a seemingly endless series of time-consuming physical simulations. The feedback from those tests would literally send designers back to the drawing board.

It was an arduous and expensive process. And the resources necessary to accomplish these feats of engineering often limited competition: companies whose pockets weren’t deep enough might fail to keep up.

Fast-forward to the present. Now, we’ve got smaller design teams aided by increasingly powerful clusters of high-performance systems.

These engineers can tweak a car’s crumple zone in the morning … run the new version through a virtual crash test while eating lunch … and send revised instructions to the design team before day’s end.

Changing designs, saving lives

Faster access to this year’s Ford Mustang is one thing. But if you really want to know how design simulation is changing the world, talk to someone whose life was saved by a mechanical heart valve.

Using the latest tech, designers can simulate new prosthetics in relation to the physiology they’ll inhabit. Many factors come into play here, including size, shape, materials, fluid dynamics, failure models and structural integrity over time.

What’s more, it’s far better to learn how a part will interact with the human body before a surgeon implants it. Simulations can warn medical pros about potential infections, rejections and physical mismatches. AI can play a big part in these types of simulations, as well as in manufacturing.

Sure, perfection may be unattainable. But the closer doctors get to a perfect match between a prosthetic and its host body, the better the patient will fare after the procedure.

Making the business case

Every business wants to cut costs, increase efficiency and get an edge over the competition. Here, too, design simulation offers a variety of ways to achieve those lofty goals.

As mentioned above, simulation can drastically reduce the need for expensive physical prototypes. Creating and testing a new airplane design virtually means not having to come within 100 miles of a runway until the first physical prototype is ready to take flight. 

The aerospace and automotive industries rely heavily not only on the structural integrity of an assembly but also on computational fluid dynamics. In this way, simulation can potentially save an aerospace company billions of dollars over the long run.

What’s more, virtual airplanes don’t crash. They can’t be struck by lightning. And in a virtual passenger jet, test pilots don’t need to worry about their safety.

By the time a new aircraft design rolls onto the tarmac, it’s already been proven airworthy—at least to the extent that a virtual simulation can make those kinds of guarantees.

Greater efficiency

Simulation makes every aspect of design more efficient. For instance, iteration, a vital element of the design process, becomes infinitely more manageable in a simulated environment.

Want to find out how a convertible top will affect your new supercar’s 0-to-60 time? Simulation allows engineers to quickly replace the hard-top with some virtual canvas and then create a virtual drag race against the original model.
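This kind of iteration boils down to running design variants through the same simulation and comparing the results. The sketch below illustrates the idea using the textbook drag equation; the coefficients and areas are made-up values, not real vehicle data:

```python
# Hypothetical sketch: compare design variants by running each through
# the same simplified simulation. The drag model is the standard drag
# equation; all vehicle numbers below are invented for illustration.

AIR_DENSITY = 1.225  # kg/m^3 at sea level

def drag_force(cd, frontal_area_m2, speed_ms):
    """Aerodynamic drag: F = 0.5 * rho * v^2 * Cd * A."""
    return 0.5 * AIR_DENSITY * speed_ms**2 * cd * frontal_area_m2

# Two virtual variants of the same car: hard-top vs. convertible.
variants = {
    "hard-top":    {"cd": 0.30, "area": 2.2},
    "convertible": {"cd": 0.38, "area": 2.2},  # canvas top raises drag
}

for name, spec in variants.items():
    f = drag_force(spec["cd"], spec["area"], speed_ms=27.0)  # ~60 mph
    print(f"{name:12s} drag at ~60 mph: {f:6.1f} N")
```

A real solver replaces that one-line formula with computational fluid dynamics, but the workflow is the same: swap a parameter, re-run, compare.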

Simulation can take a product to the manufacturing phase, too. Once a design is finished, engineers can simulate its journey through a factory environment.

This virtual factory, or digital twin, can help determine how long it will take to build a product and how it will react to various materials and environmental conditions. It can even determine how many moves a robot arm will need to make and when human intervention might become necessary. This process helps engineers optimize the manufacturing process.

In countless ways, simulation has never been more real.

In Part 2 of this two-part blog, we’ll explore the digital technology behind design simulation. This cutting-edge technology is made possible by the latest silicon, vast swaths of high-speed storage, and sophisticated blade servers that bring it all together.


OCP Global Summit demos the power of collaboration for the data center



Thousands of data-center professionals will gather in Silicon Valley this month for the 2023 OCP Global Summit.

This in-person event, sponsored by the Open Compute Project, will be held in San Jose, Calif., on Oct. 17–19.

The theme for this year’s conference: “Scaling innovation through collaboration.”

About OCP

OCP members share data center products and best practices that apply open standards. The group’s projects include server design, data storage, rack design, energy-efficient data centers, open networking switches, and servers.

The OCP requires that all contributions meet at least three of its five core tenets:

  • Efficiency: including power delivery, thermal design, platform, overall cost and latencies
  • Impact: including efficiency gains, use of new technology and a more robust supply chain
  • Openness: striving to comply with existing open interfaces
  • Scalability: usability in large-scale deployments
  • Sustainability: transparency about environmental impact, with the aspiration to improve over time

OCP began as a Facebook project, launched in 2009, to build an energy-efficient data center. That led to the opening of a Facebook data center in Prineville, Ore., that the company says is 24% less expensive to run than its previous facilities.

The OCP itself was founded in 2011. Today the nonprofit organization has nearly 300 corporate members and over 6,000 active participants. Membership is available in four levels: community, silver, gold and platinum.

OCP’s membership list is a veritable who’s who of tech. Members include Amazon, AMD, Arm, AT&T, Cisco, Dell, Google, HPE, IBM, Lenovo, Meta, Supermicro and Tencent.

AMD & Supermicro participating

Among the keynote speakers at this year’s OCP Global Summit will be Forrest Norrod, executive VP and GM of the data center solutions business group at AMD. He’ll be giving a presentation on Oct. 17 entitled, “Together we advance the modern data center.”

Also, Supermicro will be showing three of its servers at the OCP Global Summit:

  • Supermicro CloudDC A+ Server: Designed for data center, web server, cloud computing and more, this 1U rackmount server is powered by a single AMD EPYC 9004 Series processor.
  • Supermicro Hyper A+ Server: This 2U server is intended for virtualization, AI inference and machine learning, software-defined storage, cloud computing, and use as an enterprise server. It’s powered by dual AMD EPYC 9004 Series processors.
  • Supermicro Storage A+ Server: This 2U storage device can handle software-defined storage, cloud, in-memory computing, data-intensive HPC workloads, and NVMe-over-fabric solutions. It’s powered by a single AMD EPYC 9004 Series processor.


Why M&E content creators need high-end VDI, rendering & storage


Content creators in media and entertainment need lots of compute, storage and networking. Supermicro servers with AMD EPYC processors are enhancing the creativity of these content creators by offering improved rendering and high-speed storage. These systems empower the production of creative ideas.

 


When content creators at media and entertainment (M&E) organizations create videos and films, they’re also competing for attention. And today that requires a lot of technology.

Making a full-length animated film involves no fewer than 14 complex steps, including 3D modeling, texturing, animating, visual effects and rendering. The whole process can take years. And it requires a serious quantity of high-end compute, storage and software.

From an IT perspective, three of the most compute-intensive activities for M&E content creators are VDI, rendering and storage. Let’s take a look at each.

* Virtual desktop infrastructure (VDI): While content creators work on personal workstations, they need the kind of processing power and storage capacity available from a rackmount server. That’s what they get with VDI.

VDI separates the desktop and associated software from the physical client device by hosting the desktop environment and applications on a central server. These assets are then delivered to the desktop workstation over a network.

To power VDI setups, Supermicro offers a 4U GPU server with up to 8 PCIe GPUs. The Supermicro AS -4125GS-TNRT server packs a pair of AMD EPYC 9004 processors, Nvidia RTX 6000 GPUs, and 6TB of DDR5 memory.

* Rendering: The last stage of film production, rendering is where the individual 3D images created on a computer are transformed into the stream of 2D images ready to be shown to audiences. This process, conducted pixel by pixel, is time-consuming and resource-hungry. It requires powerful servers, lots of storage capacity and fast networking.

For rendering, Supermicro offers its 2U Hyper system, the AS -2125HS-TNR. It’s configured with dual AMD EPYC 9004 processors, up to 6TB of memory, and your choice of NVMe, SATA or SAS storage.
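At its heart, the rendering stage described above maps 3D scene data onto a 2D image plane. The snippet below sketches that single step as a bare-bones perspective projection; the camera parameters are illustrative, not taken from any real renderer:

```python
# Minimal sketch of the 3D-to-2D step at the heart of rendering:
# perspective projection of scene points onto an image plane.
# Focal length and points below are invented for illustration.

def project(point3d, focal_length=1.0):
    """Project a 3D point (x, y, z) onto a 2D plane at z = focal_length."""
    x, y, z = point3d
    if z <= 0:
        raise ValueError("point is behind the camera")
    scale = focal_length / z
    return (x * scale, y * scale)

# A point receding from the camera lands ever closer to the image
# center -- the perspective foreshortening a renderer reproduces
# for every pixel of every frame.
for z in (1.0, 2.0, 4.0):
    print((1.0, 1.0, z), "->", project((1.0, 1.0, z)))
```

A production renderer repeats a far richer version of this calculation, with lighting, materials and effects, for millions of pixels per frame, which is why the process demands so much compute.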

* Storage: Content creation involves creating, storing and manipulating huge volumes of data. So the first requirement is simply having a great deal of storage capacity. But it’s also important to be able to retrieve and access that data quickly.

For these kinds of storage challenges, Supermicro offers Petascale storage servers based on AMD EPYC processors. They can pack up to 16 hot-swappable E3.S (7.5mm) NVMe drive bays. And they’ve been designed to store, process and move vast amounts of data.

M&E content creators are always looking to attract more attention. They’re getting help from today’s most advanced technology.


Need help turning your customers’ data into actionable insights?


Your customers already have plenty of data. What they need now are insights. Supermicro, AMD and Cloudera are here to help.


Your customers already have plenty of data. What they need now are insights.

Data just sits there, taking up costly storage and real estate. But actionable insights can help your customers strengthen their overall business, improve their business processes, and create new products and services.

Increasingly, these insights are based on data captured at the edge. For example, a retailer might collect customer and sales data using the point-of-sale terminals in its stores.

Supermicro is here to help. Its edge systems, including the latest WIO and short-depth servers powered by AMD processors, have been designed to collect data at the business edge.

These servers are powered by AMD’s EPYC 8004 Series processors. Introduced in September, these CPUs extend the company’s ‘Zen4c’ architecture into lower-core-count processors designed for edge servers and compact form factors.

GrandTwin too

For more insights, tell your customers to check out Supermicro’s GrandTwin servers. They’re powered by AMD EPYC 9004 processors and can run Cloudera Data Flow (CDF), a scalable, real-time streaming analytics platform.

The Supermicro GrandTwin systems provide a multi-node rackmount platform for cloud data centers, packing four nodes into a 2U chassis.

These systems offer AMD’s 4th Gen EPYC 9004 Series of general-purpose processors, which support DDR5-4800 memory and PCI Express Gen 5 I/O.

Distributed yet united

If you’re unfamiliar with Cloudera, the company’s approach is based on a simple idea: single clouds are passé. Instead, Cloudera supports a hybrid data platform, one that can be used with any cloud, any analytics and any data.

The company’s idea is that data-management components should be physically distributed, but treated as a cohesive whole with AI and automation.

Cloudera’s CDF solution ingests, curates and analyzes data for key insights and immediate actionable information. That can include issues or defects that need remediating. And AI and machine learning systems can use the data to suggest real-time improvements.

More specifically, CDF delivers flow management, edge management, streams processing, streams management, and streaming analytics.
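The ingest-analyze-act pattern described above can be sketched in a few lines. To be clear, this is not Cloudera's API; the function names, field names and threshold below are invented purely to illustrate the flow:

```python
# Generic sketch of the ingest -> analyze -> act pattern described
# above. NOT Cloudera's API: all names and values are hypothetical.

def ingest():
    """Stand-in for a sensor stream; yields simulated readings."""
    readings = [
        {"machine": "press-1", "defect_rate": 0.01},
        {"machine": "press-2", "defect_rate": 0.09},  # above threshold
        {"machine": "press-3", "defect_rate": 0.02},
    ]
    yield from readings

def analyze(reading, threshold=0.05):
    """Flag readings whose defect rate needs remediating."""
    return reading["defect_rate"] > threshold

def act(reading):
    """Stand-in for an actionable alert sent to operations staff."""
    return f"ALERT: {reading['machine']} defect rate {reading['defect_rate']:.0%}"

alerts = [act(r) for r in ingest() if analyze(r)]
print(alerts)
```

A real streaming platform adds durable queues, windowing and scale-out execution, but the core loop is the same: pull data in, test it against the business rule, and emit something a person or system can act on.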

The upshot: Your customers need actionable insights, not more data. And to get those insights, they can check out the powerful combination of Supermicro servers, AMD processors and Cloudera solutions.


Supermicro celebrates 30 years of business



Supermicro Inc. is celebrating its 30th year of research, development and manufacturing.

At the company, formed in 1993, some things remain the same. Founder Charles Liang remains Supermicro’s president and CEO. And the company is still based in California’s Silicon Valley.

Of course, in 30 years a lot has changed, too. For one, AI is now a critical component. And Supermicro, with help from component makers including AMD, is offering a range of solutions designed with AI in mind. Also, Supermicro has stated its intention to be a leader in the newer field of generative AI.

Another recent change is the industry’s focus on “green computing” and sustainability. Here, too, Supermicro has had a vision. The company’s Green IT initiative helps customers lower data-center TCO, take advantage of recyclable materials, and do more work with lower power requirements.

Another change is just how big Supermicro has grown. Revenue for its most recent fiscal year totaled $7.12 billion, a year-on-year increase of 37%. Looking ahead, Supermicro has told investors it expects an even steeper 47% revenue growth in the current fiscal year, for total revenue of $9.5 billion to $10.5 billion. 

All that growth has also led Supermicro to expand its manufacturing facilities. The company now runs factories in Silicon Valley, Taiwan and the Netherlands, and it has a new facility coming online in Malaysia. All that capacity, the company says, means Supermicro can now deliver more than 4,000 racks a month.

Top voices

Industry leaders are joining the celebration.

“Supermicro has been and continues to be my dream work,” CEO Liang wrote in an open letter commemorating the company’s 30th anniversary.

Looking ahead, Liang writes that the company’s latest initiative, dubbed “Supermicro 4.0,” will focus on AI, energy saving, and time to market.

AMD CEO Lisa Su adds, “AMD and Supermicro have a long-standing history of delivering leadership computing solutions. I am extremely proud of the expansive portfolio of data center, edge and AI solutions we have built together, our leadership high-performance computing solutions and our shared commitment to sustainability.”

Happy 30th anniversary, Supermicro!


Tech Explainer: What is the intelligent edge? Part 2


The intelligent edge has emerged as an essential component of the internet of things. By moving compute and storage close to where data is generated, the intelligent edge provides greater control, flexibility, speed and even security.


The Internet of Things (IoT) is all around us. It’s in the digital fabric of a big city, the brain of a modern factory, the way your smart home can be controlled from a tablet, and even the tech telling your fridge it’s time to order a quart of milk.

As these examples show, IoT is fast becoming a must-have. Organizations and individuals alike turn to the IoT to gain greater control and flexibility over the technologies they regularly use. Increasingly, they’re doing it with the intelligent edge.

The intelligent edge moves command and control from the core to the edge, closer to where today’s smart devices and sensors are actually installed. That’s needed because so many IoT devices and connections are now active, with more coming online every day.

Communicating with millions of connected devices via a few centralized data centers is the old way of doing things. The new method is a vast network of local nodes that collect, process and analyze IoT information, and act on it, as close to its origin as possible.

Controlling IoT

To better understand the relationship between IoT and intelligent edge, let’s look at two use cases: manufacturing and gaming.

Modern auto manufacturers like Tesla and Rivian use IoT to control their industrial robots. Each robot is fitted with multiple sensors and actuators. The sensors report their current position and condition, and the actuators control the robot’s movements.

In this application, the intelligent edge acts as a small data center in or near the factory where the robots work. This way, instead of waiting for data to transfer to a faraway data center, factory managers can use the intelligent edge to quickly capture, analyze and process data—and then act just as quickly.

Acting on that data may include performing preventative or reactive maintenance, adjusting schedules to conserve power, or retasking robots based on product configuration changes. 

The benefits of a hyper-localized setup like this can prove invaluable for manufacturers. Using the intelligent edge can save them time, money and person-hours by speeding both analysis and decision-making.

For manufacturers, the intelligent edge can also add new layers of security. That’s because data is significantly more vulnerable when in transit. Cut the distance the data travels and the use of external networks, and you also eliminate many cybercrime threat vectors.
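The factory scenario above amounts to acting on robot telemetry locally and forwarding only a small summary upstream. Here is a minimal sketch of that edge-side logic; the thresholds, field names and robot IDs are all invented for illustration:

```python
# Illustrative sketch of edge-side processing for the factory scenario
# above: analyze robot telemetry at the edge, send only a summary to
# the core. All thresholds and names below are hypothetical.

VIBRATION_LIMIT = 4.0  # arbitrary units

def process_locally(samples):
    """Decide, at the edge, which robots need maintenance."""
    flagged = [s["robot"] for s in samples if s["vibration"] > VIBRATION_LIMIT]
    summary = {
        "samples_seen": len(samples),
        "robots_flagged": flagged,
    }
    return summary  # only this small summary travels to the core

telemetry = [
    {"robot": "arm-01", "vibration": 1.2},
    {"robot": "arm-02", "vibration": 5.7},  # needs maintenance
    {"robot": "arm-03", "vibration": 0.9},
]

print(process_locally(telemetry))
```

Note what never leaves the factory: the raw sensor stream. Keeping it local is what delivers both the latency win and the reduced attack surface described above.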

Gaming is another marquee use case for the intelligent edge. Resource-intensive games such as “Fortnite” and “World of Warcraft” demand high-speed access to the data generated by the game itself and a massive online gaming community of players. With speed at such a high premium, waiting for that data to travel to and from the core isn’t an option.

Instead, the intelligent edge lets game providers collect and process data near their players. The closer proximity lowers latency by limiting the distance the data travels. It also improves reliability. The resulting enhanced data flow makes gameplay faster and more responsive.

Tech at the edge

The intelligent edge is sometimes described as a network of localized data centers. That’s true as far as it goes, but it’s not the whole story. In fact, the intelligent edge infrastructure’s size, function and location come with specific technological requirements.

Unlike a traditional data center architecture, the edge is often better served by rugged form factors housing low-cost, high-efficiency components. These components, including the recently released AMD EPYC 8004 Series processors, feature fewer cores, less heat and lower prices.

The AMD EPYC 8004 Series processors share the same 5nm ‘Zen4c’ core complex die (CCD) chiplets and 6nm AMD EPYC I/O Die (IOD) as the more powerful AMD EPYC 9004 Series.

However, the AMD EPYC 8004s offer a more efficiency-minded approach than their data center-focused cousins. Nowhere is this better illustrated than the entry-level AMD EPYC 8042 processor, which provides a scant 8 cores and a thermal design power (TDP) of just 80 watts. AMD says this can potentially save customers thousands of dollars in energy costs over a five-year period.

To deploy the AMD silicon, IT engineers can choose from an array of intelligent edge systems from suppliers, including Supermicro. The selection includes expertly designed form factors for industrial, intelligent retail and smart-city deployments.

High-performance rack mount servers like the Supermicro H13 WIO are designed for enterprise-edge deployments that require data-center-class performance. The capacity to house multiple GPUs and other hardware accelerators makes the Supermicro H13 an excellent choice for deploying AI and machine learning applications at the edge.

The future of the edge

The intelligent edge is another link in a chain of data capture and analysis that gets longer every day. As more individuals and organizations deploy IoT-based solutions, an intelligent edge infrastructure helps them store and mine that information faster and more efficiently.

The insights provided by an intelligent edge can help us improve medical diagnoses, better control equipment, and more accurately predict human behavior.

As the intelligent edge architecture advances, more businesses will be able to deploy solutions that enable them to cut costs and improve customer satisfaction simultaneously. That kind of deal makes the journey to the edge worthwhile.

Part 1 of this two-part blog series on the intelligent edge looked at the broad strokes of this emerging technology and how organizations use it to increase efficiency and reliability. Read Part 1 now.


Tech Explainer: What is the intelligent edge? Part 1


The intelligent edge moves compute, storage and networking capabilities close to end devices, where the data is being generated. Organizations gain the ability to process and act on that data in real time, without having to first transfer it to a centralized data center.


The term intelligent edge refers to remote server infrastructures that can collect, process and act on data autonomously. In effect, it’s a small, remote data center.

Compared with a more traditional data center, the intelligent edge offers one big advantage: It locates compute, storage and networking capabilities close to the organization’s data collection endpoints. This architecture speeds data transactions. It also makes them more secure.

The approach is not entirely new. Deploying an edge infrastructure has long been an effective way to gather data in remote locations. What’s new with an intelligent edge is that you gain the ability to process and act on that data (if necessary) in real time—without having to first transfer that data to the cloud.

The intelligent edge can also save an organization money. Many organizations spend a sizable chunk of their operating budget transferring data between the edge and public or private data centers, including cloud infrastructure (often referred to as “the core”). Cutting bandwidth in both directions, along with the associated storage charges, helps them control costs.

3 steps to the edge

Today, an intelligent edge typically gets applied in one of three areas:

  • Operational Technology (OT): Hardware and software used to monitor and control industrial equipment, processes and events.
  • Information Technology (IT): Digital infrastructure—including servers, storage, networking and other devices—used to create, process, store, secure and transfer data.
  • Internet of Things (IoT): A network of smart devices that communicate and can be controlled via the internet. Examples include smart speakers, wearables, autonomous vehicles and smart-city infrastructure.

The highly efficient edge

There’s yet another benefit to deploying intelligent edge tech: It can help an organization become more efficient.

One way the intelligent edge does this is by obviating the need to transfer large amounts of data. Instead, data is stored and processed close to where it’s collected.

For example, a smart lightbulb or fridge can communicate with the intelligent edge instead of contacting a data center. Staying in constant contact with the core is unnecessary for devices that don’t change much from minute to minute.
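The report-on-change idea behind that example can be sketched in a few lines: a device contacts the edge only when its state actually changes, rather than streaming every reading to a distant core. The sensor values below are invented for illustration:

```python
# Illustrative sketch of report-on-change at the edge: a smart device
# sends a reading only when its state differs from the last report,
# instead of contacting the core every polling interval.

def changed_states(readings):
    """Yield only readings that differ from the previous one."""
    last = None
    for r in readings:
        if r != last:
            yield r
            last = r

# A fridge temperature polled every minute: mostly unchanged.
minute_by_minute = [4, 4, 4, 4, 5, 5, 4, 4, 4]
reports = list(changed_states(minute_by_minute))
print(f"{len(minute_by_minute)} readings -> {len(reports)} reports: {reports}")
```

For slow-changing devices like lightbulbs and fridges, this simple filter eliminates the vast majority of traffic that would otherwise travel to the core.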

Another way the intelligent edge boosts efficiency is by reducing the time needed to analyze and act on vital information. This, in turn, can lead to enhanced business intelligence that informs and empowers stakeholders. It all gets done faster and more efficiently than with traditional IT architectures and operations.

For instance, imagine that an organization serves a large customer base from several locations. By deploying an intelligent edge infrastructure, the organization could collect and analyze customer data in real time.

Businesses that gain insights from the edge instead of from the core can also respond quickly to market changes. For example, an energy company could analyze power consumption and weather conditions at the edge (down to the neighborhood), then predict whether a power outage is likely.

Similarly, a retailer could use the intelligent edge to support inventory management and analyze customers’ shopping habits. Using that data, the retailer could then offer customized promotions to particular customers, or groups of customers, all in real time.

The intelligent edge can also be used to enhance public infrastructure. For instance, smart cities can gather data that helps inform lighting, public safety, maintenance and other vital services. That data could then be used for preventive maintenance or to allocate city resources and services as needed.

Edge intelligence

As artificial intelligence (AI) becomes increasingly ubiquitous, many organizations are deploying machine learning (ML) models at the edge to help analyze data and deliver insights in real time.

In one use case, running AI and ML systems at the edge can help an organization reduce the service interruptions that often come with transferring large data sets to and from the cloud. The intelligent edge keeps things running locally, giving distant data centers a chance to catch up. This, in turn, can help the organization provide a better experience for the employees and customers who rely on that data.
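A minimal sketch of that idea, with hypothetical names throughout: inference runs at the edge so insights arrive immediately, and results are queued locally until the core data center is reachable again. The `classify_locally` function is a stand-in for a real ML model deployed at the edge.

```python
from collections import deque

def classify_locally(sample):
    # Stand-in for an ML model deployed at the edge; a real deployment
    # would load a trained model rather than use a fixed threshold.
    return "anomaly" if sample > 0.8 else "normal"

class EdgeInference:
    """Sketch: inference happens at the edge in real time; results are
    queued and synced to the core only when a connection is available."""

    def __init__(self):
        self.pending = deque()   # results awaiting sync to the core

    def process(self, sample, core_online):
        label = classify_locally(sample)    # insight delivered immediately
        self.pending.append((sample, label))
        if core_online:                     # distant data center catches up
            synced = list(self.pending)
            self.pending.clear()
            return label, synced
        return label, []                    # keep running despite the outage

edge = EdgeInference()
edge.process(0.9, core_online=False)        # still works while offline
label, synced = edge.process(0.2, core_online=True)
```

Note that the offline result isn't lost: it's held in the local queue and delivered in the next sync, so the cloud eventually sees everything without ever being on the critical path.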

Deploying AI at the edge can also help with privacy, security and compliance issues. Transferring data to and from the core presents an opportunity for hackers to intercept data in transit. Eliminating this data transfer deprives cyber criminals of a threat vector they could otherwise exploit.

Part 2 of this two-part blog series dives deep into the biggest, most popular use of the intelligent edge today—namely, the internet of things (IoT). We also look at the technology that powers the intelligent edge, as well as what the future may hold for this emerging technology.


Related Content

Supermicro introduces edge, telco servers powered by new AMD EPYC 8004 processors


Supermicro has introduced five Supermicro H13 WIO and short-depth servers powered by the new AMD EPYC 8004 Series processors. These servers are designed for intelligent edge and telco applications.


Supermicro is supporting the new AMD EPYC 8004 Series processors (previously code-named Siena) on five Supermicro H13 WIO and short-depth telco servers. Taking advantage of the new AMD processor, these new single-socket servers are designed for use with intelligent edge and telco applications.

The new AMD EPYC 8004 processors enjoy a broad range of operating temperatures and can run at lower DC power levels, thanks to their energy-efficient ‘Zen4c’ cores. Each processor features from 8 to 64 simultaneous multithreading (SMT) capable ‘Zen4c’ cores.

The new AMD processors also run quietly. With a TDP as low as 80W, the CPUs don’t need much in the way of high-speed cooling fans.

Compact yet capacious

Supermicro’s new 1U short-depth servers are designed with I/O in the front and a form factor that’s compact yet still offers room for three PCIe 5.0 slots. They also offer the option of running on either AC or DC power.

The short-depth systems also feature a NEBS-compliant design for telco operations. NEBS, short for Network Equipment Building System, is an industry requirement for the performance levels of telecom equipment.

The new WIO servers use Titanium Level power supplies for increased energy efficiency, which Supermicro says will deliver higher performance per watt for the entire system.

Supermicro WIO systems offer a wide range of I/O options to deliver optimized systems for specific requirements. Users can optimize the storage and networking alternatives to accelerate performance, increase efficiency and find the perfect fit for their applications.

Here are Supermicro’s five new models:

  • AS -1015SV-TNRT: Supermicro H13 WIO system in a 1U format
  • AS -1115SV-TNRT: Supermicro H13 WIO system in a 1U format
  • AS -2015SV-TNRT: Supermicro H13 WIO system in a 2U format
  • AS -1115S-FWTRT: Supermicro H13 telco/edge short-depth system in a 1U format, running on AC power and including system-management features
  • AS -1115S-FDWTRT: Supermicro H13 telco/edge short-depth system in a 1U format, this one running on DC power

Shipments of the new Supermicro servers supporting AMD EPYC 8004 processors start now.


Related Content

Meet the new AMD EPYC 8004 family of CPUs


The new 4th gen AMD EPYC 8004 family extends the ‘Zen4c’ core architecture into lower-count processors with TDP ranges as low as 80W. The processors are designed especially for edge-server deployments and form factors.


AMD has introduced a family of EPYC processors for space- and power-constrained deployments: the 4th Generation AMD EPYC 8004 processor family. Formerly code-named Siena, these lower core-count CPUs can be used in traditional data centers as well as for edge compute, retail point-of-sale and running a telco network.

The new AMD processors have been designed to run at the edge with better energy efficiency and lower operating costs. The CPUs enjoy a broad range of operating temperatures and can run at lower DC power levels, thanks to their energy-efficient ‘Zen4c’ cores. These new CPUs also run quietly. With a TDP as low as 80W, the CPUs don’t need much in the way of high-speed cooling fans.

The AMD EPYC 8004 processors are purpose-built to deliver high performance and are energy-efficient in an optimized, single-socket package. They use the new SP6 socket. Each processor features from 8 to 64 simultaneous multithreading (SMT) capable ‘Zen4c’ cores.

AMD says these features, along with a streamlined memory and I/O feature set, let servers based on this new processor family deliver compelling system cost/performance metrics.

Heat-tolerant

The AMD EPYC 8004 family is also designed to run in environments with fluctuating and at times high ambient temperatures. That includes outdoor “smart city” settings and NEBS-compliant communications network sites. (NEBS, short for Network Equipment Building System, is an industry requirement for the performance levels of telecom equipment.) What AMD is calling “NEBS-friendly” models have an operating range of -5 C (23 F) to 85 C (185 F).

The new AMD processors can also run in deployments where both the power levels and available physical space are limited. That can include smaller data centers, retail stores, telco installations, and the intelligent edge.

The efficiency gains are impressive. Using the SPECpower benchmark, which measures power efficiency, the AMD EPYC 8004 CPUs deliver more than 2x the energy efficiency of the top competitive product for telco workloads. AMD says this can result in 34% lower energy costs over five years, saving organizations thousands of dollars.
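That five-year savings claim is easy to sanity-check with back-of-the-envelope arithmetic. In the sketch below, the server power draw and electricity price are illustrative assumptions, not AMD figures; only the 34% savings rate comes from the article.

```python
# Hedged sanity check of the five-year energy-cost claim. The 300W draw
# and $0.15/kWh price are assumptions for illustration only.
def five_year_energy_cost(avg_watts, price_per_kwh):
    hours = 5 * 365 * 24                  # five years of 24/7 operation
    return avg_watts / 1000 * hours * price_per_kwh

baseline = five_year_energy_cost(avg_watts=300, price_per_kwh=0.15)
epyc_8004 = baseline * (1 - 0.34)         # 34% lower energy costs
savings = baseline - epyc_8004
# At these assumed rates, the baseline is about $1,971 over five years,
# so a 34% reduction saves roughly $670 per server. Across a rack or a
# fleet, that quickly adds up to thousands of dollars.
```

Change the assumed wattage or electricity price to match your own environment; the percentage saving scales linearly with both.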

Multiple models

In all, the AMD EPYC 8004 family currently offers 12 SKUs. Those ending with the letter “P” support single-CPU designs. Those ending in “PN” support NEBS-friendly designs and offer broader operating temperature ranges.

The various models offer a choice of 8, 16, 24, 32, 48 or 64 ‘Zen4c’ cores; from 16 to 128 threads; and L3 cache sizes ranging from 32MB to 128MB. All the SKUs offer 6 channels of DDR memory with a maximum capacity of 1.152TB; a maximum DDR5 frequency of 4800 MHz; and 96 lanes of PCIe Gen 5 connectivity. Security features are offered by AMD Infinity Guard.
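The SKU naming convention above can be captured in a tiny helper. This is a toy illustration of the convention, not an AMD tool; only the suffix meanings come from the description above.

```python
def classify_epyc_8004_sku(model):
    """Classify an AMD EPYC 8004 SKU by its suffix: 'P' marks a
    single-CPU design, 'PN' a NEBS-friendly single-CPU design with a
    broader operating-temperature range. (Toy helper, not an AMD API.)"""
    if model.endswith("PN"):
        return "single-socket, NEBS-friendly"
    if model.endswith("P"):
        return "single-socket"
    return "unknown suffix"

# Example: the 8324P and its NEBS-friendly sibling, the 8324PN.
standard = classify_epyc_8004_sku("8324P")
nebs = classify_epyc_8004_sku("8324PN")
```

Note the order of the checks: “PN” must be tested before “P”, since every model name is matched by suffix.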

Selected AMD partners have already announced support for the new EPYC 8004 family. This includes Supermicro, which introduced new WIO and short-depth servers based on the new AMD processors for diverse data center and edge deployments.
