Sponsored by AMD and Supermicro

How ILM creates visual effects faster & cheaper with AMD-powered Supermicro hardware


ILM, the visual-effects company founded by George Lucas, is using AMD-powered Supermicro servers and workstations to create the next generation of special effects for movies and TV.


AMD and Supermicro are helping Industrial Light & Magic (ILM) create the future of visual movie and TV production.

ILM is the visual-effects company founded by George Lucas in 1975. Today it’s still on the lookout for better, faster tech. And to get it, ILM leans on Supermicro for its rackmount servers and workstations, and AMD for its processors.

The servers help ILM reduce render times. And the workstations enable better collaboration and storage solutions that move data faster and more efficiently.

All that high-tech gear comes together to help ILM create some of the world’s most popular TV series and movies. That includes “Obi-Wan Kenobi,” “Transformers” and “The Book of Boba Fett.”

It’s a huge task. But hey, someone’s got to create all those new universes, right?

Power hungry—and proud of it

No one gobbles up compute power quite like ILM. Sure, it may have all started with George Lucas dropping an automotive spring on a concrete floor to create the sound of the first lightsaber. But these days, it’s all about the 1s and 0s—a lot of them.

An enormous amount of compute power goes into rendering computer-generated imagery (CGI) like special effects and alien characters. So much power, in fact, that it can take weeks or even months to render an entire movie’s worth of eye candy.

Rendering takes not only time, but also money and energy. Those are the three resources that production companies like ILM must ration. They’re under pressure to manage cash flow and keep to tight production schedules.

By deploying Supermicro’s high-performance multinode servers powered by AMD EPYC processors, ILM gains high core counts and maximum throughput—two crucial ingredients of faster rendering.

Modern filmmakers are also obliged to manage data. Storing and moving terabytes of rendering and composition information is a constant challenge, especially when you’re trying to do it quickly and securely.

The solution to this problem comes in the form of high-performance storage and networking devices. They can shift vast swaths of information from here to there without bottlenecks, overheating or (worst-case scenario) total failure.

EPYC stories

This is the part of the story where CPUs take back some of the spotlight. GPUs have been stealing the show ever since data scientists discovered that graphic processors are the keys to unlocking the power of AI. But producing the next chapter of the “Star Wars” franchise means playing by different rules.

AMD EPYC processors play a starring role in ILM’s render farms. Render farms are big collections of networked server-class computers that work as a team to crunch a metric ton of data.

A typical ILM render farm might contain dozens of high-performance computers like the Supermicro BigTwin. This dual-node processing behemoth can house two 3rd gen AMD EPYC processors, 4TB of DDR4 memory per node and a dozen 2.5-inch hot-swappable solid-state drives (SSDs). In case the specs don’t speak for themselves, that’s an insane amount of power and storage.

For ILM, lighting and rendering happen inside Clarisse, an application by Isotropix. Our hero, Clarisse, relies on CPU rather than GPU power. And unlike most 3D apps, which are largely single-threaded, Clarisse features unusually efficient multi-threading.

This lets the application take advantage of the parallel-processing power in AMD’s EPYC CPUs to complete more tasks simultaneously. The results: shorter production times and lower costs.

Coming soon: StageCraft

ILM is taking its tech show on the road with an end-to-end virtual production solution called StageCraft. It exists both as a series of Los Angeles- and Vancouver-based sites—ILM calls them “volumes”—and as mobile pop-up volumes that can be deployed anywhere in the United States and Europe.

The introduction of StageCraft is interesting for a couple of reasons. For one, this new production environment makes ILM’s AMD-powered magic wand accessible to a wider range of directors, producers and studios.

For another, StageCraft could catalyze the proliferation of cutting-edge creative tech. This, in turn, could lead to the same kind of competition, efficiency increases and miniaturization that made 4K filmmaking a feature of everyone’s mobile phones.

StageCraft could also usher in a new visual language. The more people with access to high-tech visualization technology, the more likely it is that some unknown aspiring auteur will pop up, seemingly out of nowhere, to change the nature of entertainment forever.

Kinda like how George Lucas did it back in the day.


A hospital’s diagnosis: Professional AI workloads require professional hardware


A Taiwanese hospital’s initial use of AI to interpret medical images with consumer graphics cards fell short. The prescription? Supermicro workstations powered by AMD components. 


A Taiwanese hospital has learned that professional AI workloads are too much for consumer-level hardware to handle—and that pro-level workloads require pro-level systems.

When Shuang-Ho Hospital first used AI to interpret medical images, it relied on consumer graphics cards installed on desktop PCs. But staff found that for diagnostics imaging, the graphics cards performed poorly. Plus, the memory capacity of the PCs was insufficient. The result: image resolution too low to be useful.

The hospital, affiliated with Taipei Medical University, offers a wide range of services, including reproductive medicine, a sleep center, and treatment for cancer, dementia and strokes. It opened in 2008 and is located in New Taipei City.

In its quest to use AI for healthcare, Shuang-Ho Hospital is far from alone. Last year, global sales of healthcare AI totaled $15.4 billion, estimates Grand View Research. Looking ahead, the market watcher expects healthcare AI sales through 2030 to enjoy a compound annual growth rate (CAGR) of nearly 38%.

A subset of that market, AI for diagnostic imaging, is a big and fast-growing field. The U.S. government has approved nearly 400 AI algorithms for radiology, according to the American Hospital Association. And the need is great. The World Economic Forum estimates that of all the data produced by hospitals each year—roughly 50 petabytes—97% goes unused.

‘Just right’

Shuang-Ho Hospital knew it needed an AI system that was more robust. But initially it wasn’t sure where to turn. A Supermicro demo changed all that. “The workstation presented by Supermicro was just right for our needs,” says Dr. Yen-Ting Chen, an attending physician in the hospital’s medical imaging department.

Supermicro’s solution for the hospital was its AS-5014-TT SuperWorkstation, powered by AMD’s Ryzen Threadripper Pro 3995WX processor and equipped with a pair of AMD Radeon Pro W6800 professional graphics cards. This tower workstation is optimized for applications that include AI and deep learning.

For the hospital, one especially appealing feature is the Supermicro workstation’s use of a multicore processor that can be paired with multiple GPU cards. The AMD Threadripper Pro has 64 cores, and each of the hospital’s Supermicro workstations was configured with two GPUs.

Another attractive feature had nothing to do with tech specs. “The price was very reasonable,” says Dr. Yen-Ting Chen. “It naturally became our best choice.”

Smart tech, healthier brains

Now that Shuang-Ho Hospital has the AMD-powered Supermicro workstations installed, the advantages of a professional system over consumer products have become even clearer. For one, AI training performs much better than it did with the consumer cards.

Even more important, the images from brain tomography, which with the consumer cards had to be degraded, can now be used at full resolution. (Tomography is an approach to imaging that combines scans taken from different angles to create cross-sectional “slices.”)

For now, the hospital is using the Supermicro workstations to help interpret scans for cerebral thrombosis, a serious health condition involving a blood clot in a vein of the brain. Learnings from this first AI workload are being shared with other departments.

Long-term, the hospital plans to use AI wherever the technology can help. And this time, with strictly professional hardware.


How to help your customers invest in AI infrastructure


The right AI infrastructure can help your customers turn data into actionable information. But building and scaling that infrastructure can be challenging. Find out why—and how you can make it easier. 


Get smarter about helping your customers create an infrastructure for AI systems that turn their data into actionable information.

A new Supermicro white paper, Investing in AI Infrastructure, shows you how.

As the paper points out, creating an AI infrastructure is far from easy.

For one, there’s the risk of underinvesting. Market watcher IDC estimates that AI will soon represent 10% to 15% of the typical organization’s total IT infrastructure. Organizations that fall short here could also fall short on delivering critical information to the business.

Sure, your customers could use cloud-based AI to test and ramp up. But cloud costs can rise fast. As The Wall Street Journal recently reported, some CIOs have even established internal teams to oversee and control their cloud spending. That makes an on-prem AI data center a viable option.

“Every time you run a job on the cloud, you’re paying for it,” says Ashish Nadkarni, general manager of infrastructure systems, platforms and technologies at IDC. “Whereas on-premises, once you buy the infrastructure components, you can run applications multiple times.”

Some of those cloud costs come from data-transfer fees. First, data needs to be entered into a cloud-based AI system; this is known as ingress. And once the AI’s work is done, you’ll want to transfer the new data somewhere else for storage or additional processing, a process known as egress.

Cloud providers typically charge 5 to 20 cents per gigabyte of egress. For casual users, that may be no big deal. But for an enterprise using massive amounts of AI data, it can add up quickly.
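The arithmetic is simple but sobering. A quick sketch, using a hypothetical data volume and a mid-range $0.09/GB rate (actual pricing varies by provider and tier):

```python
def egress_cost(gigabytes: float, rate_per_gb: float) -> float:
    """Return the data-transfer-out (egress) charge in dollars."""
    return gigabytes * rate_per_gb

# Moving 500 TB of AI output out of the cloud at $0.09/GB:
cost = egress_cost(500 * 1000, 0.09)  # 500 TB = 500,000 GB
print(f"${cost:,.0f}")                # $45,000
```

Run that transfer monthly and the fees alone rival the cost of on-prem hardware.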

4 questions to get started

But before your customer can build an on-prem infrastructure, they’ll need to first determine their AI needs. You can help by gathering all stakeholders and asking 4 big questions:

  • What are the business challenges we’re trying to solve?
  • Which AI capabilities and capacities can deliver the solutions we’ll need?
  • What type of AI training will we need to deliver the right insights from our data?
  • What software will we need?

Keep your customer’s context in mind, too. That might include their industry. After all, a retailer has different needs than a manufacturer. It could also include their current technology. A company with extensive edge computing has different data needs than one without edge devices.

“It’s a matter of finding the right configuration that delivers optimal performance for the workloads,” says Michael McNerney, VP of marketing and network security at Supermicro.

Help often needed

One example of an application-optimized system for AI training is the Supermicro AS-8125GS-TNHR, which is powered by dual AMD EPYC 9004 Series processors. Another option is the Supermicro Universal GPU line of systems, which support AMD’s Instinct MI250 accelerators.

The Universal GPU systems’ modular architecture helps standardize AI infrastructure design for scalability and power efficiency, even given the complex workloads and workflow requirements enterprises face across AI, data analytics, visualization, simulation and digital twins.

Accelerators work with traditional CPUs to enable greater computing power, yet without slowing the system. They can also shave milliseconds off AI computations. While that may not sound like much, over time those milliseconds “add up to seconds, minutes, hours and days,” says Matt Kimball, a senior analyst at Moor Insights & Strategy.

Roll with partner power

To scale AI across an enterprise, you and your customers will likely need partners. Scaling workloads for critical tasks isn’t easy.

For one, there’s the challenge of getting the right memory, storage and networking capabilities to meet the new high-performance demands. For another, there’s the challenge of finding enough physical space, then providing the necessary electric power and cooling.

Tech suppliers including Supermicro are standing by to offer you agile, customizable and scalable AI architectures.

Learn more from the new Supermicro white paper: Investing in AI Infrastructure.

 


Tech Explainer: How does generative AI generate?


Generative AI systems such as ChatGPT are grabbing the headlines. Find out how this super-smart technology actually works. 


Generative AI refers to a type of artificial intelligence that can create or generate new content, such as images, music, and text, based on patterns learned from large amounts of data. Generative AI models are designed to learn the underlying distribution of a dataset and then use this knowledge to generate new samples that are similar to those in the original dataset.

This emerging tech is well on its way to becoming a constant presence in everyday life. In fact, the preceding paragraph was generated by ChatGPT. Did you notice?

The growth of newly minted household names like ChatGPT may be novel, headline-grabbing news today. But soon they should be so commonplace, they’ll hardly garner a sidebar in Wired magazine.

So, if the AI bots are here to stay, what makes them tick?

Generating intelligence

Generative AI—the AI stands for artificial intelligence, but you knew that already—lets a user generate content quickly from various types of inputs, including text, sounds, images, animations and 3D models. Its outputs can take the same forms.

Data scientists have been working on generative AI since the mid-1960s. That’s when Joseph Weizenbaum created the ELIZA chatbot. A bot is a software application that runs automated tasks, usually in a way that simulates human activity.

ELIZA, considered the world’s first generative AI, was programmed to respond to human statements almost like a therapist would. However, the program did not actually understand what was being said.

Since then, we’ve come a long way. Today’s modern generative AI feeds on large language models (LLMs) that bear only a glimmer of resemblance to the relative simplicity of early chatbots. These LLMs contain billions, even trillions, of parameters, the aggregate of which provides limitless permutations that enable AI models to learn and grow.

AI image generators like the popular DALL-E or Fotor can produce images based on small amounts of text. Type “red tuba on a rowboat on Lake Michigan,” and voilà! An image appears in seconds.

Beneath the surface

The human interface of an AI bot such as ChatGPT may be simple, but the technical underpinnings are complex. The process of parsing, learning from and responding to our input is so resource-intensive, it requires powerful computers, often churning incredible amounts of data 24x7.

These computers use graphics processing units (GPUs) to power neural networks, which identify patterns and structures within existing data and use them to generate original content.

GPUs are particularly good at this task because they can contain thousands of cores. Each individual core can complete only one task at a time, but it works in parallel with all the other cores in the GPU to collectively process huge data sets.

How generative AI generates...stuff

Today’s data scientists rely on multiple generative AI models. These models can be deployed discretely or combined to create new models greater—and more powerful—than the sum of their parts.

Here are the three most common AI models in use today:

  • Diffusion models use a two-step process: forward diffusion and reverse diffusion. Forward diffusion adds noise to training data; reverse diffusion removes that noise to reconstruct data samples. This learning process allows the AI to generate new data that, while similar to the original data, also includes unique variations.
    • For instance, to create a realistic image, a diffusion model can take in a random set of pixels and gradually refine them. It’s similar to the way a photograph shot on film develops in the darkroom, becoming clearer and more defined over time.
  • Variational autoencoders (VAEs) use two neural networks, the encoder and the decoder. The encoder creates new versions of the input data, keeping only the information necessary to perform the decoding process. Combining the two processes teaches the AI how to create simple, efficient data and generate novel output.
    • If you want to create, say, novel images of human faces, you could show the AI an original set of faces; then the VAE would learn their underlying patterns and structures. The VAE would then use that information to create new faces that look like they belong with the originals.
  • Generative adversarial networks (GANs) were the most commonly used model until diffusion models came along. A GAN plays two neural networks against each other. The first network, called the generator, creates data and tries to trick the second network, called the discriminator, into believing that data came from the real world. As this feedback loop continues, both networks learn from their experiences and get better at their jobs.
    • Over time, the generator can become so good at fooling the discriminator that it is finally able to create novel texts, audio, images, etc., that can also trick humans into believing they were created by another human.
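The forward half of a diffusion model’s two-step process can be sketched in a few lines of Python. This is a toy illustration with made-up noise levels, not a production model, but it shows how repeated noising gradually destroys the original signal, which is exactly what reverse diffusion learns to undo:

```python
import math
import random

def forward_diffusion(x, betas, seed=0):
    """Toy forward diffusion: blend each value toward Gaussian noise.

    Each beta is the fraction of noise mixed in at that step. After
    enough steps, the data is nearly pure noise.
    """
    rng = random.Random(seed)
    for beta in betas:
        x = [math.sqrt(1.0 - beta) * v + math.sqrt(beta) * rng.gauss(0.0, 1.0)
             for v in x]
    return x

clean = [1.0, -0.5, 0.25, 0.0]                   # a "clean" data sample
noisy = forward_diffusion(clean, betas=[0.1] * 50)
```

Training the reverse process, which predicts and removes that noise step by step, is where the heavy compute described earlier comes in.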

Words, words, words

It’s also important to understand how generative AI forms word relationships. In the case of a large language model such as ChatGPT, the AI includes a transformer. This is a mechanism that provides a larger context for each individual element of input and output, such as words, graphics and formulas.

The transformer does this by using an encoder to determine the semantics and position of, say, a word in a sentence. It then employs a decoder to derive the context of each word and generate the output.

This method allows generative AI to connect words, concepts and other types of input, even if the connections must be made between elements that are separated by large groups of unrelated data. In this way, the AI interprets and produces the familiar structure of human speech.
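The attention step at the heart of a transformer can be reduced to a few lines under simplifying assumptions (a single query vector, fixed rather than learned weights): score each context element against the query, softmax the scores into weights, then blend the value vectors. Real models use learned projection matrices and many parallel attention heads.

```python
import math

def attention(query, keys, values):
    """Minimal scaled dot-product attention for one query vector."""
    d = len(query)
    # Score each key by its similarity to the query (scaled dot product).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Softmax turns scores into weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    weights = [e / sum(exps) for e in exps]
    # Output is a relevance-weighted blend of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

blended = attention([1.0, 0.0],
                    [[1.0, 0.0], [0.0, 1.0]],   # keys
                    [[1.0, 0.0], [0.0, 1.0]])   # values
```

Because every score can be computed independently, this is precisely the kind of math GPUs chew through in parallel.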

The future of generative AI

When discussing the future of these AI models and how they’ll impact our society, two words continually get mentioned: learning and disruption.

It’s important to remember that these AI systems spend every second of every day learning from their experiences, growing more intelligent and powerful. That’s where the term machine learning (ML) comes into play.

This type of learning has the potential to upend entire industries, catalyze wild economic fluctuations, and take on many jobs now done by humans.

On the bright side, AI may also become smart enough to help us cure cancer and reverse climate change. And if AI has to take our jobs, perhaps it can also figure out a way to provide income for all.

 


Do you know why 64 cores really matters?


In a recent test, Supermicro workstations and servers powered by 3rd gen AMD Ryzen Threadripper PRO processors ran engineering simulations nearly as fast as a dual-processor system, but needed only two-thirds as much power.


More cores per CPU sounds good, but what does it actually mean for your customers?

In the case of certain Supermicro workstations and servers powered by 3rd gen AMD Ryzen Threadripper PRO processors, it means running engineering simulations with dual-processor performance from a single-socket system, while consuming only about two-thirds as much power. That lower draw brings further cost savings.

That’s according to tests recently conducted by MVConcept, a consulting firm that provides hardware and software optimizations. The firm tested two Supermicro systems, the AS-5014A-TT SuperWorkstation and AS-2114GT-DPNR server.

A solution brief based on MVConcept’s testing is now available from Supermicro.

Test setup

For these tests, the Supermicro server and workstation were both tested in two AMD configurations:

  • One with the AMD Ryzen Threadripper PRO 5995WX processor
  • The other with an older, 2nd gen AMD Ryzen Threadripper PRO 3995WX processor

In the tests, both AMD processors were used to run 32-core as well as 64-core operations.

The Supermicro systems were tested running Ansys Fluent, fluid simulation software from Ansys Inc. Fluent models fluid flow, heat, mass transfer and chemical reactions. Benchmarks for the testing included aircraft wing, oil rig and pump.

The results

Among the results: The Supermicro systems delivered nearly dual-CPU performance with a single processor, while also consuming less electricity.

What’s more, the 3rd generation AMD 5995WX CPU delivered significantly better performance than the 2nd generation AMD 3995WX.

Systems with larger caches saw the biggest performance gains: a system with 256MB of L3 cache outperformed one with just 128MB.

BIOS settings proved especially important for getting optimal performance from the AMD Ryzen Threadripper PRO when running the tested applications. Specifically, Supermicro recommends using NPS=4 and SMT=OFF when running Ansys Fluent with AMD Ryzen Threadripper PRO. (NPS = NUMA, or non-uniform memory access, nodes per socket; SMT = simultaneous multithreading.)

Another cool factor involves taking advantage of the Supermicro AS-2114GT-DPNR server’s two hot-pluggable nodes. First, one node can be used to pre-process the data. Then the other node can be used to run Ansys Fluent.

Put it all together, and you get a powerful takeaway for your customers: These AMD-powered Supermicro systems offer data-center power on both the desktop and server rack, making them ideal for SMBs and enterprises alike.


Tech Explainer: What is AI Training?


Although AI systems are smart, they still need to be trained. The process isn’t easy. But it’s pretty straightforward with just 3 main steps.


Artificial intelligence (AI) training is the process of teaching an AI system to perceive, interpret and learn from data. That way, the AI will later be capable of inferencing—making decisions based on information it’s provided.

This type of training requires 3 important components: a well-designed AI model; large amounts of high-quality and accurately annotated data; and a powerful computing platform.

Properly trained, an AI’s potential is nearly limitless. For example, AI models can help anticipate our wants and needs, autonomously navigate big cities, and produce scientific breakthroughs.

It’s already happening. You experience the power of well-trained AI when you use Netflix’s recommendation engine to help decide which TV show or movie you want to watch next.

Or you can ride with AI in downtown Phoenix, Ariz. It’s home to the robotaxi service operated by Waymo, the autonomous-vehicle developer owned by Google’s parent company, Alphabet.

And let’s not forget ChatGPT, the current belle of the AI ball. This year has seen its fair share of fascination and fawning over this new generative AI, which can hold a remarkably human conversation and regurgitate every shred of information the internet offers—regardless of its accuracy.

AI can also be used for nefarious purposes, such as creating weapons, methods of cybercrime and tools that some nation states use to surveil and control their citizens. As is true for most technologies, it’s the humans who wield AI who get to decide whether it’s used for good or evil.

3 steps to train AI

AI training is technically demanding. But years of research aided by the latest technology are helping even novice developers harness the power of original AI models to create new software like indie video games.

The process of training enterprise-level AI, on the other hand, is incredibly difficult. Data scientists may spend years creating a single new AI model and training it to perform complex tasks such as autonomous navigation, speech recognition and language translation.

Assuming you have the programming background, technology and financing to train your desired type of AI, the 3-step process is straightforward:

Step 1: Training. The AI model is fed massive amounts of data, then asked to make decisions based on the information. Data scientists analyze these decisions and make adjustments based on the AI output’s accuracy.

 

Step 2: Validation. Trainers validate their assumptions based on how the AI performs when given a new data set. The questions they ask include: Does the AI perform as expected? Does the AI need to account for additional variables? Does the AI suffer from overfitting, a problem that occurs when a machine learning model memorizes data rather than learning from it?

 

Step 3: Testing. The AI is given a novel dataset without the tags and targets initially used to help it learn. If the AI can make accurate decisions, it passes the test. If not, it’s back to step 1.
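The three steps rest on splitting the available data into separate training, validation and test sets, so the AI is always judged on data it hasn’t memorized. A minimal sketch, assuming illustrative 70/15/15 proportions:

```python
import random

def split_dataset(samples, seed=0, train_frac=0.70, val_frac=0.15):
    """Shuffle, then carve data into training, validation and test sets."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    return (shuffled[:n_train],                    # step 1: training
            shuffled[n_train:n_train + n_val],     # step 2: validation
            shuffled[n_train + n_val:])            # step 3: testing

train_set, val_set, test_set = split_dataset(range(100))
```

Holding the test set back until the end is what catches overfitting: a model that memorized the training data will stumble on samples it has never seen.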

Future of AI Training

New AI training theories are coming online quickly. As the market heats up and AI continues to find its way out of the laboratory and onto our computing devices, Big Tech is working feverishly to make the most of the latest gold rush.

One new AI training technique coming to prominence is known as Reinforcement Learning (RL). Rather than teaching an AI model using a static dataset, RL trains the AI as though it were a puppy, rewarding the system for a job well done.

Instead of offering doggie treats, however, RL gives the AI a line of code known as a “reward function.” This is a dynamic and powerful training method that some AI experts believe will lead to scientific breakthroughs.
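One of the simplest illustrations of the idea is an epsilon-greedy bandit, a textbook RL exercise rather than anything a production lab would deploy. The reward function here is just noisy feedback drawn around each action’s hidden value, and the agent gradually learns which action pays best:

```python
import random

def train_bandit(true_values, steps=5000, epsilon=0.1, seed=0):
    """Learn each action's value purely from rewards."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_values)
    counts = [0] * len(true_values)
    for _ in range(steps):
        if rng.random() < epsilon:      # occasionally explore at random
            action = rng.randrange(len(true_values))
        else:                           # otherwise exploit the best guess
            action = max(range(len(estimates)), key=estimates.__getitem__)
        # The "reward function": noisy feedback around the hidden value.
        reward = true_values[action] + rng.gauss(0.0, 0.1)
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

est = train_bandit([0.2, 0.8, 0.5])
best_action = max(range(len(est)), key=est.__getitem__)
```

No one hands the agent the answer; the stream of rewards alone steers it toward the best choice, which is the essence of RL.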

Advances in AI training, high-performance computing and data science will continue to make our sci-fi dreams a reality. For example, one AI can now teach other AI models. One day, this could make AI training just another autonomous process.

Will the next era of AI bring about the altruism of Star Trek or the evil of The Matrix? One thing’s likely: We won’t have to wait long to find out.

 


AMD and Supermicro Sponsor Two Fastest Linpack Scores at SC22’s Student Cluster Competition


The Student Cluster Competition made its 16th appearance at the Supercomputing 2022 (SC22) event in Dallas. The two student teams running AMD EPYC™ CPUs and AMD Instinct™ GPUs were the two teams that aced the Linpack benchmark. That's the test used to determine the TOP500 supercomputers in the world.


Last month, the annual Supercomputing Conference 2022 (SC22) was held in Dallas. With it came the Student Cluster Competition (SCC), which began in 2007. The SCC offers an immersive high-performance computing (HPC) experience to undergraduate and high school students.

 

According to the SC22 website: “Student teams design and build small clusters, learn scientific applications, apply optimization techniques for their chosen architectures and compete in a non-stop, 48-hour challenge at the SC conference to complete real-world scientific workloads, showing off their HPC knowledge for conference attendees and judges.”

 

Each team consists of six students, including a student team leader, and at least one faculty advisor, and is associated with vendor sponsors, which provide the equipment. AMD and Supermicro jointly sponsored both the Massachusetts Green Team from MIT, Boston University and Northeastern University, and the 2MuchCache team from UC San Diego (UCSD) and the San Diego Supercomputer Center (SDSC). Running AMD EPYC™ CPUs and AMD Instinct™-based GPUs supplied by AMD and Supermicro, the two teams came in first and second on the SCC Linpack test.

 

The Linpack benchmarks measure a system's floating-point computing power, according to Wikipedia. The latest version of these benchmarks is used to determine the TOP500 list, which ranks the world's most powerful supercomputers.
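For a feel of what “floating-point computing power” means, here is a crude, Linpack-flavored probe in pure Python. It is purely illustrative; the real benchmark solves a dense system of linear equations with heavily optimized code.

```python
import time

def matmul_gflops(n=120):
    """Time an n x n matrix multiply and report GFLOP/s.

    A naive multiply performs roughly 2*n^3 floating-point operations,
    the same kind of work Linpack measures at vastly larger scale.
    """
    a = [[1.0] * n for _ in range(n)]
    b = [[1.0] * n for _ in range(n)]
    start = time.perf_counter()
    c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    elapsed = time.perf_counter() - start
    assert c[0][0] == float(n)  # sanity check on the result
    return (2 * n ** 3) / elapsed / 1e9

rate = matmul_gflops()
```

Pure Python manages only a sliver of a GFLOP/s; optimized HPL runs on EPYC- and Instinct-powered clusters go many orders of magnitude faster, and closing that gap is what the teams spend their 48 hours on.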

 

In addition to chasing high scores on benchmarks, the teams must operate their systems without exceeding a power limit. For 2022, the competition used a variable limit: the power available to each team for its competition hardware was at times as high as 4,000 watts and at times as low as 1,500 watts, but usually sat somewhere in between.

 

The “2MuchCache” team offers a poster page with extensive detail about its competition hardware. The team used two third-generation AMD EPYC™ 7773X CPUs, each with 64 cores, 128 threads and 768MB of stacked-die cache, in one AS-4124GQ-TNMI system with four AMD Instinct™ MI250 GPUs.

 

The “Green Team’s” poster page lists two third-generation AMD EPYC™ 7003-series processors and AMD Instinct™ MI210 GPUs connected by AMD Infinity Fabric. The Green Team utilized two Supermicro AS-4124GS-TNR GPU systems.

 

The Students of 2MuchCache:

  • Longtian Bao, role: Lead for Data Centric Python, Co-lead for HPCG
  • Stefanie Dao, role: Lead for PHASTA, Co-lead for HPL
  • Michael Granado, role: Lead for HPCG, Co-lead for PHASTA
  • Yuchen Jing, role: Lead for IO500, Co-lead for Data Centric Python
  • Davit Margarian, role: Lead for HPL, Co-lead for LAMMPS
  • Matthew Mikhailov Major, role: Team Lead, Lead for LAMMPS, Co-lead for IO500

 

The Students of Green Team:

  • Po Hao Chen, roles: Team leader, theory & HPC, benchmarks, reproducibility
  • Carlton Knox, roles: Computer Arch., Benchmarks, Hardware
  • Andrew Nguyen, roles: Compilers & OS, GPUs, LAMMPS, Hardware
  • Vance Raiti, roles: Mathematics, Computer Arch., PHASTA
  • Yida Wang, roles: ML & HPC, Reproducibility
  • Yiran Yin, roles: Mathematics, HPC, PHASTA

 

Congratulations to both teams!


Perspective: Don’t Back into Performance-Intensive Computing


To compete in the marketplace, enterprises are increasingly employing performance-intensive tools and applications like machine learning, artificial intelligence, data-driven insights and automation to differentiate their products and services. In doing so, they may be unintentionally backing into performance-intensive computing because these technologies are computationally and/or data intensive.

Learn More about this topic
  • Applications:

To compete in the marketplace, enterprises increasingly employ performance-intensive tools and applications to differentiate their products and services: machine learning, artificial intelligence, data-driven insights and decision-support analytics, technical computing, big data, modeling and simulation, cryptocurrency and other blockchain applications, automation, and high-performance computing.

 

In doing so, they may be unintentionally backing into performance-intensive computing because these technologies are computationally and/or data intensive. Without thinking through the compute performance you need as measured against your most demanding workloads – now and at least two years from now – you’re setting yourself up for failure or unnecessary expense. When it comes to performance-intensive computing: plan, don’t dabble.

 

There are questions you should ask before jumping in, too. In the cloud or on-premises? There are pluses and minuses to each. Is your data highly distributed? If so, you’ll need network services that won’t become a bottleneck. There’s a long list of environmental and technology requirements for making performance-intensive computing pay off, among them the ability to scale. And, of course, planning and building out your environment in advance of your need is vastly preferable to stumbling into it.

 

The requirement that sometimes gets short shrift is organizational. Ultimately, this is about revealing data with which your company can make strategic decisions. There’s no longer anything mundane about enterprise technology and especially the data it manages. It has become so important that virtually every department in your company affects and is affected by it.

If you double down on computational performance, the C-suite needs to be fully represented in how you use that power, not just the approval process. Leaving top leadership, marketing, finance, tax, design, manufacturing, HR or IT out of the picture would be a mistake. And those are just sample company building blocks. You also need measurable, meaningful metrics that will help your people determine the ROI of your efforts. Even so, it’s people who make the leap of faith that turns data into ideas.

 

Finally, if you don’t already have the expertise on staff, hire, contract or consult with smart people who clearly have the chops to do this right. You don’t want to be the company with a rocket ship that no one can fly.

 

So, don’t back into performance-intensive computing. But don’t back out of it either. Being able to take full advantage of your data at scale can play an important role in ensuring the viability of your company going forward.

 



Locating Where to Drill for Oil in Deep Waters with Supermicro SuperServers® and AMD EPYC™ CPUs

Featured content

Locating Where to Drill for Oil in Deep Waters with Supermicro SuperServers® and AMD EPYC™ CPUs

Energy company Petrobras, based in Brazil, is using high-performance computing techniques to aid its oil and gas exploration, especially in deep-water situations. Petrobras engaged system integrator Atos to provide more than 250 Supermicro SuperServers. The cluster, named Pegaso, is ranked No. 33 on the current Top500 list.

Learn More about this topic
  • Applications:
  • Featured Technologies:
  • Featured Companies:
  • Atos

Brazilian energy company Petrobras is using high-performance computing techniques to aid its oil and gas exploration, especially in deep-water situations. These techniques can help reduce costs and make finding and extracting new hydrocarbon deposits quicker. Petrobras’ geoscientists and software engineers quickly modify algorithms to take advantage of new capabilities as new CPU and GPU technologies become available.

 

The energy company engaged system integrator Atos to provide more than 250 Supermicro SuperServer AS-4124GO-NART+ servers, each running dual AMD EPYC™ 7512 processors. The cluster goes by the name Pegaso (Portuguese for Pegasus, the mythological winged horse) and is currently listed at No. 33 on the Top500 list of the fastest computing systems. Atos is a global leader in digital transformation with 112,000 employees worldwide. It has built other systems that have appeared on the Top500 list, 38 of them powered by AMD.

 

Petrobras has had three other systems on previous iterations of the Top500 list, using other processors. Pegaso is now the largest supercomputer in South America and is expected to become fully operational next month. Each of its servers runs CentOS and has 2TB of memory, for a cluster total of 678TB. The cluster contains more than 230,000 processor cores, includes more than 2,000 GPUs, and is connected via an InfiniBand HDR network running at 400Gb/s. To give you an idea of how much gear is involved, Pegaso took more than 30 truckloads to deliver and comprises over 30 tons of hardware.
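Those memory figures are internally consistent: at 2TB per server, 678TB of total memory implies 339 servers, which squares with “more than 250.” A quick sanity check (values taken from the paragraph above):

```python
# Sanity-check Pegaso's published figures:
# 678 TB of total memory at 2 TB per server implies the node count.
memory_per_server_tb = 2
total_memory_tb = 678

servers = total_memory_tb // memory_per_server_tb
print(f"{servers} servers")  # 339 servers, consistent with "more than 250"
```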

 

The geophysics team has a series of applications that require all this computing power, including seismic-acquisition apps that collect data, which is then processed to deliver high-resolution subsurface imaging that precisely locates the oil and gas deposits. The GPU accelerators in the cluster help reduce processing time, so that the drilling teams can position their rigs more precisely.

 

For more information, see this case study about Pegaso.


Supermicro H13 Servers Maximize Your High-Performance Data Center

Featured content

Supermicro H13 Servers Maximize Your High-Performance Data Center

Learn More about this topic
  • Applications:
  • Featured Technologies:
  • Featured Companies:
  • AMD

The modern data center must be both highly performant and energy efficient. Massive amounts of data are generated at the edge and then analyzed in the data center. New CPU technologies are constantly being developed to analyze that data, determine the best course of action, and shorten the time it takes to understand the world around us and make better decisions.

As digital transformation continues, a wide range of data-acquisition, storage and computing systems evolve with each CPU generation. The latest CPU generations continue to innovate, both within their core computational units and in the technologies used to communicate with memory, storage devices, networking and accelerators.

Servers, and by extension the CPUs within them, form a continuum of computing and I/O power. The combination of core count, clock rate, memory access and path width suits specific servers to specific workloads. In addition, the server that houses the CPUs may take different form factors to fit environments with airflow or power restrictions. The key for a server manufacturer that wants to address a wide range of applications is a building-block approach to designing new systems. In this way, a range of systems can be released simultaneously in many form factors, each tailored to its operating environment.

The new H13 Supermicro product line, based on 4th Generation AMD EPYC™ CPUs, supports a broad spectrum of workloads and excels at helping a business achieve its goals.

Get speeds, feeds and other specs on Supermicro’s latest line-up of servers
