Nvidia Introduces Eos: Their New DGX Supercomputer Designed for AI

The Eos supercomputer unveiled by Nvidia this week was designed specifically for artificial intelligence workloads. It is ranked 9th on the current TOP500 list and is used by Nvidia's own developers to power their work on large language models (LLMs), recommender systems, quantum simulations and other workloads.

Eos consists of 576 DGX H100 systems on an InfiniBand Quantum-2 network, providing a total of 18.4 exaflops of FP8 performance for AI-related computing. Each DGX H100 system is equipped with eight H100 Tensor Core GPUs. In total, Eos has 1,152 Intel Xeon Platinum 8480C CPUs and 4,608 Nvidia H100 GPUs.
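The headline figures line up with simple back-of-the-envelope arithmetic: each DGX H100 node pairs two Xeon CPUs with eight H100 GPUs, and dividing the quoted 18.4 FP8 exaflops across the GPU count gives roughly 4 petaflops of FP8 per GPU, consistent with Nvidia's published per-H100 figure with sparsity. The short sketch below is just that arithmetic; the variable names are illustrative, not Nvidia's.

```python
# Back-of-the-envelope check of the published Eos figures (illustrative only).
nodes = 576                 # DGX H100 systems
gpus_per_node = 8           # H100 Tensor Core GPUs per DGX H100
cpus_per_node = 2           # Xeon Platinum 8480C CPUs per DGX H100
total_fp8_eflops = 18.4     # aggregate FP8 performance quoted by Nvidia

total_gpus = nodes * gpus_per_node                          # 4,608 GPUs
total_cpus = nodes * cpus_per_node                          # 1,152 CPUs
fp8_pflops_per_gpu = total_fp8_eflops * 1000 / total_gpus   # ~3.99 PFLOPS

print(f"GPUs: {total_gpus}, CPUs: {total_cpus}")
print(f"Implied FP8 per GPU: {fp8_pflops_per_gpu:.2f} PFLOPS")
```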

Eos, a supercomputer designed for AI

Eos, named after the Greek goddess said to open the gates of dawn each day, was first introduced last November at Supercomputing 2023 and will, among other things, serve as a blueprint for the enterprise AI supercomputers Nvidia builds for its customers. "Eos comes at the right time," says the company, which describes it as an always-available "AI factory" that makes it easy to train large-scale models.

Eos' architecture is optimized for AI workloads that demand ultra-low latency and high-throughput interconnectivity across a large compute cluster. Its Quantum-2 InfiniBand network supports data transfer speeds of up to 400 Gbps.

Supercomputers are popular

The 62nd edition of the TOP500, published last November, revealed that the Frontier system retained first place and remains, to date, the only exascale machine on the list, with a score of 1.194 EFlop/s. As a reminder, Frontier uses AMD EPYC 64C 2 GHz processors and is built on the HPE Cray EX235a architecture. The system has a total of 8,699,904 combined CPU and GPU cores.

In Europe, new supercomputers are also flourishing. Jupiter, installed in Germany and expected to become operational in 2024, is set to be the first European system capable of exascale performance. It will be made available to researchers and companies to develop algorithms in fields such as the environment and pharmaceuticals.

In Spain, MareNostrum 5 is the talk of the town. This European flagship of computing power is also intended to accelerate medical and climate research, among other things, combining computing power among the ten largest in the world (314 petaflops) with energy-efficient operation. By comparison, the figures published by Nvidia for Eos are certainly impressive but should be kept in perspective: 121.4 PFlop/s in FP64 for HPC and 18.4 EFlop/s in FP8 for AI.
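To put the article's numbers on a common scale, the sketch below simply converts the published figures (Frontier's 1.194 EFlop/s, MareNostrum 5's 314 PFlop/s, Eos' 121.4 FP64 PFlop/s) into petaflops for a rough side-by-side view. Note that HPL scores and vendor peak figures are not strictly comparable, so this is only an order-of-magnitude comparison of the quoted values.

```python
# Rough side-by-side of the figures quoted in the article, normalised to petaflops.
# HPL results and vendor peak numbers are not strictly comparable; this is only
# an order-of-magnitude view of the published figures.
systems_pflops = {
    "Frontier (TOP500 score)": 1.194 * 1000,   # 1.194 EFlop/s
    "MareNostrum 5 (combined)": 314,           # 314 PFlop/s
    "Eos (FP64, per Nvidia)": 121.4,           # 121.4 PFlop/s
}

for name, pflops in sorted(systems_pflops.items(), key=lambda kv: -kv[1]):
    print(f"{name:26s} {pflops:8.1f} PFlop/s")
```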
