  1. GB200 NVL72 - NVIDIA

    GB200 NVL72 connects 36 Grace CPUs and 72 Blackwell GPUs in a rack-scale, liquid-cooled design. It boasts a 72-GPU NVLink domain that acts as a single, massive GPU and delivers 30X faster real-time trillion-parameter large language model (LLM) inference.

  2. GB200 Hardware Architecture – Component Supply Chain & BOM

    Jul 17, 2024 · Today we are going to go from A to Z on the different form factors of GB200 and how they changed versus the prior 8-GPU HGX baseboard servers. We will break down unit volumes, supplier market share, and cost for over 50 different subcomponents of the GB200 rack.

  3. With direct-to-chip (DLC) liquid-cooled AI systems, Supermicro’s leading liquid-cooling technology powers the NVIDIA GB200 NVL72, delivering exascale computing in a single rack and up to 25 times more energy efficiency than the previous generation.

  4. A closer look at Nvidia's 120kW DGX GB200 NVL72 rack system

    Mar 21, 2024 · GTC Nvidia revealed its most powerful DGX server to date on Monday. The 120 kW rack-scale system uses NVLink to stitch together 72 of its new Blackwell accelerators into what's essentially one big GPU capable of more than 1.4 exaFLOPS performance — at FP4 precision anyway.
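
    As a rough check on the "more than 1.4 exaFLOPS" figure above, the sketch below multiplies an assumed per-GPU FP4 rate of about 20 petaFLOPS (a number not stated in this snippet) across the 72-GPU NVLink domain; treat it as back-of-the-envelope arithmetic rather than an official specification.

    ```python
    # Back-of-the-envelope check of the "~1.4 exaFLOPS at FP4" rack-level figure.
    # Assumption (not from the article): ~20 PFLOPS of FP4 throughput per Blackwell GPU.
    PFLOPS_FP4_PER_GPU = 20           # petaFLOPS per GPU, assumed
    GPUS_PER_RACK = 72                # GB200 NVL72 NVLink domain size

    rack_pflops = PFLOPS_FP4_PER_GPU * GPUS_PER_RACK
    rack_eflops = rack_pflops / 1000  # 1 exaFLOPS = 1000 petaFLOPS

    print(f"{rack_pflops} PFLOPS ~= {rack_eflops:.2f} exaFLOPS FP4 per NVL72 rack")
    # -> 1440 PFLOPS ~= 1.44 exaFLOPS, consistent with "more than 1.4 exaFLOPS"
    ```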

  5. NVIDIA GB200 GPU Specs - TechPowerUp

    NVIDIA's GB200 GPU uses the Blackwell architecture and is made using a 5 nm production process at TSMC. GB200 does not support DirectX. For GPU compute applications, OpenCL version 3.0 and CUDA 12.0 can be used. It features 18,432 shading units, 576 texture mapping units, and 24 ROPs.

  6. NVIDIA B200 and GB200 AI GPUs Technical Overview: Unveiled at …

    Mar 19, 2024 · At the 2024 GTC conference, NVIDIA introduced its new AI GPU models, the B200 and GB200, under the "Blackwell" series, succeeding the "Hopper" H100 series. These GPUs are designed to achieve up...

  7. Nvidia GB200s now available 'at scale' on CoreWeave - DCD

    Apr 16, 2025 · CoreWeave began offering Nvidia GB200 NVL72 instances in February of this year, initially from the company's US-West-01 region. The GB200 NVL72-based instances on CoreWeave connect 36 Nvidia Grace CPUs and 72 Nvidia Blackwell GPUs in a liquid-cooled, rack-scale design and are available as bare-metal instances through CoreWeave Kubernetes ...

  8. Introducing the NVIDIA GB200 GPU - cudocompute.com

    Dec 19, 2024 · In tests with a trillion-parameter LLM, the GB200 delivered 30x faster real-time throughput than the H100, NVIDIA's previous-generation flagship GPU, meaning it can handle more requests and generate responses much more quickly, leading to a smoother and more responsive user experience.

  9. Huawei AI CloudMatrix 384 – China’s Answer to Nvidia GB200

    Apr 16, 2025 · Huawei is making waves with its new AI accelerator and rack-scale architecture. Meet China’s newest and most powerful domestic solution, the CloudMatrix 384, built using the Ascend 910C. This solution competes directly with the GB200 NVL72 and in some metrics is more advanced than Nvidia’s rack-scale solution. The engineering advantage is at…

  10. NVIDIA GB200 NVL72 Delivers Trillion-Parameter LLM Training …

    Mar 18, 2024 · The GB200 introduces cutting-edge capabilities and a second-generation transformer engine that accelerates LLM inference workloads. It delivers a 30x speedup for resource-intensive applications like the 1.8T parameter GPT-MoE compared to …
