IT PARK
    Nvidia Announces GH200 Superchip, Most Powerful AI Chip, to Accelerate Generative AI Workloads

Nvidia announced on Monday that the GH200 Grace Hopper Superchip, its most powerful artificial intelligence chip to date, is now in full production.
    Updated: Jul 21, 2025

Nvidia's most powerful AI chip to date, the GH200 Grace Hopper Superchip, is now in full production, the company announced earlier this week. The GH200 Superchip is designed to power systems that run the most complex AI workloads, including training the next generation of generative AI models.

The new chip has a total interconnect bandwidth of 900 gigabytes per second, seven times more than the standard PCIe Gen5 lanes used in today's most advanced accelerated computing systems. Nvidia says the interconnect also consumes one-fifth the power, enabling the Superchip to handle demanding AI and high-performance computing applications more efficiently.
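The "seven times" comparison can be sanity-checked with quick arithmetic. This sketch assumes the 900 figure is gigabytes per second, as in Nvidia's published specifications, and that a full PCIe Gen5 x16 link moves roughly 128 GB/s counting both directions:

```python
# Back-of-the-envelope check of the "seven times" bandwidth claim.
# Assumed figures: NVLink chip-to-chip interconnect at 900 GB/s total;
# PCIe Gen5 x16 at ~128 GB/s bidirectional (32 GT/s per lane, 16 lanes).
nvlink_gbps = 900
pcie_gen5_x16_gbps = 128

ratio = nvlink_gbps / pcie_gen5_x16_gbps
print(f"GH200 interconnect vs PCIe Gen5 x16: ~{ratio:.1f}x")  # ~7.0x
```

The result lands at roughly 7x, consistent with Nvidia's comparison against a full x16 Gen5 link.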

In particular, the Nvidia GH200 Superchip is expected to be used for generative AI workloads such as OpenAI's ChatGPT; generative AI's near-human ability to produce new content from prompts is now sweeping the tech industry.

"Generative AI is rapidly transforming the enterprise, unlocking new opportunities and accelerating discovery in healthcare, finance, business services and many more industries," said Ian Buck, vice president of accelerated computing at Nvidia. "With Grace Hopper Superchips in full production, global manufacturers will soon be able to provide enterprises with the acceleration infrastructure they need to build and deploy generative AI applications that employ their unique proprietary data."

One of the first systems to integrate GH200 Superchips will be Nvidia's own next-generation, large-memory AI supercomputer, the Nvidia DGX GH200. According to Nvidia, the new system uses the NVLink Switch System to combine 256 GH200 Superchips so that they run as a single GPU, delivering up to 1 exaflop of performance (1 quintillion floating-point operations per second) and 144 TB of shared memory.

That gives it nearly 500 times more memory, and far more compute, than Nvidia's previous-generation DGX A100 supercomputer, which launched in 2020 and combined eight GPUs into a single system.
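The "nearly 500 times" memory figure checks out with simple arithmetic, assuming the original DGX A100 configuration with 320 GB of aggregate GPU memory (eight 40 GB A100s):

```python
# Rough check of the DGX GH200 vs DGX A100 memory comparison.
# Assumption: the baseline is the launch-era DGX A100 with 8 x 40 GB GPUs.
dgx_gh200_memory_tb = 144
dgx_a100_memory_gb = 320          # 8 GPUs x 40 GB each
num_superchips = 256

factor = dgx_gh200_memory_tb * 1024 / dgx_a100_memory_gb
per_chip_gb = dgx_gh200_memory_tb * 1024 / num_superchips
print(f"~{factor:.0f}x more shared memory")        # ~461x, i.e. "nearly 500x"
print(f"~{per_chip_gb:.0f} GB per GH200 Superchip")
```

The same figures imply roughly 576 GB of memory contributed per Superchip across the 256-chip system.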

The DGX GH200 AI supercomputer will also ship with a full software stack for running AI and data analytics workloads, Nvidia said. For example, the system supports Nvidia Base Command software, which provides AI workflow management, cluster management, accelerated compute and storage libraries, and network infrastructure and system software. The system also supports Nvidia AI Enterprise, a software layer containing more than 100 AI frameworks, pre-trained models and development tools that streamline production of generative AI, computer vision, speech AI and other types of models.

Constellation Research analyst Holger Mueller said Nvidia has effectively merged two proven products into one by converging the Grace and Hopper architectures with NVLink. "Good things happen when you combine two good things in the right way, and that's the case with Nvidia," he said. "The Grace and Hopper chip architectures combined with NVLink bring not only higher performance and capacity, but also a simpler infrastructure for building AI-driven next-generation applications, because users can see and benefit from all of these GPUs as one logical GPU."

The first customers to adopt the new DGX GH200 AI supercomputer include Google Cloud, Meta Platforms and Microsoft. In addition, Nvidia will make the DGX GH200 design available as a blueprint for cloud service providers that want to customize it for their own infrastructure.

    Girish Bablani, corporate vice president of Azure Infrastructure at Microsoft, said, "Traditionally, training large AI models has been a resource- and time-intensive task, and the potential of the DGX GH200 to handle terabytes of data sets will enable developers to conduct advanced research at a much larger scale and at a much faster pace."

Nvidia also said it will build its own DGX GH200-based AI supercomputer, "Nvidia Helios," for its internal R&D teams; it will combine four DGX GH200 systems interconnected with Nvidia Quantum-2 InfiniBand networking technology. By the time it goes live at the end of this year, the Helios system will contain a total of 1,024 GH200 Superchips.

    Finally, Nvidia's server partners are planning to build their own systems based on the new GH200 Superchip, and among the first systems to launch is Quanta Computer's S74G-2U, which will be available later this year.

Nvidia said server partners have adopted the new Nvidia MGX server specification, which was also announced on Monday. MGX is a modular reference architecture that lets partners quickly and easily build more than 100 server variants on Nvidia's latest silicon for a wide range of AI, high-performance computing and other workloads. By using MGX, server manufacturers can expect to cut development costs by as much as three-quarters and development time by two-thirds, to about six months.
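Those two reductions imply a pre-MGX baseline. Assuming "about six months" is the post-reduction figure, a two-thirds cut in development time works out to roughly 18 months before MGX:

```python
# Implied baseline development time from the stated MGX reductions.
# Assumption: 6 months is the figure *after* a uniform two-thirds cut.
reduced_months = 6
time_cut = 2 / 3

baseline_months = reduced_months / (1 - time_cut)
print(f"Implied pre-MGX development time: ~{baseline_months:.0f} months")  # ~18
```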


Copyright © 2025 itheroe.com. All rights reserved.
