
Nvidia CEO Jensen Huang showed off the first iteration of the Spectrum-X, the Spectrum-4 chip, with a hundred billion transistors on a 90-millimeter by 90-millimeter die. Nvidia
Nvidia CEO Jensen Huang, speaking in the opening keynote at the Computex computer technology conference on Monday in Taipei, Taiwan, unveiled several new products, including a new kind of ethernet switch dedicated to moving large volumes of data for artificial intelligence (AI) tasks.
“How do we introduce a new ethernet, which is backwards compatible with everything, to make every data center a generative AI data center?” Huang posed in his keynote. “For the first time, we are bringing the capabilities of high-performance computing to the ethernet market.”
Spectrum-X, as the family of ethernet products is known, is “the world’s first high-performance ethernet for AI”, according to Nvidia. A key feature of the technology is that it “does not drop packets”, said Gilad Shainer, the senior vice president of networking, in a media briefing.
The first iteration of Spectrum-X is the Spectrum-4 switch, which Nvidia calls “the world’s first 51Tb/sec Ethernet switch designed for AI networks”. The switch works in tandem with Nvidia’s BlueField data-processing unit, or DPU, chips that handle data fetching and queuing, and Nvidia fiber-optic transceivers. The switch can route 128 ports of 400-gigabit ethernet, or 64 800-gigabit ports, from end to end, the company said.
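Those two port configurations add up to the same headline figure; here is a minimal sanity check, using only the port counts and line rates quoted above:

```python
# Sanity check: the two Spectrum-4 port configurations quoted above
# both reach the advertised ~51 Tb/sec of aggregate switching capacity.
ports_400g = 128 * 400   # 128 ports at 400 Gb/sec
ports_800g = 64 * 800    # 64 ports at 800 Gb/sec

assert ports_400g == ports_800g == 51_200  # Gb/sec
print(f"Aggregate bandwidth: {ports_400g / 1000} Tb/sec")  # 51.2 Tb/sec
```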
Huang held up the silver Spectrum-4 ethernet switch chip on stage, calling it “gigantic”: a hundred billion transistors on a 90-millimeter by 90-millimeter die built with Taiwan Semiconductor Manufacturing’s “4N” process technology. The part runs at 500 watts, Huang said.
“For the first time, we are bringing high-performance computing capabilities to the ethernet market,” Huang said. Nvidia
The Spectrum-4 is the first in a line of Spectrum-X chips. Nvidia
Nvidia’s chip is a challenge to the established ethernet-networking market, where most switch silicon is supplied by chip maker Broadcom. Those switches are sold by networking-equipment makers Cisco Systems, Arista Networks, Extreme Networks, Juniper Networks, and others, companies that have been expanding their equipment to better handle AI traffic.
The Spectrum-X family was built to address the bifurcation of data centers into two forms. One form is what Huang calls “AI factories”: facilities costing hundreds of millions of dollars, built around the most powerful GPUs connected by Nvidia’s NVLink and InfiniBand, that are used for AI training and serve a small number of very large workloads.
The other type of data center facility is the AI cloud, which is multi-tenant and ethernet-based, handles hundreds and hundreds of workloads for customers simultaneously, and focuses on tasks such as serving predictions to consumers of AI; that is the market Spectrum-X is meant to serve.
Spectrum-X, says Shainer, will be able to “spread traffic throughout the network in the best way”, using “a new mechanism for congestion control” to avoid the pile-up of packets that can occur in the memory buffers of network routers.
“We use advanced telemetry to understand latencies across the network to identify hotspots before they cause anything, to keep them free of congestion”, Shainer said.
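Shainer did not describe the mechanism in detail. As a rough illustration only, telemetry-driven traffic spreading of the kind he describes can be sketched as follows; the path names, latency numbers, and hotspot threshold are invented for the example, and this is not Nvidia’s implementation:

```python
# Hypothetical sketch of telemetry-driven path selection: steer each
# new flow onto the least-loaded path, and flag paths whose measured
# latency suggests a building hotspot. Not Nvidia's actual algorithm.
LATENCY_HOTSPOT_US = 50.0  # invented threshold, in microseconds

def pick_path(telemetry: dict[str, float]) -> str:
    """Return the path with the lowest measured latency."""
    return min(telemetry, key=telemetry.get)

def hotspots(telemetry: dict[str, float]) -> list[str]:
    """Paths whose latency exceeds the threshold, before queues overflow."""
    return [path for path, lat in telemetry.items() if lat > LATENCY_HOTSPOT_US]

# Example telemetry snapshot (invented numbers): per-path latency in us.
snapshot = {"spine-1": 12.0, "spine-2": 61.5, "spine-3": 9.3}
print(pick_path(snapshot))   # spine-3
print(hotspots(snapshot))    # ['spine-2']
```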
Nvidia said in prepared statements that “the world’s leading hyperscalers are adopting NVIDIA Spectrum-X, including industry-leading cloud innovators.”
Nvidia is building a test-bed computer, it said, at its offices in Israel. Called Israel-1, the “generative AI supercomputer” uses Dell PowerEdge XE9680 servers equipped with H100 GPUs, which will run data across the Spectrum-4 switch.
All Computex news is available in Nvidia’s newsroom.
In addition to announcing the new ethernet technology, Huang’s keynote featured a new model of the company’s “DGX” computer series for AI, the DGX GH200, which the company billed as “a new class of large-memory AI supercomputer for giant generative AI models”.
Generative AI refers to programs that produce output, sometimes text, sometimes images, and sometimes other artifacts; OpenAI’s ChatGPT is a prominent example.
The GH200 is the first system to ship with what the company calls a “superchip”, the Grace Hopper board, which combines on one circuit board a Hopper GPU and the Grace CPU, a processor based on the ARM instruction set that is intended to compete with x86 CPUs from Intel and Advanced Micro Devices.
Nvidia’s Grace Hopper “superchip”. Nvidia
The first iteration of the Grace Hopper, the GH200, is “in full production”, Huang said. Nvidia said in a press release that “global hyperscalers and supercomputing centers in Europe and the US are among the many customers who have access to GH200-powered systems.”
The DGX GH200 combines 256 of the superchips, according to Nvidia, to achieve a combined 1 exaflops (ten to the power of 18, or one billion billion, floating-point operations per second), with 144 terabytes of shared memory. The computer is 500 times faster than the original DGX A100 machine, released in 2020, according to Nvidia.
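Dividing those headline numbers through gives a back-of-the-envelope per-chip picture, assuming the compute and memory figures are spread evenly across the 256 superchips:

```python
# Back-of-the-envelope check on the DGX GH200 figures quoted above,
# assuming the 1 exaflops and 144 TB are split evenly across 256 chips.
total_flops = 1e18        # 1 exaflops
superchips = 256
shared_memory_tb = 144

per_chip_pflops = total_flops / superchips / 1e15
per_chip_mem_gb = shared_memory_tb * 1000 / superchips

print(f"~{per_chip_pflops:.1f} petaflops per superchip")   # ~3.9
print(f"~{per_chip_mem_gb:.0f} GB of shared memory each")  # ~562
```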
The keynote also unveiled MGX, a reference architecture for system builders to quickly and cost-effectively build 100-plus server variations. The first partners to use the spec are ASRock Rack, ASUS, GIGABYTE, Pegatron, QCT, and Supermicro, with QCT and Supermicro being the first to market the systems, in August, Nvidia said.
The entire keynote can be seen as a replay from the Nvidia website.
MGX is a reference architecture for computer system developers. Nvidia