Jensen Huang, the cofounder and CEO of Nvidia, the semiconductor company that has taken the global stock market by storm, was present at the birth of the current AI movement.
In fact, one might say he helped sire it when, in August 2016, he donated a groundbreaking new supercomputer designed specifically for AI to a newly founded nonprofit called OpenAI. At the time, OpenAI’s research efforts were led by cofounder Elon Musk, who later departed over differences with current CEO Sam Altman.
“I delivered to [Musk] the first AI supercomputer the world ever made,” Huang said during an interview at the New York Times DealBook Summit in November. Building the 70-pound, 35,000-part computer took years, according to Huang.
“It took us five years to make it. It’s called a DGX and it’s everywhere in the world today.”
Musk and Huang have a friendly relationship. During an interview at the same New York Times conference, Musk referred to Huang as “awesome.”
To commemorate that fateful day in Nvidia and OpenAI’s history, Huang signed the supercomputer. “To Elon and the OpenAI Team!” wrote Huang in marker. “To the future of computing and humanity. I present you the world’s first DGX-1!”
Musk had recently cofounded OpenAI because he was concerned that tech giants like Alphabet and Meta would dominate artificial intelligence by hoarding talent and computing power, a mission Huang seemed to embrace. “I thought it was incredibly appropriate that the world’s first supercomputer dedicated to artificial intelligence would go to the laboratory that was dedicated to open artificial intelligence,” Huang said at the time.
OpenAI has since walked back some of its open-source commitments.
Nvidia’s DGX ended up accelerating OpenAI’s research experiments by weeks, according to OpenAI cofounder Ilya Sutskever, who had introduced Huang to early deep learning work several years earlier. A computer as powerful as the DGX also meant OpenAI could run experiments that had previously been out of reach simply because they demanded too much computing power. Much of that work would eventually forge the foundations of the generative AI tools OpenAI pioneered.

In a 2016 video promoting the collaboration between OpenAI and Nvidia, OpenAI researcher Andrej Karpathy described how the lab planned to use the DGX specifically for large language models, the very technology underpinning ChatGPT, which later catapulted AI and OpenAI into the mainstream. In a prescient prediction, Karpathy mused that “eventually we’ll be able to talk to computers just like we talk to people.”
Announced in April 2016, the DGX was indeed one of the world’s first AI supercomputers, originally billed by Nvidia as packing the power of 250 servers into a single box. The proliferation of AI has made Nvidia’s supercomputers, chips, and software a hot commodity across the global tech sector. On its 2023 year-end earnings call Wednesday, Nvidia demonstrated just how in demand its products are: the company posted $22 billion in fourth-quarter revenue, obliterating consensus expectations by about $1.7 billion.
But even in 2016, Huang was already positioning Nvidia as the go-to supplier for an AI boom he saw as imminent. “The DGX-1 is easy to deploy and was created for one purpose: to unlock the powers of superhuman capabilities and apply them to problems that were once unsolvable,” Huang said in a 2016 press release a few months before OpenAI received its Nvidia supercomputer.
Huang had originally decided to build the supercomputer for use by Nvidia’s own engineers. But when Musk heard about it at a conference, he told Huang, “I want one of those,” according to Huang. At the time, OpenAI was still in its infancy, as was much of the artificial intelligence technology it had set out to research and ultimately turn into products. Nvidia’s supercomputer would supply the computing power needed to train and test AI systems. By 2016, researchers had made breakthroughs in deep learning and neural networks, techniques that allow AI models to learn from data and improve the more of it they consume.
Huang saw an early example of these systems in 2012, and he decided to start building a supercomputer designed specifically for AI. He realized tech was entering a new era of computing after Sutskever, the OpenAI cofounder, showed him a groundbreaking new way to program software: a neural network called AlexNet, which Sutskever had built with Alex Krizhevsky and Geoffrey Hinton, known as the “Godfather of AI.” Rather than hand-coding rules and then testing them, AlexNet demonstrated that working software could be produced by training a model on examples of the desired output.
“It was backwards compared to most programs up to then,” Huang said, explaining what spurred him to build the AI supercomputer.
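That inversion, showing a model examples of the desired output and letting it infer the rule rather than writing the rule by hand, is the core of the deep learning approach the DGX was built to accelerate. The toy sketch below, written in plain Python with NumPy, is illustrative only (it is not code from Nvidia or OpenAI, and it is vastly simpler than AlexNet): the target rule appears only in the training data, never in the program itself.

```python
# A minimal sketch of "programming by example": instead of hand-coding the
# rule y = 3x + 2, we show a tiny model input/output pairs and let gradient
# descent discover the rule from the data.
import numpy as np

rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=100)   # example inputs
ys = 3.0 * xs + 2.0                 # desired outputs (the rule the model is never told)

w, b = 0.0, 0.0                     # parameters learned from data, not programmed
lr = 0.1                            # learning rate
for _ in range(500):
    pred = w * xs + b                        # model's current guess
    grad_w = 2 * np.mean((pred - ys) * xs)   # gradient of mean squared error w.r.t. w
    grad_b = 2 * np.mean(pred - ys)          # gradient w.r.t. b
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned rule: y = {w:.2f}x + {b:.2f}")
```

Run as written, the script recovers coefficients close to 3 and 2 even though that rule was never coded into the model, which is the reversal Huang found so striking.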
After his initial excitement, Huang tried to assess the bigger picture of how this new development could affect the entire tech industry. We “asked ourselves, ‘What are the implications of this for the future of computers?’” Huang recounted. “And we drew the right conclusions that this was going to change the way computing was going to be done, software was going to be written, and the type of applications we could write.”
Considering Nvidia’s stock price has risen from $15 a share in August 2016, when Huang gifted Musk the first AI supercomputer, to $779 a share, it appears he made the right call.