If you own Nvidia (NVDA) shares, or are thinking of buying them after the stock’s 10-for-1 split, I know you’re trying to figure out two things: 1) how much more upside is there in Nvidia’s shares, and 2) how much more valuable can a stock with a $3 trillion market cap get?
You’re especially interested in the answers to those questions because, like everyone else in this market, you’re worried about comparisons to the dot-com boom and bust. After all, Nvidia shares are up another 151% in 2024 as of July 1.
I can’t give you a definitive call on the top for this stock or this rally, but I am sure of one thing: I don’t want to sell Nvidia before the company’s new Blackwell chip architecture has hit full launch speed in late 2024 and 2025. My read is that this is the “perfect” AI chip for this moment in the AI boom.
Why is Blackwell such a big deal? And why is it so perfect for this moment in the AI boom?
I think we’re starting to see increased attention among AI chip and system customers to cost of operation and energy efficiency. That attention is being forced on the AI industry by two realizations. First, AI systems, especially those that rely on very large language models, are expensive to run. If AI companies are ever to turn a profit, they’ve got to start generating more revenue at lower cost. Second, AI models are electricity hogs. The grid is already having trouble keeping up with the growth in electricity demand from AI companies, and in some geographies there simply isn’t enough electricity to meet projected demand even in the relatively near term.
Enter the Blackwell architecture.
Nvidia says Blackwell is up to 25 times more energy efficient than Hopper, the company’s current AI chip architecture, at the system level. (At the chip level the gains are more modest: the B100 (SXM) model offers about 1.7x the efficiency of Hopper and 3.2x that of Ampere when normalized to FP16 performance.) In Nvidia’s scenario of training a 1.8-trillion-parameter model over 90 days, a Blackwell-based system cut power consumption from 15 megawatts to 4 megawatts versus the previous-generation system.
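To get a feel for what that 15-megawatt-to-4-megawatt drop means in practice, here’s a back-of-envelope sketch using only the training-scenario figures above. The $80-per-megawatt-hour electricity rate is my own assumption for illustration, not a figure from Nvidia:

```python
# Back-of-envelope energy math for the 90-day, 1.8-trillion-parameter
# training scenario cited above. Power figures are Nvidia's; the
# electricity rate is an assumption for illustration only.
HOURS_PER_DAY = 24
DAYS = 90

hopper_mw = 15    # previous-generation system power draw
blackwell_mw = 4  # Blackwell-based system power draw

hopper_mwh = hopper_mw * HOURS_PER_DAY * DAYS        # 32,400 MWh
blackwell_mwh = blackwell_mw * HOURS_PER_DAY * DAYS  # 8,640 MWh
saved_mwh = hopper_mwh - blackwell_mwh               # 23,760 MWh

# Assumed industrial electricity rate (hypothetical): $80 per MWh.
rate_per_mwh = 80
savings_usd = saved_mwh * rate_per_mwh

print(f"Energy saved over the run: {saved_mwh:,} MWh")
print(f"Approx. electricity savings: ${savings_usd:,}")
```

Even at a modest assumed power price, a single large training run saves on the order of a couple million dollars in electricity alone, before counting cooling and grid-capacity constraints.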
It’s also more powerful and faster. Blackwell offers 4 times the AI training performance of Hopper and delivers up to 30 times better performance on large language model inference workloads. Blackwell GPUs pack 208 billion transistors, a substantial increase from previous generations. The chips are manufactured on a custom-built 4-nanometer Taiwan Semiconductor Manufacturing (TSM) process. The architecture uses a dual-die design, with two very large dies tied together to act as one GPU.
And, maybe, Blackwell represents a significant reduction in the cost of ownership. Early data suggests that Blackwell’s increased processing power means fewer Blackwell-based servers are needed to handle the same workload, potentially reducing overall power consumption and cooling requirements. Blackwell GPUs do carry a higher upfront cost (estimated at around $30,000 to $40,000 per chip), but their significantly higher performance and energy efficiency could (and I stress “could” at this point) lead to a lower total cost of ownership over time. Some early analysis gives Blackwell a big edge over competitors like Advanced Micro Devices’ (AMD) MI300X in both performance and total cost of ownership.
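Here’s a simplified illustration of why the higher upfront chip price could still win on total cost of ownership. The Blackwell price and the 4x training-performance claim come from the figures above; the Hopper chip price, the cluster sizes, and the electricity rate are my own hypothetical inputs, and the model ignores real-world scaling losses:

```python
# Illustrative total-cost-of-ownership comparison for one training run.
# Only the Blackwell price range and the 4x training-performance claim
# come from the article; every other input is a hypothetical assumption.
def total_cost(chips, price_per_chip, power_mw, days, rate_per_mwh=80):
    """Upfront hardware cost plus electricity for one training run."""
    energy_mwh = power_mw * 24 * days
    return chips * price_per_chip + energy_mwh * rate_per_mwh

# If Blackwell offers ~4x Hopper's training performance, roughly a
# quarter as many chips could handle the same workload (a deliberate
# simplification). Chip counts and the $25,000 Hopper price are
# assumptions for illustration.
hopper = total_cost(chips=8_000, price_per_chip=25_000, power_mw=15, days=90)
blackwell = total_cost(chips=2_000, price_per_chip=35_000, power_mw=4, days=90)

print(f"Hopper-era run:  ${hopper:,.0f}")
print(f"Blackwell run:   ${blackwell:,.0f}")
print(f"Blackwell cheaper overall: {hopper > blackwell}")
```

Under these (admittedly rough) assumptions, the hardware bill dominates electricity, and needing far fewer chips is what drives the total cost down, not the power savings alone.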
I would like to point out, though, that while the Blackwell architecture does promise big gains in relative energy efficiency, Blackwell GPUs still consume more power in absolute terms than AMD’s and other current offerings.
In other words, this new architecture is likely to mean continued revenue momentum for Nvidia, but Blackwell doesn’t solve an AI energy crunch that looks to be headed toward a crisis.
I own Nvidia in my 50 Stocks Portfolio. The position is up 163% since I added it on December 7, 2023.