
And what’s a colly bird and isn’t it “calling birds”? “Colly” derives from the Old English word for coal. So the song’s lyrics here call for four blackbirds. All the evidence I’ve seen argues that the use of “colly birds” in the lyric pre-dates any use of “calling birds” by at least a century.

And now that this important issue is out of the way, let’s move on to the crux of the Nvidia (NVDA) question: How long can the company sustain something like its current lead in massively parallel chip architectures for cloud computing, PC games, artificial intelligence, blockchain implementation, deep learning, and autonomous vehicles?

No doubt that Nvidia is ahead now. No doubt that these are some of the fastest-growing markets in technology (and in the whole economy). And no doubt that everyone from Intel (INTC) to Advanced Micro Devices (AMD) to Alphabet (GOOG) is chasing Nvidia. So how long before they catch up?

In my estimation, not in 2018 or 2019 or 2020. Which is why I’m adding this fourth day of Christmas pick to my 12-18 month Jubak Picks portfolio.

After that? My crystal ball refuses to answer. Although it does note that everybody is chasing Nvidia as fast as they can. And that Nvidia is fully aware of the pursuit and is showing no signs of taking its foot off the accelerator.

Let’s take a little, very simplified excursion into the world of Nvidia, parallel processors, and GPUs. The first thing to remember about Nvidia is that it comes out of a very different part of computing than Intel (INTC). Intel’s chips make up the central processing unit (CPU) in most of the world’s desktop PCs. The chip’s job is to tackle problems in sequence–the chip does this and then moves on very, very quickly to do that. The most powerful Intel-powered computers use multiple cores–five, say–but each core still tackles its work in sequence. Nvidia’s chips come out of the gaming world–its GPUs (graphics processing units) are designed to keep all of the parts of a computer game updated simultaneously. If Gandalf grapples with the Balrog and sends both of them tumbling into the pit, the orcs and Frodo all need to react in real time parallel to that piece of the action. Nvidia’s chips handle this challenge–keeping lots of balls in the air simultaneously–by using thousands of cores at once. Each core is simpler and less powerful alone than one of Intel’s CPU cores, but they act in parallel–you can think of them swarming the problem. Rather than waiting for a core to finish its sequential processing before it moves on to the next part of the problem, Nvidia’s processors break the problem into pieces and attack them simultaneously–in parallel.
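To make that contrast concrete, here’s a minimal, purely illustrative sketch in CUDA (the programming platform Nvidia provides for its GPUs). It compares a plain CPU-style loop that updates a million game-object positions one at a time with a GPU kernel that hands each position to its own lightweight thread. The function and variable names are my own inventions for this example, not anything from Nvidia’s actual game or driver code.

```cuda
// Purely illustrative sketch: sequential CPU update vs. parallel GPU update
// of one million "game object" positions. All names are invented for this example.
#include <cstdio>
#include <cuda_runtime.h>

const int N = 1 << 20;  // roughly one million positions

// CPU style: one core walks the array one element at a time, in sequence.
void update_positions_cpu(float *pos, const float *vel, float dt) {
    for (int i = 0; i < N; ++i)
        pos[i] += vel[i] * dt;
}

// GPU style: thousands of lightweight threads each update one element,
// all at (roughly) the same time.
__global__ void update_positions_gpu(float *pos, const float *vel, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N)
        pos[i] += vel[i] * dt;
}

int main() {
    float *pos, *vel;
    // Unified memory keeps the example short; a real engine manages its own transfers.
    cudaMallocManaged(&pos, N * sizeof(float));
    cudaMallocManaged(&vel, N * sizeof(float));
    for (int i = 0; i < N; ++i) { pos[i] = 0.0f; vel[i] = 1.0f; }

    // One "frame" the sequential way.
    update_positions_cpu(pos, vel, 0.016f);

    // One "frame" the parallel way: enough 256-thread blocks to cover all N elements.
    update_positions_gpu<<<(N + 255) / 256, 256>>>(pos, vel, 0.016f);
    cudaDeviceSynchronize();

    printf("pos[0] after two frames: %f\n", pos[0]);  // expect 0.032
    cudaFree(pos);
    cudaFree(vel);
    return 0;
}
```

The arithmetic here is trivial; the point is the shape of the work. The CPU version can’t touch the millionth element until it has finished the 999,999 before it, while the GPU version swarms the whole array with thousands of cores at once.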

It’s not possible to say that one of these architectures is superior to the other–it all depends on the task at hand. Nvidia’s parallel architecture is so in demand right now because, after years of being focused on accelerating the graphics in computer games, parallel processors find themselves with a bushel of new tasks to address in fast-growing new markets. In the drive to develop autonomous vehicles, for example, there’s a need to take in vast amounts of data from a wide variety of sources and process it all simultaneously to build up a “picture” of the car’s real world environment and then to parcel out, in parallel, decisions to the devices that govern such functions as steering and braking. The analogy to what a GPU does in a computer game is pretty clear.

It’s less clear, perhaps, but just as central to artificial intelligence. Think of the analogy to the human brain, which has to take in relatively simple data from eyes, ears, mouth, skin and other sensors, combine it all, and then spit out a complex and integrated “picture” of the external world and parallel instructions on how various body agents should act in real time. Artificial intelligence programs used to integrate parallel data from multiple sources in real time to build up a picture of a face or a weather system or a robotic manufacturing line have the same needs for processing in parallel.

So do other new technologies such as the blockchain programs that run Bitcoin but that are increasingly used to create secure financial “ledgers.” Or hyperscale cloud computing built around self-learning neural networks, where the more data streams that a processor can throw at the network (in an organized fashion) the faster the neural network can learn how to visualize a face or recognize speech. (If you’re interested in the early days of neural network computing, dig up a copy of my 1992 book In the Image of the Brain: Breaking the Barrier Between the Human Mind and Intelligent Machines. The book is out of print but you can easily find used copies, very cheap used copies, on Amazon.)

Nvidia has competitors in the GPU and parallel processing market that date back to the days when the market was focused on computer games. The chief competitor is Advanced Micro Devices (AMD), which continues to chase the performance of Nvidia’s GPUs for the acceleration of computer games. But Advanced Micro faces the same handicaps in competing with Nvidia as it has historically in its battle with Intel in the CPU market. Nvidia has captured about 70% of the gaming GPU market, which gives it significant advantages of scale that let it spend far more on research and development than Advanced Micro can and that let the company constantly push the manufacturing curve and move to smaller, faster and more efficient chips. In my opinion, and I know this will elicit howls from Advanced Micro fans, while Nvidia remains locked in battle with Advanced Micro at lower price points in the gaming GPU market, the latter company simply doesn’t have the resources to take on Nvidia at the bleeding edge of the technology. (Which does not mean, I’d note, that new Advanced Micro products can’t dent Nvidia’s market share in the gaming GPU market. Advanced Micro’s EPYC processor looks to be gaining traction with companies such as Dell, and it has struck partnerships in the cloud market with Amazon and Microsoft (MSFT). But the recent release by Nvidia of the Titan V chip, with its transition from the company’s Pascal to Volta architecture, strengthened Nvidia’s position vis-à-vis Advanced Micro at the top end of the GPU market.)

The serious competition for Nvidia in the new markets for parallel processing will come from Alphabet and Intel, companies with a lot more cash than Advanced Micro Devices.

For these folks the problem won’t be funding the effort to build massively parallel GPUs, but actually building the architecture for this kind of chip.

The technology barriers aren’t trivial. In the past, for example, Intel attempted to build its own GPUs to integrate with its PC chipsets. But it ultimately needed to license technology from Nvidia.

Which doesn’t mean Intel has given up. In fact, there are recent signs that Intel is ready to spend big to move into these new markets, perhaps goaded by a determination not to repeat, in these new markets, its experience of getting locked out of the market for mobile chips. In November Intel announced new partnerships with Advanced Micro that will result in Intel shipping a chip that integrates an Intel Core processor with Advanced Micro’s Radeon graphics processing chip, with a special focus on the mobile device market.

Even more interesting, in November Intel also announced that it was hiring Raja Koduri away from Advanced Micro. Koduri had served as chief architect of Advanced Micro’s Radeon Technologies graphics division. At Intel, Koduri will head a new Core and Visual Computing Group that will be in charge of competing with Advanced Micro and Nvidia in the graphics market and in the new technologies that employ parallel processors. Koduri’s hire is an impressive one. At Apple from 2009 to 2013 he worked on the transition to Retina displays on Macintosh computers. Before his time at Advanced Micro he worked at ATI, the graphics company that AMD bought a decade ago.

Intel’s financial staying power makes it worth taking seriously any effort it makes in GPUs and parallel processing. But I’d also emphasize the lead that Nvidia has already developed in new technologies. Looking at the autonomous vehicle space, for example, more than 225 partners are using Nvidia’s Drive PX platform as a deep learning tool in their own development efforts. By the time Intel has a competitive product, the company will face a tough battle to separate Nvidia from partners that have devoted so much time and money to building solutions based on Nvidia’s technology. (In March Intel spent $15.3 billion to acquire Mobileye, an Israeli company that develops sensors and cameras for Advanced Driver Assistance Systems. I think that assures that Intel won’t be completely locked out of the autonomous vehicle market while it develops its own parallel processors.)

It’s not clear to me yet whether Intel’s parallel processing efforts will go down the Nvidia GPU route or take off into a technology called tensor processing. TPUs (tensor processing units) are a new kind of chip designed to accelerate tensor operations, the big workload of the deep learning algorithms at the heart of the new generation of artificial intelligence applications. That does seem to be the direction being taken by Alphabet and startups such as Wave Computing. In theory TPUs should be faster, as much as ten times faster, than GPUs at deep learning since they dedicate more of their cores–all of their cores in some cases–to tensor processing. But so far, of the more than a dozen companies building deep learning chips, only Alphabet and Wave Computing have working silicon and are conducting customer trials, according to Ark Invest. And so far those TPUs don’t show significant advantages over Nvidia’s GPUs in areas such as power consumption or processing speed. Moreover, Nvidia’s GPUs aren’t standing still. Over four generations, Nvidia has improved the architectural efficiency of its GPUs for deep learning by roughly 10 times, according to Ark Invest.
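For readers who want to see what a “tensor operation” actually looks like, here’s another purely illustrative CUDA sketch: the matrix multiply that dominates deep learning workloads. Every output element is one row of one matrix dotted with one column of another, and every one of those dot products can be computed in parallel. This naive one-thread-per-output version only shows the structure of the problem; it is not how Nvidia’s Tensor Cores or Alphabet’s TPUs, which chew through whole tiles of these matrices in single hardware steps, actually implement it.

```cuda
// Purely illustrative sketch: the matrix multiply at the heart of deep learning
// "tensor operations." One GPU thread computes one output element.
#include <cstdio>
#include <cuda_runtime.h>

const int N = 512;  // N x N matrices, square for simplicity

__global__ void matmul(const float *A, const float *B, float *C) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)            // row of A dotted with column of B
            sum += A[row * N + k] * B[k * N + col];
        C[row * N + col] = sum;
    }
}

int main() {
    float *A, *B, *C;
    cudaMallocManaged(&A, N * N * sizeof(float));
    cudaMallocManaged(&B, N * N * sizeof(float));
    cudaMallocManaged(&C, N * N * sizeof(float));
    for (int i = 0; i < N * N; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

    // A 32 x 32 grid of 16 x 16 thread blocks covers all 512 x 512 output elements.
    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    matmul<<<grid, block>>>(A, B, C);
    cudaDeviceSynchronize();

    printf("C[0][0] = %f (expect %f)\n", C[0], 2.0f * N);  // 1 * 2 summed N times
    cudaFree(A);
    cudaFree(B);
    cudaFree(C);
    return 0;
}
```

Dedicating silicon to exactly this pattern is the whole argument for TPUs: the more of the chip that does nothing but multiply-and-accumulate, the faster it can, in theory, churn through the training and inference work that deep learning demands.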

My conclusion: Yes, there are a lot of companies chasing Nvidia, but over the next two to three years I don’t see anybody catching the company–or at least catching it enough to outweigh the incredible growth in the new markets that Nvidia is addressing. The stock seems to have completed a pullback from its November 24 high of $216 to close at $197.40 on December 28. The 50-day moving average is just above at $201.

All of which is why Nvidia is worth four colly birds, three French hens, two turtle doves and a partridge in a pear tree. I’m adding Nvidia to my 12-18 month Jubak Picks Portfolio tomorrow December 29 with a target price of $230 by June 2018.

My first 12 days of Christmas Pick for 2018 was Amazon (AMZN). My second was Nektar Therapeutics (NKTR). The third was Southern Copper (SCCO). (And just for the record, the actual 12 days of Christmas started on Christmas Day.)