FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

“The Japanese government came out and offered subsidies of a couple billion dollars, I think, for several different Internet companies and telcos to be able to fund their AI infrastructure. India has a sovereign initiative going and they’re building their AI infrastructure. Canada, the U.K., France, Italy… Singapore, Malaysia, you know, a large number of countries are subsidizing their regional data centers so that they can build out their AI infrastructure.” — Jensen Huang

I think the market wanted more on Blackwell. They wanted more specifics. I’m trying to go through all of the call and the transcript. It seems very clear that this was a production issue and not a fundamental design issue with Blackwell. But the deployment in the real world, what does that look like tangibly? And is there some delay in the timeline of that deployment, and thus revenue from that product?

Let’s say the fact that I thought I was so clear, and it wasn’t clear enough, kind of tripped me up there right away. So let’s see. We made a mask change to improve the yield of Blackwell. Wonderful. We’re sampling Blackwell all over the world today. We’ve shown people, given tours to people of all the Blackwell systems that we have up and running. You can find pictures of Blackwell systems all over the web. We have started volume production. Volume production will ship in Q4. In Q4, we will have billions of dollars of Blackwell revenue, and we will ramp from there. The demand for Blackwell far exceeds its supply, of course, in the beginning, because the demand is so great. But we’re going to have lots and lots of supply, and we will be able to ramp starting in Q4. We will have billions of dollars of revenue, and we’ll ramp from there into Q1 and Q2 and through next year. We’re going to have a great next year as well.

Jensen, what is the demand for accelerated computing beyond the hyperscalers and Meta?

Hyperscalers represent about 45% of our total data center business. We’re relatively diversified. Today we have hyperscalers, we have Internet service providers, we have sovereign AIs, we have industries, enterprises. So it’s fairly diversified. Outside of the hyperscalers is the other 55%, and the application use across all of that data center starts with accelerated computing.
Accelerated computing does everything, of course, from, well, the models, the things that we know about, which is generative AI, and that gets most of the attention. But at the core we also do database processing, pre- and post-processing of data before you use it for generative AI, transcoding, scientific simulations, computer graphics of course, image processing of course. And so there are tons of applications that people use our accelerated computing for, and one of them is generative AI. And so, let’s see, what else can I say? I think that’s the coverage.

Jensen, on sovereign AI, you and I have talked about that before, and it was so interesting to hear some specifics behind it: that in this fiscal year there will be low double-digit, I think you said, billions of dollars in sovereign AI sales. But to the layperson, what does that mean? It means deals with specific governments? If so, where?

It’s not necessarily governments. Sometimes it’s deals with a particular regional service provider that’s been funded by the government, and oftentimes that’s the case. In the case of Japan, for example, the Japanese government came out and offered subsidies of a couple billion dollars, I think, for several different Internet companies and telcos to be able to fund their AI infrastructure. India has a sovereign initiative going, and they’re building their AI infrastructure. Canada, the U.K., France, Italy… am I missing somebody? Singapore, Malaysia… you know, a large number of countries are subsidizing their regional data centers so that they can build out their AI infrastructure. They recognize that their country’s knowledge, their country’s data, their digital data, is also their natural resource.
Now, not just the land they’re sitting on, not just the air above them, but they realize now that their digital knowledge is part of their natural and national resources, and they ought to harvest that and process that and transform it into their national digital intelligence. And so this is what we call sovereign AI. You could imagine almost every single country in the world will eventually recognize this and build out their AI infrastructure.

Jensen, you used the word resource, and that makes me think about the energy requirements here. I think on the call you talked about how the next-generation models will have many orders of magnitude greater compute needs. But how will the energy needs increase, and what is the advantage you feel NVIDIA has in that sense?

Well, the most important thing that we do is increase the performance, and the performance efficiency, of our next generation. So Blackwell is many times more performant than Hopper at the same level of power used. And so that’s energy efficiency: more performance for the same amount of power, or the same performance at lower power. That’s number one. And the second is liquid cooling. We support air cooling, we support liquid cooling, but liquid cooling is a lot more energy efficient. And so the combination of all of that, you’re going to get a pretty large step up. But the important thing to also realize is that AI doesn’t really care where it goes to school. And so increasingly we’re going to see AI be trained somewhere else, then have that model come back and be used near the population, or even running on your PC or your phone. And so we’re going to train large models, but the goal is not to run the large models necessarily all the time. You can surely do that for some of the premium services and the very high-value AIs. But it’s very likely that these large models would then help to train and teach smaller models.
And what we’ll end up doing is have one large, you know, a few large models that are able to train a whole bunch of small models, and they run everywhere.

Jensen, you explained clearly that demand to build generative AI products, on models or even at the GPU level, is greater than current supply, in Blackwell’s case in particular. Explain the supply dynamics to me for your products, and whether you see an improvement sequentially quarter on quarter, or at some point by the end of the fiscal year into next year.

Well, the fact that we’re growing would suggest that our supply is improving, and our supply chain is quite large, one of the largest supply chains in the world. And we have incredible partners, and they’re doing a great job supporting us in our growth. As you know, we’re one of the fastest-growing technology companies in history, and none of that would have been possible without very strong demand, but also very strong supply. We’re expecting Q3 to have more supply than Q2, we’re expecting Q4 to have more supply than Q3, and we’re expecting Q1 to have more supply than Q4. And so I think our supply condition going into next year will be a large improvement over this last year. With respect to demand, Blackwell is just such a leap, and there are several things that are happening. You know, just the foundation model makers themselves: the size of the foundation models is growing from hundreds of billions of parameters to trillions of parameters. They’re also learning more languages. Instead of just learning human language, they’re learning the language of images and sounds and videos, and they’re even learning the language of 3D graphics. And whenever they are able to learn these languages, they can understand what they see, but they can also generate what they’re asked to generate. And so they’re learning the language of proteins and chemicals and physics; you know, it could be fluids, and it could be particle physics.
And so they’re learning all kinds of different languages, what we call modalities, but basically learning languages. And so these models are growing in size. They’re learning from more data, and there are more model makers than there were a year ago. The number of model makers has grown substantially because of all these different modalities. And so that’s just one part, the frontier: the foundation model makers themselves have really grown tremendously. And then the generative AI market has really diversified, you know, beyond the Internet service providers to startups, and now enterprises are jumping in, and different countries are jumping in. So the demand is really growing.

Jensen, I’m sorry to cut you off. I’m going to lose you soon. You’ve also diversified. And when I said to our audience you were coming on, I got so many questions. Probably the most common one is: what is Nvidia? We’ve talked about you as a systems vendor, but so many asked about Nvidia GPU Cloud. And I want to ask, finally, do you have plans to become, literally, a cloud compute provider?

No. Our GPU cloud was designed to be the best version of Nvidia’s cloud, built within each cloud. Nvidia’s cloud is built inside GCP, inside Azure, inside AWS, inside OCI. And so we build our clouds within theirs, so that we can implement our best version of our cloud, and work with them to make that infrastructure, that Nvidia infrastructure, as performant, with as great a TCO, as possible. And so that strategy has worked incredibly well. And of course, we are large consumers of it ourselves, because we create a lot of AI ourselves: our chips aren’t possible to design without AI, our software is not possible to write without AI. And so we use a tremendous amount of it ourselves. Self-driving cars, the general robotics work that we’re doing, the Omniverse work that we’re doing. So we’re using DGX Cloud for ourselves.
We also use it for an AI foundry. We make AI models for companies that would like to have them; we have expertise in doing so. And so we’re a foundry for AI, like TSMC is a foundry for our chips. So there are three fundamental reasons why we do it: one, to have the best version of Nvidia inside all the clouds; two, because we’re a large consumer of it ourselves; and third, because we use it as a foundry to help every other company.
