NVIDIA Corporation (NASDAQ:NVDA) Wells Fargo 7th Annual TMT Summit Conference November 29, 2023 11:00 AM ET
Company Participants
Colette Kress – EVP & CFO
Conference Call Participants
Aaron Rakers – Wells Fargo
Aaron Rakers
All right. So why don’t we go ahead and get started? I’m Aaron Rakers. I’m the semiconductor and hardware analyst here at Wells Fargo and extremely excited to host a discussion with Colette Kress, the CFO of NVIDIA.
Before I start, though, Colette, I’m just going to throw out this little fact, right? You’ve been here 10 years, actually, September was your 10-year anniversary. At the point you joined, the company was doing $4 billion of revenue trailing 12 months. You’re now doing 45 roughly, right? The market cap has gone from $9 billion to $1.2 trillion. So, I’m going to start by just saying good work. Congrats. It’s been a phenomenal run. So, keep it going. Don’t take your foot off the pedal.
But before I start with the questions, Colette, I think you get the joy of reading a safe harbor, and then I think you might have some prepared comments as well. I’ll kick it over to you.
Colette Kress
Sounds great. Thank you, Aaron, for having us. I do have an opening statement to read first. As a reminder, this presentation contains forward-looking statements, and investors are advised to read our reports filed with the SEC for information related to risks and uncertainties facing our business.
Okay. So, I enjoy coming out here for this event. But let me start with some of the things that we are seeing here at NVIDIA. What has the last part of this year been about, and why is it such an important time? The important time relates to a real change in how we see Data Center computing going forward. The rise of generative AI has created a new paradigm in front of us, where accelerated computing and AI computing will be the thrust of a lot of the computing going forward.
There’s an enormous installed base right now, about $1 trillion of compute that has been built in roughly the same way for the last several decades. And this is the opportunity that people see: both sustainability as we move forward, with a more efficient way to do computing, and the ability to transact using AI, as AI will be with us in almost everything that we do.
So, it’s fair to say this is just the beginning of a journey. We have a huge opportunity in front of us, and we’re looking forward to more to come. We gave our earnings just before the Thanksgiving holiday and showed strong growth, both sequentially and year-over-year, for the company across all of our market platforms.
But a standout, of course, is our Data Center compute. Data Center compute reached record levels again, for training and inferencing, for our GPU sales and systems, and also for our networking. We are reaching more and more customers every day with the work that we are doing. The strength stemmed from our consumer Internet companies and a lot of our enterprises. And let’s not forget our CSPs, which also all grew this last quarter, and we are continuing to see more of our specialized and regional CSPs grow. So, I just wanted to start with that opening statement.
I’ll turn it back over to you, Aaron.
Question-and-Answer Session
Q – Aaron Rakers
That’s perfect. So, I’ve got, I don’t know, 50 questions here for you, so we’re going to try and get through as many as we can. The inevitable question I always get is, what does fiscal 1Q ’25 look like? And I think I know the answer from you. But maybe I’ll just start by asking you to help us characterize the balance, or the imbalance I should say, between the current supply and demand environment and what NVIDIA is doing. How do we think about the dynamic of lead times on some of your higher-end SKUs, and how should we think about that progressing as we move forward?
Colette Kress
So, an important part of this last year has been our ability to scale as a company to this size of revenue. Some folks look at it and think it must have been an easy process. But it took a lot of work with many of our supply chain partners, not just in ordering supply but, keep in mind, also in ordering capacity.
And our long-term relationships with them were really helpful for us to be able to scale as fast as we have. Unfortunately, we are still supply constrained, and it’s going to take us a little while yet to catch up. We plan to scale supply in each quarter this year, and we also plan to continue that as we go into ’25.
We’re making meaningful progress in catching up with that supply. Many folks look at both our ordering and what we have in inventory. But keep in mind, in the midst of that are many different durations: what we need just today, and what we are also procuring or solidifying as capacity for the long term.
So, we’re on track next year to make meaningful progress on that supply and demand. But at the same time that we are serving demand, we are also bringing new products to market, and those new products have, therefore, surfaced the onset of more demand coming in for our next set of products, and I know we’ll talk about that more.
Aaron Rakers
Yes, that’s perfect. And I know you just mentioned, if I add up purchase commitments, inventory and prepaid capacity, it grew about 40% sequentially this last quarter. The point is that you would expect that to continue to grow sequentially over the next handful of quarters.
Moving from the supply side to the demand side, how has your view on visibility changed, if at all? The demand visibility that you see, how demand is shaping? And the second piece of that question: when we talked last week after the quarter, you talked a little bit about the product cycle. The cadence of the product cycle is an important variable to consider for demand visibility. Maybe help us appreciate that a little bit more.
Colette Kress
So, when you think back over time at the many different architecture generations that we’ve gone through, and what we are seeing today, our relationship with our customers has grown stronger and stronger. And when that relationship comes to helping them think through what they plan to build in terms of their data centers, that’s a long-standing discussion. It helps us work with them on the exact configuration that they need, but it also helps us on demand visibility.
So, our work with them continues each and every day. And think about how long it takes to build a data center: from day one of planning to standing it up, that’s likely a year, even if you are a very well-seasoned team that has built data centers before. So already, we are seeing the work begin for next year: what do they want to build, bringing together both our existing portfolio and the new portfolio of products coming out, which again builds more and more demand as we go forward.
So, we’ll continue that process. Our visibility is strong. And when we talk about our visibility, each one of our customers, knowing where we were with supply, needed to help us plan; they needed to provide us that deep understanding of their needs. So, we’ll continue that path right now with the new products and maintain this process of understanding demand.
Aaron Rakers
Yes. And just a finer point on that: if you look at the slide deck that you’ve put out on your Investor Relations website, you outlined that cadence of the product cycle, right? It looks like it’s now more or less a one-year cadence, whereas in the past it was a much longer cadence. So part of this visibility discussion is that these customers are asking you for that kind of product cycle cadence. Is that fair?
Colette Kress
That’s correct. Additionally, the market has advanced so much, and the complexity of the AI and the solutions that folks are working on has grown so much, that they would love to see something new for each of the new plans that they have. And so, for us to work on more products, even in between architectures, as well as putting our architectures going forward on a faster cadence, is helpful to them, both for planning and to support the new projects that they’re doing.
Aaron Rakers
That’s perfect. So, shifting gears a little bit to the competitive landscape. I often get the question: one of your competitors launches a product next week, and there seems to be more and more narrative around what the hyperscale cloud guys are doing. I know AWS had an announcement this week, and you’ve seen what Microsoft announced recently. How do you characterize the competitive landscape that you’re seeing in Data Center?
Colette Kress
When we think about the work that we have done, we still step back and help folks understand: we didn’t just build a specific product or a chip. We built a full stack. Many times, we built a full data center, a data center for computing. From the minute information enters the data center, you can work with NVIDIA: NVIDIA systems, NVIDIA’s networking, NVIDIA’s overall software stack, from system software to as close to the applications as we can get.
We help people build models. We help people correct models. That’s the work that we do. So the competition is hard to look at, because there isn’t anything that is apples-to-apples with what we’re doing. They’re all very, very different. There can be specific chips that may help certain specific workloads.
But the reason that our customers continue to turn to us is the TCO savings of purchasing a full stack: it doesn’t require them to add the significant amount of resources that they would have to add on top if they only received a chip. That work continues for us, to help support their TCO efforts. So, when we think about other types of solutions coming to market, they’re great. Again, our position is the more the merrier, that’s fine. But we do know TCO is going to be the number one goal of many of our customers today.
Aaron Rakers
And a lot of times, I’ll field the question of: it’s CUDA, it’s the 4.8 million developers, it’s the stack there. But it’s so much deeper, right? I think we tend to get lost on the CUDA stickiness, but is that a fair assessment, that it’s so much more than just the CUDA layer?
Colette Kress
If you think about the onset of CUDA: CUDA is our development platform, it’s on every single one of our GPUs, and it has been for close to 15 years. That has built not only a very strong development platform, but a community that has joined that development platform.
Everything that we do on our GPUs today is both backward compatible and forward compatible. Every customer knows that. They move from one generation of architecture to our new generation, and everything still works. We also have to think through where that development community would like to be.
They like to be where all the other developers are, because so much work has been built over time, and somebody would have to rebuild that. And so, our position there has just been a very full end-to-end solution that no one can really argue with, and they understand that we are here to continue to innovate going forward. They can count on us: next year, yes, we are going to be thinking about new products for this market as well.
Aaron Rakers
So, I want to go further down the layers of the stack strategy in a minute here, networking and software. I definitely want to touch on those. But before we go there, I wanted to ask about the China restrictions, this recent round. I know last week you mentioned, look, we’re going to have solutions that adhere to the restrictions to sell into China within “months”. You also mentioned, though, that the China contribution would be down “significantly” this quarter. Help us think about that cadence. Did you take that full China business out of your expectations this quarter, and do we start to see it come back as some of these solutions come to market? Is that how we should think about it over the next quarter or two?
Colette Kress
The U.S. export controls this time were quite detailed, quite long, and took some real thinking about how we move forward to help our China customers. China is still a very big market, not just for us, but for much of the industry as a whole. And when you look through the export controls, we have to carefully go through what is just not an option, what they would not approve.
There’s a new area that says notify and review with the government. And then there’s an area that says carry on, this is fine for China. Now, what we want to do is make sure both our understanding of and our relationship with the U.S. government remain as solid as they have been.
We’ve created a great understanding of their needs, and we want to make sure we’re following that. Keep in mind, our China customers want that as well. If we bring them a new product, they do want to know that the U.S. government also agrees. And so, we’re working through the design of what we think we could do right now. We will certainly talk with the U.S. government and make sure that is aligned with them as well.
Given that the timing is not yet defined, you are correct: we’re not looking for that to be part of what we provided as an outlook for our Q4. And so, there is a sequential decline that we will see for China. Keep in mind, we will still be selling for our gaming business, and we will still sell some other parts of our Data Center business that we are able to, but there will be a significant change in the quarter. Going forward, we will support China with the approval and the understanding of the U.S. government.
Aaron Rakers
Has the dialogue this time around changed, relative to the first round of restrictions, which were more bandwidth-oriented? Has the dialogue with the U.S. government deepened as far as the engagement on solutions that will fall under the thresholds?
Colette Kress
Just given the complexity of the market, the complexity of semiconductors as a whole and the complexity of AI, yes, it was a much more thorough discussion on both sides.
Aaron Rakers
Yes. And as far as the cadence of — you said months, right, as far as new solutions for China…
Colette Kress
We’re working as fast as we can.
Aaron Rakers
I got you. Okay. Let’s go down the stack a little bit more, to a topic that I’ve written a lot about given my coverage universe: the networking business, which I think people are now really starting to see the significance of. To put some context around that: when you bought Mellanox, the business was running at $1.3 billion of revenue. I think, if my math is remotely right, you did $2.6 billion or even $2.7 billion of revenue in networking. I know Jensen endorsed $10 billion-plus of annualized revenue in this last quarter.
Help us appreciate that a little bit more. First of all, I want to know how much is InfiniBand and then I’m going to get to Spectrum-X and how you see that evolving as far as even deepening that networking strategy.
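For context on the figures in that question, here is a minimal sketch of the annualization arithmetic; the quarterly revenue range is the one Aaron cites, and the simple multiply-by-four framing of a run rate is an assumption of this sketch, not something stated on the call.

```python
# Quick sanity check on the networking run-rate framing above.
# Quarterly figures are the ones cited in the question; annualizing by
# multiplying one quarter by four is an assumption of this sketch.

quarterly_networking_revenue = (2.6e9, 2.7e9)  # USD, low/high estimate for the quarter

annualized = tuple(q * 4 for q in quarterly_networking_revenue)
print(f"Implied run rate: ${annualized[0]/1e9:.1f}B - ${annualized[1]/1e9:.1f}B per year")
# -> roughly $10.4B - $10.8B, consistent with the "$10 billion-plus" annualized figure.
```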
Colette Kress
Yes, a great question regarding networking. At the time that we completed the acquisition, one of the things that we did know was that it was a match of culture: a match of culture in terms of how both teams worked on innovation and thinking about where the future would be. And the basis of their data center business was high-performance computing, very similar to what we’ve done.
We’re so pleased with how well the acquisition has helped both our solutions for customers and the partnerships that we now have across so many of our peers that are in Israel. And you’re correct: we’ve now reached an annual run rate of nearly $10 billion in our networking, and a very sizable amount of that comes when we are selling GPU systems and networking together.
They look for our high-end networking solutions. Why? Because they’re best of breed for accelerated computing and also for our AI solutions. If you are doing AI, both training and inferencing, the importance of InfiniBand as the standard for those large clusters is very, very key. So, InfiniBand has also grown even faster than our total networking business.
And we have very large customers that have been using it and installing it throughout, and that’s an important part of the process of building out their data centers. But we also understand that Ethernet for accelerated computing and AI is very key. And so, our Spectrum-X will be coming out in the new calendar year, again with high speed, moving from 400 gig to 800 gig. And it will be very, very key, now based on Ethernet.
Ethernet is important for enterprises with the multi-tenancy types of data centers that they have. And we do know that is an important piece. So, we’re going to be able to really address both of these.
Aaron Rakers
So, Colette, there’s a debate about InfiniBand versus Ethernet: does Ethernet replace InfiniBand? Do you look at Spectrum-X as being accretive to the business? Or is it either/or, do they just play in different pieces of this AI stack? A lot of your white papers delineate between AI factories and AI cloud, and it seems like that might be the delineation of InfiniBand versus Ethernet. Maybe help us understand where one plays, and is it accretive to the model?
Colette Kress
It’s absolutely accretive. This is not taking anything away. InfiniBand, again, is a standard that many of them will have. Now, opening it up for those that are on Ethernet is an addition in that key place. It’s true that we think about it in terms of what the AI factory will standardize on, for example supercomputers that are built just for AI, versus what others will standardize on.
Thinking about the traffic that is coming into a data center, particularly for some of these large inferencing platforms, both InfiniBand and the new Spectrum-X really, really work to manage all of those traffic challenges that may be there.
Aaron Rakers
Yes. That’s perfect. And again, that’s Q1 when those come out, and you’ve announced partnerships with Dell and the server ecosystem.
Colette Kress
Absolutely.
Aaron Rakers
Okay. Great. Sticking with the product portfolio: the announcement this week, AWS is the first deployer, I think, of the GH200. So that’s Grace Hopper, the combination of the ARM-based CPU and the Hopper GPU. Talk a little bit about where that fits in the strategy. What does Grace Hopper look like as we start to think about that piece of the product portfolio going forward?
Colette Kress
Correct. So, we came out with the Grace Hopper GH200, and we started shipping it within Q3. Q3 included many of the supercomputer design wins that we have had, so the shipping has begun. But what we have now is GH200 with Amazon, with their AWS EC2 instances. And what is important about that is they will also take it to create a full supercomputer, where you are now able to keep 32 GPUs together and working, as well as a new, revised NVLink within there. This is, again, yet another new product introduction.
We’re excited to work with AWS. They will be standing up the very first GH200 as a CSP. There will also be the opportunity for them to work with us on DGX Cloud on GH200 as well. So now, working with customers on software and solutions using GH200 is just a great opportunity, both in using that CPU, but also in the faster performance as a whole in terms of…
Aaron Rakers
Is it going to be Grace Hopper, GH200, GH300, whatever the subsequent versions might look like? Or is there just a Grace? Is there a market for just an ARM-based CPU from NVIDIA?
Colette Kress
There is an opportunity for just Grace. In new product scenarios that we could see in the data center, you will likely see opportunities for Grace on its own as well.
Aaron Rakers
That’s perfect. So, I want to shift now to software, something we’ve also written a lot about. And I think the reason for writing more and more about it is that I hear you becoming more vocal about it, right? You said this last quarter that it’s on pace to hit $1 billion in ARR. Can you walk us through software monetization for NVIDIA, the big drivers? I certainly have questions after that, but walk us through the key drivers on the software side.
Colette Kress
We talk about our software more and more because it is an important reason why people choose our stack and why the work that they’re doing is so successful. There are years of software building behind it. There is software that comes with every GPU; even though it is not seen as part of the invoice, it is provided for free, and it is important to the work that they’re doing.
But now there is a new opportunity for us to look at software monetization as well, and there are reasons for that. Our work is with enterprises; our true end customers are the enterprises around the world, of all shapes and sizes, for the work that they do. When they are building accelerated solutions or AI, they need help, as they are likely not staffed with a significant number of software engineers. That software stack is essential. It’s essential that things have already been prebuilt and predesigned so that they can work within their infrastructure.
They can also turn to us for help and assistance with fixing models, optimizing models, and additional work for new projects that they may be able to do. Those enterprises are very focused on seeing their AI computing in the same frame as they see all of the computing that’s in their data center.
It is important that our software now leverages and works with VMware, as most enterprises leverage VMware to manage and operate all of their data center, all of their different workloads. So that is key for us, to be a part of this as we see data centers in the future having a very big portion of their computing be accelerated and AI.
Those enterprises are looking for a solution that answers: who’s accountable for keeping up that software, who is providing the security platform with it, and how can I create a trusting relationship around it? That is why it monetizes. That’s why this is something that we can actually sell. NVIDIA AI Enterprise is our software platform, essentially the AI operating system for enterprises. That will probably be a very big part of our software going forward.
We have other components as well. Omniverse is a key component. And let’s not forget our AV software for automotive that will be with us. These things will scale not only with our types of customers, but also with our infrastructure: as people install more and more infrastructure, that operating system will be important for them too.
Aaron Rakers
If I think about $1 billion of ARR this year, unless you want to give me a number, which I don’t think you will, is it fair to say that the overwhelming majority of that is AI Enterprise software today?
Colette Kress
There’s a lot of different pieces in there, but most of it is associated with what’s going to the data center and a lot of different data center components.
Aaron Rakers
And that’s interesting, because do you think that consumption model is through your cloud partners? Now that I look at it, you’ve got Oracle, Microsoft Azure, Google, and just this week, DGX at AWS as well. Is it consumed through your cloud partners, or is it consumed on traditional enterprise on-premise infrastructure?
Colette Kress
Yes. The great thing is it’s consumed in almost every form of the channel that you can think of. Whether it’s, I’m going to self-design it with an ODM, or with a Dell or an HP, or I’m going to have cloud credits, work with my cloud providers and download the software there, all of these are opportunities for them to procure our software. We’re making it easy for that integration; you can pretty much get it in a lot of different places.
Aaron Rakers
So, through the cloud guys, price per GPU per hour SaaS model?
Colette Kress
That’s correct, it is. You should look at it as somewhere in the range of about $4,500 to $5,000 per year per GPU. Somewhere in that range is what we’re looking at.
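For illustration only, here is a minimal sketch of the back-of-envelope arithmetic that pricing implies; the per-GPU price range is the one quoted above, but the fleet size in the example is a hypothetical assumption, not a figure disclosed on the call.

```python
# Illustrative only: rough back-of-envelope for per-GPU software licensing.
# The per-GPU price range comes from the discussion above; the fleet size
# below is a hypothetical assumption, not a disclosed figure.

PRICE_PER_GPU_PER_YEAR = (4_500, 5_000)  # USD, low/high end of the quoted range

def annual_software_revenue(gpus_licensed: int) -> tuple[int, int]:
    """Return (low, high) annualized software revenue in USD for a given fleet size."""
    low, high = PRICE_PER_GPU_PER_YEAR
    return gpus_licensed * low, gpus_licensed * high

if __name__ == "__main__":
    # Example: a hypothetical 200,000 licensed GPUs lands around $0.9B-$1.0B
    # per year, in the neighborhood of the ~$1B ARR pace mentioned earlier.
    low, high = annual_software_revenue(200_000)
    print(f"${low/1e9:.2f}B - ${high/1e9:.2f}B per year")
```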
Aaron Rakers
Okay. And I’ve asked you this many times after conference calls: one of the metrics you guys talk about is these multiyear cloud service agreement numbers. Some of that’s internal usage, a lot of that might be internal usage, right? I’m always looking for that leading indicator on the software side. Some of that is actually your potential payment to your cloud partners for the infrastructure for the software. Is that a fair assessment?
Colette Kress
So, we have cloud service agreements, just like every other enterprise out there. Our cloud service agreements serve many different uses: the ability to stand up in the cloud so we can understand what enterprises are facing, and then we use that to test our software, test our future solutions, and work with them on new use cases for products.
We do this all the time. Most of that right now has been centered around our internal use. But now we are building for DGX Cloud, where we have established capacity within the CSPs. So, for any enterprise customer who comes in and says, we’d like your DGX Cloud, we can move them across multiple different CSPs. They don’t have to be in any one; we’re in almost all of them, and that will help them come to market as quickly as they can with their product solutions.
Aaron Rakers
So, two other real quick ones on the software side. Omniverse, I want to say, if my memory is right, was initially introduced back in the latter part of the ’21-ish time frame, maybe off by a year, I don’t remember. But the progression of that software piece, does it just take a little bit longer? It’s more…
Colette Kress
There’s great progress already in what we’re seeing in Omniverse, working with very large manufacturing and factory types of builds. Given the importance of the work that they need to do to redesign and/or initially design any one of those factories for the most efficiency, they are clearly using Omniverse.
So many of the large car companies, the car manufacturers, are really looking at Omniverse to help them there. But you can extend this to almost any type of factory that is being built, industrial factories or warehousing: how do I redesign that? Because by using an Omniverse environment, you have the ability to create a complete digital twin of your existing and/or future facility without going through a full prototype of a building and making large errors throughout.
What happens is, each and every day, more types of uses come to Omniverse as we add many more of the capabilities that they need to do their work. That will always be added, and so it will be a continuous evolution. That 3D type of view, versus the 2D that is used so much in design and build today, will be essential. So yes, we’re pleased with the progress, and we’ll continue to see it in the future.
Aaron Rakers
And then the final thing on software: automotive. Am I still thinking Mercedes flagship, Jaguar Land Rover flagship, in the 2025, 2026 time frame? Is that fair?
Colette Kress
Absolutely. We are busy working, but yes, that’s when we expect the pilots to start, as well as the full fleets for both of those companies.
Aaron Rakers
So, I’ve got three and a half minutes left. I’m going to maybe rapid fire through a couple of quick questions. Mix has been a huge driver of the business. Where do we think gross margin should go? I mean, it’s remarkable, right? You’re at a 75% gross margin. How do we think about the trajectory of gross margin? It seems like data center mix will continue to go higher, and software is going to layer on top of that. How should we think about that?
Colette Kress
Yes. So, when you think about our gross margin, although it is an important metric on the P&L for many of us, keep in mind it doesn’t capture everything when we talk about our ASPs or think about the actual manufacturing cost.
It really is just the manufacturing costs that are included in that, because the work that we did in designing the software, and/or the full engineering work on many other solutions that keep giving even after we have shipped the product, doesn’t easily get represented. Most of that is still in OpEx. So, it’s a metric, and it’s an important metric. As for the 75%, given the size of our data center business, you are now seeing the company margins and the data center margins be about the same, because what you are seeing as a company total is mostly just the data center.
We believe this is about the level where it will stay, with the continuation from this point probably being software, and software adding to that. But we think you are pretty much now seeing the data center margin.
Aaron Rakers
That’s perfect. And then the other quick question I want to ask: you ended this last quarter with $18-plus billion of cash on the balance sheet. I think everybody can look at a model and say that you guys are going to generate a lot of cash. How do you think about strategic M&A? The platform strategy and Mellanox were obviously a home run and have played out. But how do you think about the balance of strategic, maybe platform-expanding M&A activity for the company?
Colette Kress
Yes. I’d say, first stepping back, cash allocation is a very top priority, to make sure we think through all the right avenues where we want to apply that cash. First is always going to be investment back into the business, whether that be capital or OpEx; investing in the business and in innovation is, right off, going to be our number one use of cash.
Secondly, we do want to make sure that our investors get their portion, and we want to make sure that we don’t have dilution associated with the equity that we provide to employees. Our equity to employees is very important; it is a very important part of their compensation, but we do want to keep that dilution about as flat as possible. After that, we look at investments every single day: investments where we can learn from many companies in terms of the work that they are doing, but also, working with other companies, whether there is an opportunity for M&A.
It’s hard to have found the perfect Mellanox in the past and think that would be easy to find again. It’s a new environment right now in terms of M&A, but it doesn’t mean that we don’t still look. We look at smaller companies and teams that bring a unique addition to our company, and that is something we’ll look at all the time.
Aaron Rakers
So, we’ve got literally 12 seconds left, 10 now. I’m going to ask you just one final question, which is: you talk to hundreds of investors after earnings, right? Your earnings call doesn’t end and then you go and do something else; this is a continual flywheel of discussion. What are you surprised that people aren’t asking you about more? Is there any topic where you think, man, I’m surprised I’m not getting this question? What would that be?
Colette Kress
The surprising question that I am not getting. Well, I would look at it this way: our goal with earnings, and the reason why we do talks like this, is to make sure there is clarity around our products, clarity around accelerated computing, and why there has been this growth; that has been an important part for us.
So, I do believe the questions mainly surround a little bit more detail that they want. But they have gotten a very clear understanding that with generative AI, there has been a significant change in the focus of enterprises around the world toward building out their own AI solutions.
Each enterprise looks and says the future is about using enterprise AI; otherwise, they will not be able to compete in the market. That’s a pretty big market to go after and work through. It is also important to think about how our focus on sovereign AI has come forth, and folks have asked what we mean by that. That has probably been an important key understanding.
We’re speaking here, and we’re in the U.S. We see that ChatGPT is U.S. culture, U.S. data, U.S. ways of thinking. Each and every nation and/or region wants the same thing. They also know that they have proprietary data and information as well. So not only do we speak with so many different enterprises about their work right now to add AI, we are also working with many regions to build out what they need.
Aaron Rakers
Colette, we’re over time. I appreciate you joining us this morning. Thank you so much.
Colette Kress
Thank you.