• Well, it's been so interesting watching these deals. In OpenAI's tie-ups with some of the big publicly traded companies, OpenAI has usually been given a sweetener, by Nvidia, by AMD. Broadcom just seems to be about the independence it gives to OpenAI.
    speaker1
  • speaker2
    Yeah, and I think the model here is the Google TPU model. I mean, when you think about Broadcom's almost $20 billion run rate for AI chips, more than half of that is from Google TPUs. So what OpenAI is saying is: you help us get there in terms of the ramp-up, like the Google TPUs, which are now in their seventh generation of
  • chips.
    speaker1
  • speaker2
    I mean, they've done it at a very quick pace. From that perspective, it will help OpenAI reduce costs by up to 30 to 40 percent. Why? If you think about it, one gigawatt takes 40 to 50 billion dollars. A one-gigawatt buildout with Broadcom
  • chips
    speaker1
  • speaker2
    would be at least 30 to 40% cheaper, because the cost of those chips is the highest component in that gigawatt buildout. So Broadcom helps you lower that cost of chips. And I think that's the model here: yes, we want merchant silicon, but we also want custom silicon with Broadcom, because that's the kind of diversification Google has. And that's why their cost of infrastructure is the lowest among all the hyperscalers. If we do the read-across from Google's TPUs, who else is in the mix there?
  • And Broadcom's pitch is: I'll help you custom design a chip, I'll help you with networking gear. But there's a lot more to an AI data center than all of that.
    speaker1
  • speaker2
    Absolutely, and you need to source the power, you need all the other deals. But then, when you look at what Amazon is trying to do with Trainium, they're doing that with Marvell. Microsoft is doing that with Marvell as well. And they haven't had the same kind of success that Google has had with TPUs with Broadcom. So to my mind, it was natural for OpenAI to try with Broadcom, given the success that, again, Google has had compared to everyone else who is trying to do custom silicon. And yes, they will do deals for power; that's what OpenAI is good at in terms of sourcing different providers, that's what Sam Altman does. But clearly chips are the component that costs 60 to 70 percent of the data center, so you want to make sure you get that at the lowest cost. You won't be able to do that with Nvidia; Nvidia will still be the highest-cost provider, even though they are making an investment. AMD will likely cut that cost, but it won't be the same performance per watt. Broadcom will do custom specs for you.
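    [Editor's note: the cost argument in this exchange can be sketched roughly as follows. The dollar figure and chip share are the speaker's estimates from this conversation; the 50% custom-silicon discount is an illustrative assumption, not a disclosed deal term.]

    ```python
    # Rough sketch of the speaker's cost argument. The buildout cost and
    # chip share come from the transcript; the discount is an assumption.
    total_buildout = 50e9    # ~$40-50B per gigawatt of data center
    chip_share = 0.65        # chips are "60 to 70 percent" of the cost
    custom_discount = 0.50   # assume custom silicon at ~half merchant price

    chip_cost = total_buildout * chip_share
    savings = chip_cost * custom_discount
    savings_pct = savings / total_buildout

    print(f"Chip cost: ${chip_cost / 1e9:.2f}B")
    print(f"Savings: ${savings / 1e9:.2f}B, ~{savings_pct * 100:.0f}% of buildout")
    ```

    Under those assumptions the total buildout comes out roughly a third cheaper, which is consistent with the "30 to 40 percent" figure cited above.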
  • And then they can do it at the scale that Google is doing. And it's for inference. I'm interested if you can interpret, when OpenAI's Sam Altman and Hock Tan get together on a podcast and announce this sort of deal: what is it, from understanding your own large language model and its needs, that can really be built into the custom silicon?
    speaker1
  • speaker2
    I mean, just this past week I read about tiny recursive models. Everyone is looking at how these large models can be run more efficiently in terms of inferencing costs. And whether it's tiny recursive models or some other form, you want minimum latency, and power is your real constraint, so you want maximum performance per watt. If you're optimizing for those two, you are going to go with custom silicon, because that's what Google has shown: they can run YouTube videos best because it's their custom silicon, and no other merchant silicon can give you that kind of performance. And I think that's what OpenAI is going after.
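    [Editor's note: the two targets the speaker names, latency and performance per watt, can be framed as a simple constrained choice. All numbers below are invented for illustration and describe no real chip.]

    ```python
    # Toy illustration of the selection criterion the speaker describes:
    # among accelerators that meet a latency budget, pick the best
    # performance per watt. Specs are hypothetical, not real hardware.
    accelerators = {
        "merchant_gpu": {"tokens_per_sec": 1000, "watts": 700, "latency_ms": 40},
        "custom_asic":  {"tokens_per_sec": 900,  "watts": 350, "latency_ms": 30},
    }

    LATENCY_BUDGET_MS = 50  # serve each request within this bound

    def perf_per_watt(spec):
        return spec["tokens_per_sec"] / spec["watts"]

    # Filter to chips that satisfy the latency constraint, then maximize
    # throughput per watt, since power is the binding constraint.
    eligible = {name: s for name, s in accelerators.items()
                if s["latency_ms"] <= LATENCY_BUDGET_MS}
    best = max(eligible, key=lambda name: perf_per_watt(eligible[name]))
    print(best, round(perf_per_watt(eligible[best]), 2))
    ```

    With these made-up numbers the custom part wins on tokens per watt even though its raw throughput is lower, which is the shape of the trade the speaker attributes to Google's TPUs.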