AI as New Global Power?

Our Deputy Head of Global Research Michael Zezas and Stephen Byrd, Global Head of Thematic and Sustainability Research, discuss how the U.S. is positioning AI as a pillar of geopolitical influence and what that means for nations and investors.

Read more insights from Morgan Stanley.

----- Transcript -----

Michael Zezas: Welcome to Thoughts on the Market. I'm Michael Zezas, Morgan Stanley's Deputy Head of Global Research.

Stephen Byrd: And I'm Stephen Byrd, Global Head of Thematic and Sustainability Research.

Michael Zezas: Today – is AI becoming the new anchor of geopolitical power? It's Wednesday, February 27th at noon in New York.

So, Stephen, at the recent India AI Impact Summit, the U.S. laid out a vision to promote global AI adoption built around what it calls "real AI sovereignty," or strategic autonomy through integration with the American AI stack. But several nations from the global south, and possibly parts of Europe, appear skeptical of dependence on proprietary systems, citing concerns about control, explainability, and data ownership. And what's at stake isn't just technology policy. It's the future structure of global power, economic stratification, and whether sovereign nations can realistically build competitive alternatives outside the U.S. and China.

So, Stephen, you were there, and you've been describing a growing chasm in the AI world – in terms of access and strategies – between the U.S. and much of the global south, and possibly Europe. From what you heard at the summit, what are the core points of disagreement driving that divide?

Stephen Byrd: There definitely are areas of agreement, and we've seen a couple of high-profile agreements reached between the U.S. government and the Indian government just in the last several days. So there certainly is a lot of overlap. I'd point to the Pax Silica agreement, which is so important to securing supply chains and securing access to AI technology.

I think the focus for India, as you said, is explainability and open access. I was really struck by Prime Minister Modi's focus on ensuring that all Indians have access to AI tools that can help them in their everyday lives.

A really tangible example that stuck with me is someone in a remote village in India who has a medical condition, with no doctor or nurse nearby, using AI to take a photo of the condition, receive a diagnosis, receive support, and figure out what the next steps should be. That's very powerful. So I'd say open access and explainability are very important.

Now, the American hyperscalers are very much trying to serve the Indian market and the objectives of the Indian government. And so there are versions of their models that are open weights and being made freely available – to health agencies in India, as an example, and to the Indian government. So there is an attempt to serve a number of objectives, but it's around this key issue of open access and explainability that I do see tension.

Michael Zezas: So, let's talk about that a little bit more, because it seems one of the concerns raised is this idea of being captive within proprietary large language models. And maybe that includes the risk of having to pay more over time or of losing control of citizen data. But at the same time, you've described real benefits to AI that these countries want to adopt.

So, what is the tension between being captive to a model versus pursuing open and free models? Is there a major quality difference? And is that trade-off acceptable?

Stephen Byrd: See, that's what's so fascinating, Mike. What we need to be thinking about is not just where the technology is today, but where it will be in six months, 12 months, 24 months. And from my perspective, it's very clear that the proprietary American models are going to be much, much more capable.

So, let's put some numbers around that. The big five American firms have assembled about 10 times the compute to train their current LLMs compared to their prior LLMs, and that's a big deal. If the scaling laws hold, a 10x increase in training compute should result in models that are about twice as capable.

Now just let that sink in for a minute: twice as capable from here. That's a big deal. And so, when we think about the benefit of deploying these models, whether in the life sciences or any number of other disciplines, those benefits could start to get very large. And the challenge for the open models will be whether they can keep up in terms of access to compute for training, and access to data to train those models. That's a big question.

Now, again, there's room for both approaches, and it's very possible for the Indian government to continue to experiment and really see which approach is going to serve its citizens best. And I was really struck by just how focused the Indian ...
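The compute-to-capability arithmetic in the transcript can be sketched numerically. The transcript does not specify a scaling exponent; the power-law form and the implied exponent below are assumptions, back-solved from the stated "10x compute, roughly 2x capability" claim:

```python
# Back-of-envelope sketch of the scaling claim above: "a 10x increase
# in training compute [should] result in models about twice as capable."
# Assumption (not from the transcript): capability follows a power law
# in training compute, capability ∝ compute ** alpha.
import math

# Solve 10 ** alpha = 2 for the implied exponent.
alpha = math.log(2) / math.log(10)  # ~0.301

def capability_multiplier(compute_multiplier: float, exponent: float = alpha) -> float:
    """Relative capability gain implied by scaling training compute."""
    return compute_multiplier ** exponent

print(f"implied exponent alpha: {alpha:.3f}")
print(f"10x compute  -> {capability_multiplier(10):.2f}x capability")
print(f"100x compute -> {capability_multiplier(100):.2f}x capability")
```

Under this assumed power law, the gains compound: two successive 10x compute scale-ups would imply roughly 4x capability, which is the shape of the argument being made about successive LLM generations.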