Jensen Huang on AI, synthetic data, & his American Dream (Pt 3 of 3)_Summary & Comments by Joanne Z. Tan_Season 2, Episode 61

Jensen Huang on why AI will be indispensable, how he uses AI, why AI-generated synthetic data may make up 99% of all AI knowledge within 10 years, and his own American Dream. This Part 3 is the third and last 10-minute segment of a 3-part recording of Jensen Huang's entire 30-minute talk at Stanford on July 26, 2025. Read Pt. 3 as a 5-minute blog, or watch Pt. 3 as a 16-minute video.

Part 3 Summary:

Jensen Huang advised young people to learn how to reason and break things down to first principles. To know what the first principles are: "Go to school!"

In answering concerns about humanity's collective intelligence managing the collective intelligence of AGI, Jensen Huang stated that "... human generated knowledge and human generated data would today be 99%, in about 10 years it will probably be 1%. The vast majority of human knowledge will be generated by AI. It will be AI generated data that the other AIs learn from, ... it's going to be synthetic generated intelligence. ... that's just intelligence, it is not a big deal, It's just data ... that the amount of AI generated knowledge is to be incredibly high."

(Comments from Joanne Z. Tan:)

I respectfully disagree with Jensen Huang regarding synthetic data. I wrote an article (link below) seven months ago analogizing the danger of synthetic data to Norman Rockwell's famous painting, "The Gossips": what may start as a story about a "cat" may end up being about an "elephant" after being passed through 15 people. It is therefore important to label data as either originating from a source or as synthetic, before it is used to train AI and becomes untraceable, to avoid misinformation that can cause catastrophes like a financial market meltdown.
Here is my article: https://10plusbrand.com/2025/01/13/synthetic-data-ai-toxic-assets-financial-crises-2008-1987-joanne-z-tan/

This point is echoed by a prominent expert in the AI fintech industry, who was also a chief data officer at both state and federal government levels, in the "Interviews of Notables and Influencers". The subheadings about synthetic data speak for themselves: https://10plusbrand.com/2025/04/07/ai-future-synthetic-data-ai-mistakes-ai-governance-crypto-regulations-knowledge-economy-tammy-roust-interview-joanne-z-tan/

(At 46'21"): "Untagged synthetic data pose systemic risks; model collapse; the real dangers from AI hallucination"

(At 49'56"): "Need for auto tagging of synthetic data when it is being generated and used; the danger of group think" and "We need to have a human consensus mechanism & AI governance committee to correct AI's mistakes".

Jensen Huang said this about AI: "You want the smartest friends? You want the most productive friends? ... go engage AI as fast as possible, because they're super, super smart and they're going to help you solve problems."

"It's also the case that we want second opinions, and third opinions. I use multiple AIs at the same time solving the same problems. And I take the answers from one and I give it to the other one. I'll make the second one judge the first one: What do you think about this answer? ... And I ask each one of them to produce, you know, based on everything that you've now learned, why don't you reflect on what I told you and what I gave you, and then give me a better answer. And so you notice I'm interacting with AI the way I interact with people, I want them on my side, I want them to work with me."

(Comments from Joanne Z. Tan:)

The above sounds like circular reasoning to me.
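The auto-tagging idea discussed above can be made concrete with a minimal sketch. This is a hypothetical illustration, not anything from NVIDIA or the interviewed expert: each record carries a provenance label ("source" for human-originated data, "synthetic" for AI-generated data) attached at creation time, so a training pipeline can filter or audit records before the origin becomes untraceable.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    """A hypothetical training record tagged with its provenance at creation time."""
    text: str
    provenance: str  # "source" (human-originated) or "synthetic" (AI-generated)

def filter_for_training(records, allow_synthetic=False):
    """Keep human-originated records; admit synthetic ones only when explicitly allowed."""
    allowed = {"source"} | ({"synthetic"} if allow_synthetic else set())
    return [r for r in records if r.provenance in allowed]

corpus = [
    Record("Quarterly earnings report, audited.", "source"),
    Record("Model-generated market commentary.", "synthetic"),
]

# With the default policy, only the human-originated record survives.
human_only = filter_for_training(corpus)
print(len(human_only))  # 1
```

The point of the sketch is that the label must travel with the data from the moment it is generated; once mixed into an unlabeled corpus, the distinction the article argues for can no longer be recovered.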
If nothing is done to label synthetic data used by all AI models, what makes their second and third opinions any more reliable?

Without holding AI accountable by resorting to the "first principle thinking" that Jensen Huang has applied over and over, what makes AI smarter or credible?

Assuming that Jensen Huang's preference for human control over AI tools is not hijacked by AI yet, AI is threatening human intelligence with this "double whammy":

By automating tasks, AI will take away the OPPORTUNITY for humans to learn the basic skills that train their minds to advance to higher level positions;

By relinquishing analytical and critical thinking to AI, human mental acuity will be degraded. Without doing the thinking ourselves to practice and strengthen the skills, humanity will lose reasoning CAPABILITY by relying on AI.

Finally, Jensen reflected on the American melting pot, amazing opportunities, and the rule of law for both immigrants and Americans. He said it is a combination that is "SO delicate, ... it depends on so many things working together, .... It is not a guarantee, ... I really hate to see us squander that ... I hope that we continue to protect that."

Regarding the competition between China and the US, he said "competition is great, but conflict is less good." He cautioned that what is going on between governments and countries ought not to be conflated with how ...