Dwarkesh Podcast

By: Dwarkesh Patel

Deeply researched interviews

www.dwarkesh.com
Science
Episodes
  • Jensen Huang – TPU competition, why we should sell chips to China, & Nvidia’s supply chain moat
    Apr 15 2026

    I asked Jensen about TPU competition, Nvidia’s lock on the ever more bottlenecked supply chain needed to make advanced chips, whether we should be selling AI chips to China, why Nvidia doesn’t just become a hyperscaler, how it makes its investments, and much more. Enjoy!

    Watch on YouTube; read the transcript.

    Sponsors

    * Crusoe’s cloud runs on state-of-the-art Blackwell GPUs, with Vera Rubin deployment scheduled for later this year. But hardware is only part of the story—for inference, Crusoe’s MemoryAlloy tech implements a cluster-wide KV cache, delivering up to 10x faster TTFT and 5x better throughput than vLLM. Learn more at crusoe.ai/dwarkesh

    * Cursor helped me build an AI co-researcher over the course of a weekend. Now I have an AI agent that I can collaborate with in Google Docs via inline comment threads! And while other agentic coding tools feel like a total black-box, Cursor let me stay on top of the full implementation. You can try my co-researcher out at github.com/dwarkeshsp/ai_coworker, or get started on your own Cursor project today at cursor.com/dwarkesh

    * Jane Street spent ~20,000 GPU hours training backdoors into 3 different language models, then challenged my audience to find the triggers. They received some clever solutions—like comparing the base and fine-tuned versions and extrapolating any differences to reveal the hidden backdoor—but no one was able to solve all 3. So if open problems like this excite you, Jane Street is hiring. Learn more at janestreet.com/dwarkesh

    Timestamps

    (00:00:00) – Is Nvidia’s biggest moat its grip on scarce supply chains?

    (00:16:25) – Will TPUs break Nvidia’s hold on AI compute?

    (00:41:06) – Why doesn’t Nvidia become a hyperscaler?

    (00:57:36) – Should we be selling AI chips to China?

    (01:35:06) – Why doesn’t Nvidia make multiple different chip architectures?



    Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
    1 h 43 m
  • Michael Nielsen – How science actually progresses
    Apr 7 2026
    Really enjoyed chatting with Michael Nielsen about how we recognize scientific progress.

    It's especially relevant for closing the RL verification loop for scientific discovery. But it's also a surprisingly mysterious and elusive question when you look at the history of human science.

    We approach this question through stories like Einstein (who claimed that he hadn't even heard of the famous Michelson-Morley experiment, which is supposed to have motivated special relativity, until after he had come up with the theory), Darwin (why did it take till 1859 to lay out an idea whose essence every farmer since antiquity must have observed?), Prout (how do you recognize that isotopes exist if you cannot chemically separate them?), and many others.

    The verification loop on scientific ideas is often extremely long and weirdly hostile. Ancient Athenians dismissed Aristarchus's heliocentrism in the 3rd century BC because it would imply that the stars should shift in the sky as the Earth orbits the Sun. The first successful measurement of stellar parallax was in 1838. That's a 2,000-year verification loop.

    But clearly human science is able to make progress faster than raw experimental falsification/verification would imply, even in cases where experiments are very ambiguous. How?

    Michael has some very deep and provocative hypotheses about the nature of progress. One I found especially thought-provoking is that aliens will likely have a VERY different science and tech stack than us. That contradicts the common-sense picture of a linear tech tree that I was assuming, and it has some interesting implications for how future civilizations might trade and cooperate with each other.

    Watch on YouTube; read the transcript.

    Sponsors

    * Labelbox researchers built a new safety benchmark. Why? Current safety benchmarks claim that attacks on top models are successful only a few percent of the time, but the prompts in those benchmarks don't reflect how real bad actors actually write. You can read Labelbox's research here. If this could be useful for your work, reach out at labelbox.com/dwarkesh

    * Mercury has an MCP that lets you give an LLM access to your full transaction history, including things like attached receipts and internal notes. I just used it to categorize my 2025 transactions, and it worked shockingly well. Modern functionality like this is exactly why I use Mercury. Learn more at mercury.com

    * Jane Street's ML engineers presented some of their GPU optimization workflows at GTC, showing how they use CUDA graphs, streams, and custom kernels to shave real time off their training runs. You can watch the full talk here. And they open-sourced all the relevant code here. If this kind of stuff excites you, Jane Street is hiring. Learn more at janestreet.com/dwarkesh

    Timestamps

    (00:00:00) – How scientific progress outpaces its verification loops

    (00:17:51) – Newton was the last of the magicians

    (00:23:26) – Why wasn't natural selection obvious much earlier?

    (00:29:52) – Could gradient descent have discovered general relativity?

    (00:50:54) – Why aliens will have a different tech stack than us

    (01:15:26) – Are there infinitely many deep scientific principles left to discover?

    (01:26:25) – What drew Michael to quantum computing so early?

    (01:35:29) – Does science need a new way to assign credit?

    (01:43:57) – Prolificness versus depth

    (01:49:17) – What it takes to actually internalize what you learn

    Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
    2 h 3 m
  • Terence Tao – Kepler, Newton, and the true nature of mathematical discovery
    Mar 20 2026

    We begin the episode with the absolutely ingenious and surprising way in which Kepler discovered the laws of planetary motion.

    People sometimes say that AI will make especially fast progress at scientific discovery because of tight verification loops.

    But the story of how we discovered the shape of our solar system shows how the verification loop for correct ideas can be decades (or even millennia) long.

    During this time, what we know today as the better theory can actually make worse predictions.

    And the reason the better theory survives this epistemic hell is some mixture of judgment and heuristics that we don't even understand well enough to articulate, much less codify into an RL loop. Hope you enjoy!

    Watch on YouTube; read the transcript.

    Sponsors

    - Jane Street loves challenging my audience with different creative puzzles. One of my listeners, Shawn, solved Jane Street’s ResNet challenge and posted a great walk-through on X. If you want to try one of these puzzles yourself, there’s one live now at janestreet.com/dwarkesh.

    - Labelbox can get you rubric-based evals, no matter your domain. These rubrics allow you to give your model feedback on all the dimensions you care about, so you can train how it thinks, not just what it thinks. Whatever you’re focused on—math, physics, finance, psychology or something else—Labelbox can help. Learn more at labelbox.com/dwarkesh.

    - Mercury just released a new feature called Insights. Insights summarizes your money in and out, showing you your biggest transactions and calling out anything worth paying attention to. It’s a super low-friction way to stay on top of your business. Learn more at mercury.com/insights.

    Timestamps

    (00:00:00) – Kepler was a high temperature LLM

    (00:11:44) – How would we know if there’s a new unifying concept within heaps of AI slop?

    (00:26:10) – The deductive overhang

    (00:30:31) – Selection bias in reported AI discoveries

    (00:46:43) – AI makes papers richer and broader, but not deeper

    (00:53:00) – If AI solves a problem, can humans get understanding out of it?

    (00:59:20) – We need a semi-formal language for the way that scientists actually talk to each other

    (01:09:48) – How Terry uses his time

    (01:17:05) – Human-AI hybrids will dominate math for a lot longer



    Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
    1 h 24 m
No reviews yet