TechSurge: Deep Tech Podcast

By: Celesta Capital | Deep Tech Venture Capital Firm

The TechSurge: Deep Tech VC Podcast explores the frontiers of emerging tech, geopolitics, and business, with conversations tailored for entrepreneurs, technologists, and investment professionals. Presented by Celesta Capital and hosted by Founding Partners Nic Brathwaite, Michael Marks, and Sriram Viswanathan. Send feedback and show ideas to techsurge@celesta.vc.

Each discussion delves into the intersection of technology advancement, market dynamics, and the founder journey, offering insights into the vast opportunities and complex challenges ahead. Episode topics include AI, data center transformation, blockchain, cybersecurity, healthcare innovation, VC investment trends, tips for first-time founders, and more.

Tune in to hear directly from Silicon Valley leaders, daring new founders, and visionary thinkers. Past guests include investor Vinod Khosla, former PepsiCo CEO Indra Nooyi, the Global Head of McKinsey, and executive leaders from Microsoft, OpenAI, and other leading tech companies.

New episodes release every two weeks. Visit techsurgepodcast.com for more details and to sign up for our newsletter and other content!
Episodes
  • Pixels to Intelligence: The Next Era of Imaging
    Apr 7 2026

    Digital imaging is so ubiquitous today that it’s easy to forget how improbable it once was. In this episode of TechSurge, guest host Nic Brathwaite sits down with Dr. Eric Fossum, inventor of the CMOS active pixel image sensor, to unpack the breakthrough that made it possible to embed cameras into billions of devices and the deeper lessons behind it.

    Eric explains how his work began not with consumer electronics, but with a NASA constraint: how to shrink a refrigerator-sized space camera into something small enough for spacecraft. The solution required a fundamental shift in architecture. By moving from CCD-based imaging to CMOS, where sensing and processing could happen on a single chip, he enabled a level of miniaturization and scalability that transformed cameras from standalone systems into embedded infrastructure.

    But the conversation goes far beyond the invention itself. Nic and Eric explore what it takes to commercialize deep technology, from the early days of Photobit to its acquisition by Micron, and the critical role ecosystems play in turning breakthroughs into global platforms. They discuss why intellectual property is less about protection and more about leverage, and why even the most important inventions require manufacturing scale, capital, and partnerships to succeed.

    The episode also looks forward. As AI systems increasingly rely on visual and physical data, sensors are shifting from tools designed for human perception to components optimized for machine intelligence. Eric highlights the challenges of pushing intelligence to the edge, the limitations of current architectures, and the growing importance of sensing technologies beyond traditional imaging—including molecular detection and new materials that go beyond silicon.

    While much of today’s investment is concentrated in models and compute, this conversation makes the case that the next wave of innovation may come from deeper layers of the stack, where machines interact directly with the physical world. The future of AI may depend not just on how systems think, but on how they see, detect, and understand their environment.

    If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform.

    Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and future Season 2 episodes.

    Episode Links

    • Connect with Eric and learn more about his work and recognition: https://engineering.dartmouth.edu/community/faculty/eric-fossum
    • Learn more about CMOS image sensors: https://www.spacefoundation.org/space_technology_hal/active-pixel-sensor/

    Timestamps

    • 02:00 From CCD to CMOS: Rethinking How Images Are Captured
    • 06:45 The NASA Problem: Shrinking a Camera for Space
    • 12:30 From Refrigerator to Coffee Cup and Beyond
    • 19:30 From Lab to Market: Founding Photobit
    • 26:00 Scaling the Technology: Micron, Manufacturing, and Cost
    • 31:00 The Role of IP in Deep Tech: Leverage vs Protection
    • 39:30 From Human Vision to Machine Perception
    • 44:30 Edge AI vs Centralized Compute: Where Intelligence Lives
    • 49:30 Beyond Imaging: Molecular Sensing and New Frontiers
    • 53:30 What Comes Next: Materials, Sensors, and the Limits of Silicon
    51 m
  • Sovereign AI Stacks: The New Strategic National Resource
    Mar 19 2026


    As artificial intelligence becomes a strategic capability for nations as well as companies, questions of governance, safety, and geopolitical competition are moving to the forefront. In this episode of TechSurge, host Sriram Viswanathan speaks with Helen Toner, Interim Executive Director of the Center for Security and Emerging Technology (CSET) at Georgetown and a former OpenAI board member, about the rise of sovereign AI stacks and the global implications of increasingly powerful AI systems.

    Helen brings a rare vantage point from both inside the frontier AI ecosystem and the policy world. She reflects on lessons from her time on the OpenAI board, including the governance challenges that arise when nonprofit missions intersect with enormous commercial incentives and rapid technological progress. As AI capabilities accelerate, she argues that the industry is still grappling with deep uncertainty about how these systems work, how they will evolve, and what responsibilities companies and governments should carry.

    The conversation explores the idea of sovereign AI: the growing push by countries to control key layers of the AI stack, including compute infrastructure, models, and data. Helen explains why governments increasingly view AI as a strategic national resource, comparable to past transformative technologies like electricity or the internet. At the same time, she cautions that full technological independence may be unrealistic for most nations, given the complexity and global interdependence of the AI supply chain.

    Sriram and Helen also examine the evolving US–China AI competition, the role of export controls and semiconductor supply chains, and how different countries, from China to emerging AI hubs in the Middle East, are positioning themselves in the race to build advanced AI capabilities. Along the way, they discuss whether the industry should slow down development, how companies are experimenting with “safety frameworks” for frontier models, and why installing guardrails may be more realistic than attempting to halt progress altogether.

    Ultimately, Helen argues that society is entering a period of profound uncertainty. AI is transitioning from a research discipline into a foundational system that will shape economies, security, and daily life. Navigating that transition will require not just technical breakthroughs, but new approaches to governance, transparency, and global cooperation.

    If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform.

    Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and future Season 2 episodes.
    Episode Links

    • Connect with Helen: linkedin.com/in/helen-toner-4162439a
    • Learn more about CSET: https://cset.georgetown.edu/

    Timestamps

    • 03:00 Lessons from the OpenAI Board: Governance in the Age of Frontier AI
    • 05:00 The Big Unknowns in AI Development: Why Experts Still Disagree
    • 12:05 Public Trust and the Risk of an AI Backlash
    • 14:20 When AI Became Infrastructure: From Research Field to Societal System
    • 16:00 Is AGI a Meaningless Term Now? Rethinking the Goalposts
    • 19:05 AI’s True Scale: Internet-Level Impact or Something Bigger?
    • 23:15 Why Frontier AI Labs Struggle to Slow Down
    • 24:40 What “Sovereign AI” Actually Means for Nations
    • 28:10 Mapping the AI Stack: Chips, Cloud, Models, and Applications
    • 33:38 The US–China AI Competition: Who’s Ahead and Why
    • 39:44 China’s Progress in AI: Compute Constraints and Fast Followers
    • 44:03 US AI Policy: Export Controls, Regulation, and Federal Preemption
    • 48:40 Frontier AI Safety Frameworks: How Labs Define Dangerous Capabilities
    • 51:36 The Future of AI: Utopia, Industrialization, or Something Worse?
    • 56:04 Rapid Fire: AI Misconceptions, Governance Reforms, and Regions to Watch

    59 m
  • Governing AI Before It Outpaces Us: Safety for Critical Infrastructure
    Mar 5 2026

    As generative AI systems move from novelty to infrastructure, questions of safety, trust, and governance are becoming urgent. In this episode of TechSurge, host Sriram Viswanathan is joined by Dr. Rumman Chowdhury, CEO of Humane Intelligence PBC and a responsible AI pioneer, to discuss what AI safety really means and why the industry may be focusing on the wrong problems.

    Rumman argues that the most overlooked lever in AI development is evaluation. While companies emphasize model training and capabilities, far less attention is paid to how systems are assessed in real-world contexts, who defines “good,” what risks are measured, and how societal impacts are accounted for at scale. She distinguishes between technical assurance and broader sociotechnical risk, from misinformation and bias to over-reliance and erosion of institutional trust.

    Drawing on her experience at Twitter (X) and in global policy circles, Rumman highlights a fundamental governance gap: unlike finance, aviation, or healthcare, AI lacks a mature, independent ecosystem of auditors and evaluators. Today, the same companies building AI systems often define what counts as harm. She also challenges the belief that stronger guardrails alone will solve the problem, noting that cultural context, language differences, and human behavior complicate any notion of “neutral” or fully objective AI.

    Rather than focusing solely on speculative existential threats, Rumman urges attention to the harms already visible, from AI-enabled misinformation to mental health risks and shifts in how younger generations relate to knowledge and authority. The future of AI, she suggests, will be determined not just by technological breakthroughs, but by whether we build credible systems of accountability, evaluation, and global cooperation around them.

    If you enjoy this episode, please subscribe and leave us a review on your favorite podcast platform.

    Sign up for our newsletter at techsurgepodcast.com for updates on upcoming TechSurge Live Summits and future Season 2 episodes.


    Episode Links

    • Connect with Rumman: https://www.linkedin.com/in/rumman
    • Learn more about Humane Intelligence: https://humane-intelligence.org/

    Timestamps

    • 02:50 Why AI Evaluations Matter: Defining “Good” Models in Context
    • 04:25 What Is AI Safety? From Product Performance to Societal Harm
    • 11:30 Regulation Reality Check: EU AI Act, Conformance Assessments & Checklists
    • 15:25 Building the AI Evaluation Profession: Audits, Red Teaming & Legal Protections
    • 23:00 When It’s OK to Outsource Judgment and When It’s Dangerous
    • 39:38 Who’s Responsible When AI Outcomes Go Wrong?
    • 44:11 AI Psychosis, Youth Harm, and What’s Already Here
    • 47:27 What Keeps Rumman Up at Night: Kids, Algorithms, and Hope from Global Governance
    • 52:37 Design vs Governance: Complex Systems, System-Level Evaluation, and Regulating Horizontally
    • 54:00 Bringing Sci-Fi to the Real World?
    58 m