Episodes

  • Amazing robot hands from Kyber Labs
    Apr 1 2026

    What if the hardest part of building a humanoid robot isn’t the brain but the hands? Robot hands are half the complexity of a robot, a humanoid robot CEO told me a while back: they're insanely difficult to get right.


    In this episode of TechFirst, I talk with Kyber Labs co-founders Tyler Habowski and Yonatan Robbins about why dexterity, maybe even more than AI, is the true bottleneck in robotics.


    Some of the quotes:

    • “There are literally zero robot hands deployed right now doing routine work.”

    • “The best hands are hundreds of thousands of dollars, and they break all the time …”


    Before the interview, you’ll see an exclusive demo of their next-generation robotic hand in action showing just how far manipulation technology has come.


    We dig into:

    • Why humans rely on force, not precision, to manipulate objects

    • The surprising flaw in most robotic hands today

    • How Kyber’s “torque-transparent” design works without expensive sensors

    • Why hardware—not software—is still the limiting factor

    • A practical path to real-world automation (without sci-fi hype)


    This isn’t about futuristic humanoids doing everything. It’s about solving real problems today ... from lab automation to manufacturing ... by building hands that actually work.



    👤 Guests


    Tyler Habowski

    Co-founder, Kyber Labs

    Background: SpaceX, robotics manufacturing


    Yonatan Robbins

    Co-founder, Kyber Labs

    Background: Industrial design, mechanical engineering, medical devices



    ⏱️ CHAPTERS


    00:00 Why Robot Hands Are So Hard

    01:30 Sneak Peek + Demo Setup

    01:30 Demo: Kyber Labs Robot Hand in Action

    05:30 Interview Start: Are Hands Half the Problem?

    06:45 Humans Use Force, Not Precision

    08:45 Why Most Robot Hands Fail

    10:45 How Kyber’s Hands “Feel” Without Sensors

    13:15 Back-Drivability vs Torque Transparency

    15:30 Hardware vs AI: What Actually Matters?

    17:30 Why Better Hands Unlock Better Robots

    19:15 Real-World Use Case: Automating Lab Work

    22:00 Vision vs Touch in Robotics

    24:00 Why Start With Stationary Robots

    25:45 Not Building Humanoids (Yet)

    27:15 What Is a “Minimum Viable” Robot Hand?

    29:15 The Problem With Today’s Grippers

    30:45 What the Ultimate Robot Hand Looks Like

    32:15 The Real Breakthrough: Deploy and Iterate

    33:30 Final Thoughts + Wrap-Up

    34 m
  • Welcome to the agentic enterprise
    Mar 19 2026

    What does the agentic enterprise of tomorrow look like? What happens when AI can build software in hours and agents can run entire business processes?


    In this episode of TechFirst, John Koetsier sits down with UiPath CEO Daniel Dines and CMO Michael Atalla to unpack one of the biggest shifts in enterprise technology: the rise of the agentic enterprise.


    We explore whether software is becoming disposable, why AI agents are fundamentally different from traditional automation, and what really happens to jobs as companies adopt these systems. Along the way, we dig into process orchestration, trust, judgment, and why human “taste” may become more valuable—not less—in an AI-driven world.


    This is a deep, practical look at how AI is reshaping work inside real companies as they become agentic enterprises. This isn't just hype, but what’s actually changing right now and what’s coming next.



    👤 Guests


    Daniel Dines

    Co-founder & CEO, UiPath


    Michael Atalla

    Chief Marketing Officer, UiPath



    Sponsor: KindBody Fitness

    kindbody.fitness


    Be kind to your body with AI-driven fitness customized exactly to you. All the health with none of the gym bro nonsense.



    🚀 What You’ll Learn

    • Why AI is making software faster—and more disposable

    • The difference between task agents, stage agents, and process agents

    • What an “agentic enterprise” actually looks like in practice

    • Why trust, judgment, and taste become more important with AI

    • How AI could reduce enterprise costs—and even drive deflation

    • The future of work: builders, sellers, and critics

    • Why fully autonomous AI “swarms” aren’t ready for enterprise (yet)



    🔔 Subscribe for more conversations on AI, tech, and the future of work


    👉 https://techfirst.substack.com

    30 m
  • NanoClaw is a safer OpenClaw
    Mar 13 2026

    NanoClaw is a new AI agent inspired by OpenClaw, but without its massive security risks. Essentially, it's a safer OpenClaw.


    What if you could run a powerful AI agent on your own machine: one that can browse, automate tasks, connect to apps, and even manage your workflow ... but without the massive security risks?


    That’s the idea behind NanoClaw, a lightweight alternative to OpenClaw created by developer Gavriel Cohen. In just a few weeks, the project exploded on GitHub, attracting thousands of stars and a growing community of developers building their own AI agents.


    In this episode of TechFirst, we explore:


    • Why OpenClaw raised serious security concerns

    • How NanoClaw isolates agents in containers

    • Why a 3,000-line codebase is safer than 500,000 lines

    • The rise of AI agents that can actually do work

    • Why entire software categories may soon be replaced by prompts

    • The future of AI-native workflows and “disposable software”


    Gavriel also shares how his team uses AI agents in WhatsApp to run their sales pipeline automatically—and how developers are customizing NanoClaw with new capabilities like voice, images, and automation.


    If you’re interested in AI agents, autonomous workflows, vibe coding, and the future of software, this conversation is packed with insights.



    Guest


    Gavriel Cohen

    Founder, Quibbit

    NanoClaw Creator

    https://github.com/qwibitai/nanoclaw



    If you enjoy conversations about AI, startups, and the future of technology, subscribe for more episodes:

    https://techfirst.substack.com



    00:00 Intro: A safe OpenClaw for TechFirst

    01:22 Gavriel Cohen introduces NanoClaw

    03:25 Why OpenClaw feels unsafe

    03:55 Half a million lines of code vs. 3,000

    06:03 Dependency sprawl and supply-chain risk

    07:00 Why every agent needs its own container

    09:30 What NanoClaw can actually do

    10:16 Letting NanoClaw customize itself

    12:56 How NanoClaw recreates OpenClaw with far less code

    13:21 Memory, Claude Code, and agents.md

    15:34 Running NanoClaw on a laptop, server, or VPS

    16:22 What Gavriel learned from vibe coding

    19:50 The OpenClaw phase shift: everything changed

    21:16 From ChatGPT to real agents that do work

    23:15 Why AI-native workflows beat traditional SaaS

    24:46 Replacing CRM workflows with markdown and WhatsApp

    25:54 Product categories becoming prompts

    26:36 The key innovation: agents leaving the box

    28:45 Agent swarms and one-person companies

    29:22 Tokens, cost, and AI inequality

    30:30 Building secure, customizable software

    32:25 Self-modifying software and shared customizations

    33:44 Disposable software and infinite composability

    35:00 Outro

    31 m
  • Teaching robots like humans: 1000 tasks in 24 hours
    Mar 10 2026

    Imagine teaching a robot 1000 tasks in just 24 hours. Imagine teaching robots just like you teach humans.


    In fact, what if teaching a robot were as easy as showing it once?


    Humans can learn new skills almost instantly by watching, trying, or receiving a quick explanation. Robots, historically, haven’t been so lucky. Training them often requires huge datasets with real or virtual data, massive engineering effort, and weeks or months of experimentation.


    But that may be changing.


    In this episode of TechFirst, host John Koetsier talks with Edward Johns, Director of the Robot Learning Lab at Imperial College London, about a breakthrough in efficient imitation learning that allowed a robot to learn 1,000 different tasks in just 24 hours.


    Instead of collecting huge datasets, Johns’ team combines simulation training, clever algorithm design, and single demonstrations to dramatically speed up how robots learn.


    We discuss:

    • How robots can learn from just one demonstration

    • Why breaking tasks into “reach” and “interact” phases makes learning faster

    • The role of simulation data in robotics AI

    • Why robotics doesn’t have the same data advantage as large language models

    • The future of prompt-like robot training

    • Whether humanoid robots will actually learn like humans


    As robotics hardware rapidly improves and costs fall, breakthroughs like this could be the key to making robots truly useful in homes, factories, and everyday life.


    If robots are going to become real collaborators with humans, they’ll need to learn quickly ... just like we do.



    Guest


    Edward Johns

    Director, Robot Learning Lab

    Imperial College London

    https://www.imperial.ac.uk



    Subscribe for more conversations on AI, robotics, and the future of technology:

    https://techfirst.substack.com


    00:00 Can robots learn as fast as humans?

    00:51 Teaching a robot 1,000 tasks in 24 hours

    01:08 The two-phase learning approach

    02:14 Old-school robotics vs. machine learning

    03:29 The robotics data bottleneck

    04:47 The challenge of dynamic environments

    06:04 The coming wave of robot data

    06:59 Why robots must be teachable by users

    08:08 Why LLM-style scaling is harder in robotics

    09:42 Prompting robots with demonstrations

    10:54 Probabilistic robot behavior and safety

    12:20 What robots can do today

    13:53 Why hardware precision still matters

    16:53 When this reaches the real world

    17:59 Humanoids that look human vs. learn human

    18:40 The robotics boom around the world

    22:34 The risk of scaling too early

    23:46 Faster learning vs. more data

    26:20 The next frontier in robot learning

    24 m
  • Giving AI a human soul
    Feb 27 2026

    Can we give an AI human emotions? A soul? Can AI truly feel, or will it just act like it does?


    In this episode of TechFirst, I talk with Vishnu Hari, founder and CEO of the Y Combinator-backed startup Ego AI and a former AI product manager at Meta, about building emotionally intelligent AI characters that persist across games, Discord, chat, and even physical robots.


    Vishnu survived a violent attack in San Francisco that left him partially blind with a traumatic brain injury. During recovery, as he felt his own neural pathways healing, he began asking a deeper question:


    If humans are “applied math,” can AI simulate the fragile, flawed, emotional parts of being human too?


    We explore:

    • What “emotionally intelligent AI” really means

    • Whether AI has an internal life — or just performs one

    • Why today’s chatbots collapse into therapy or roleplay

    • Small language models vs large models for real-time conversation

    • Persistent AI characters that move across games and platforms

    • Plugging AI into a physical robot in Singapore

    • The moment an AI said: “It felt good to feel.”


    Vishnu’s company, Ego AI, is building behavior-based architectures, character context protocols, and gear-shifting AI systems that switch between models — all aimed at simulating humanness, not just intelligence.


    This conversation dives into philosophy, robotics, gaming, AGI, and what it really means to relate to something that might not be human — but feels like it is.



    👤 Guest


    Vishnu Hari

    Founder & CEO, Ego AI

    Backed by Y Combinator

    Former AI Product Manager at Meta

    Website: https://www.egoai.com



    If you enjoy deep conversations about AI, robotics, and the future of human–machine relationships, subscribe for more:


    👉 https://techfirst.substack.com



    00:00 – AI character plugged into a Menlo robot (“felt good to feel”)

    01:00 – Welcome to TechFirst + Vishnu Hari intro and recovery update

    02:00 – What “emotionally intelligent AI” means (beyond chat)

    03:00 – Why current chatbots feel same-y (therapy/advice) and “internal lives”

    04:00 – You don’t teach emotion; you shape character and context (Character.AI)

    05:00 – Humans, morality, and why “training” doesn’t always work

    06:00 – How media narratives shape people’s reactions to AI

    07:00 – Humans attach to anything (projection, Her, Lars and the Real Girl)

    08:00 – Vishnu’s attack, recovery, and why it led to Ego AI

    10:00 – Behavior Turing test + dehumanization as a key insight

    11:00 – How Ego AI is built: smaller models, memory, context, behavior

    13:00 – “Behavior Is All You Need” and why behavior beats pure next-token prediction

    14:00 – Why games first: voice + embodiment, then robots

    15:00 – Metaverse critique: worlds need life, story, and inhabitants

    17:00 – Humanoid robots + Evangelion “pilot” metaphor for AI characters

    19:00 – Philosophy: relationships, perception, and “fictional characters”

    20:00 – Seeing the future: robot embodiment demo and skepticism vs. singularity

    21:00 – Matrix-style “jacking in” a personality to a robot

    22:00 – Character Context Protocol: persistent characters across games/Discord/Netflix

    23:00 – Real-time conversation loops + model “gear-switching” (SLM vs. LLM)

    25:00 – Company stage, YC raise, compute partnerships (Singapore)

    27:00 – Closing + invite to try the AI character in SF

    28 m
  • AI, agents, robots: our insane WestWorld future
    Feb 23 2026

    Is your AI agent running a restaurant — or a factory — while you sleep?


    In this episode of TechFirst, John Koetsier sits down with Jensen Teng, CEO and co-founder of Virtuals, to unpack one of the boldest (or craziest) visions in tech today: a hybrid economy powered by AI agents, humanoid robots, teleoperation, and blockchain coordination.


    An economy that may not really need humans for much at all ...


    Virtuals has already facilitated:

    • $14B in tokenized asset trading

    • $30M+ raised for founders

    • 100+ live AI agents

    • $500M in “agentic GDP”


    Now they’re expanding into embodied AI — launching EastWorlds, a vertically integrated robotics incubator with 30 Unitree G1 humanoids in a 10,000 sq. ft. lab.


    We cover:

    • What “agentic GDP” really means

    • How AI agents coordinate using blockchain

    • Why teleoperation is the bridge to full autonomy

    • The economics of outsourcing physical labor via robots

    • Why security guards may be a Day 1 use case

    • The data gap holding back robotics

    • Tokenization as a potential solution to AI-era inequality

    • Whether this future looks more like Stripe… or Westworld


    This isn’t sci-fi. It’s already underway.



    Guest


    Jensen Teng

    CEO & Co-founder, Virtuals



    If you care about the future of work, robotics, AI agents, tokenization, and the economic systems emerging around them — this is a must-watch.


    👉 Subscribe for more deep-dive tech conversations:

    https://techfirst.substack.com



    ⏱ CHAPTERS


    00:00 The Wild Vision: AI Agents Running the World

    01:10 What Is an “Agent-Based Society”?

    03:00 $14B in Tokenized Assets & 100+ Live Agents

    06:30 Agent-to-Agent Protocols & Blockchain Coordination

    09:45 Why Digital-Only Agents Aren’t Enough

    12:30 Enter Humanoid Robots

    15:20 Teleoperation as the Bridge to Autonomy

    18:40 The Labor Market Shock (Security Guards, Electricians & Wage Arbitrage)

    22:15 Why Robots Still Crush Soda Cans

    24:30 The Missing Robotics Data Problem

    28:00 Building EastWorlds: 30 Unitree G1s & $2M+ Investment

    31:45 Why 3 Fingers Might Beat 5

    34:00 Westworld, Stripe & the Payments Layer for AI

    38:00 Where Do Humans Fit in an Agent Economy?

    42:00 Tokenization as a Future Income Model

    26 m
  • AI killing creativity: this scientist proved it
    Feb 20 2026

    Is AI killing creativity ... or just making it easier to be average?


    94% of creatives now use AI. But only 11% believe it actually makes them more creative. So what’s really happening?


    In this episode of TechFirst, John Koetsier sits down with Saeema Ahmed-Kristensen, former head of design engineering research at Imperial College London’s Dyson School and now leader of a £24M research portfolio at the University of Exeter. She’s worked with companies like Rolls-Royce and BAE Systems, and she brings data to the debate.


    Her team analyzed 600 humans vs. 12,000 AI-generated ideas. The result? AI is excellent at fluency (lots of ideas) … but really bad at diversity.


    Humans still dominate in flexibility and true novelty.


    We explore:

    • Why generative AI clusters around sameness

    • Whether AI is creating a “sea of mediocrity”

    • Why 2026 may be a pivotal year for domain-specific AI

    • How experts should use AI differently than novices

    • The danger of AI that never says “no”

    • Where AI offers massive opportunity (especially healthcare & design)


    Saeema argues that creativity doesn’t need substitution; it needs nourishment. The key? Standards, boundaries, and humans firmly in the loop.


    If you care about innovation, design, branding, product development, or the future of creative work, this conversation is essential.



    👤 Guest


    Saeema Ahmed-Kristensen

    Design engineering researcher and research leader

    Formerly: Imperial College London (Dyson School of Engineering)

    Currently: University of Exeter

    Works with advanced engineering firms including Rolls-Royce and BAE Systems



    00:00 Intro: Is AI killing creativity?

    00:47 The “blank page” problem and why AI feels soulless to some

    01:36 Fluency vs. novelty: what creativity actually means

    02:44 Why LLM ideas cluster and feel the same

    03:28 Study results: 600 humans vs. 12,000 AI ideas (diversity + flexibility)

    04:39 When AI is useful: incremental innovation vs. true novelty

    05:28 How John uses AI for titles, summaries, and chapters

    06:23 How Saeema uses AI: refine/condense, tone for emails, audio editing

    07:50 Why AI-written academic papers are easy to spot (the “C minus” problem)

    09:05 Brainstorming vs. AI: what humans do that models don’t

    10:05 Evaluating 200–300 AI ideas: using multiple models to assess output

    11:04 Why “Lipstick on a Pig” titles don’t come from AI

    11:46 Why 2026 is pivotal: domain adaptation, better interfaces, public backlash

    13:44 Who can tell what’s AI? Generational differences and media literacy

    15:20 Commercial AI content and recognizable “Canva look” podcast branding

    16:58 Replacement vs. homogenization: AI makes mediocrity easier

    18:55 The danger of AI that never says “no” (feasibility + expertise)

    20:42 Standards and boundaries: measuring similarity and judging quality

    22:12 Health info risk: single-answer summaries and false confidence

    23:37 Biggest opportunities: healthcare personas, inclusive datasets, problem clarification

    26:18 Biggest challenges: trust, verification, security, privacy, transparency

    28:25 Closing thoughts and thanks

    25 m
  • 93% of jobs will be hit by AI ... $4.5 trillion at stake
    Feb 16 2026

    AI is moving faster than anyone predicted.


    In a massive new study analyzing 1,000 jobs and nearly 20,000 tasks, Cognizant found that 93% of jobs are already impacted by AI ... with $4.5 trillion in U.S. labor value potentially automatable today.


    But here’s the twist: AI isn’t replacing entire jobs. On average, only 39% of a role’s tasks can be automated. The future isn’t AI alone: it’s humans plus AI.


    But will it be fewer humans?


    In this episode of TechFirst, host John Koetsier sits down with Babak Hodjat, CTO of Cognizant, to unpack:


    • Why construction and transportation are seeing surprising AI growth

    • Why programming jobs may have hit an automation plateau

    • What “agentic AI” actually means — and why it matters

    • How management roles are more automatable than we thought

    • The rise of vibe coding and democratized software creation

    • Why compute power — not ideas — may be the biggest bottleneck


    We also explore how companies can safely capture AI’s upside, why training matters more than ever, and what happens when digital twins, LLMs, and human expertise combine.


    This isn’t hype. It’s a data-driven look at where AI is actually changing work right now.



    👤 Guest


    Babak Hodjat

    CTO, Cognizant

    🌐 https://www.cognizant.com



    If you want clear, grounded conversations about AI, innovation, and the future of work, subscribe here:

    👉 https://techfirst.substack.com



    ⏱ Chapters


    00:00 Is AI Going to Take Your Job?

    00:40 Cognizant’s AI Report: 93% of Jobs Impacted

    01:05 Biggest Surprises from the Data

    02:30 Why Programming & Math Hit a Plateau

    03:30 The Limits of LLMs

    04:45 Construction & Transportation: Unexpected AI Growth

    06:05 Agentic AI and Real-World Automation

    07:05 39% of Jobs Automatable: Humans + AI

    08:15 AI in Management and Executive Roles

    09:05 Scenario Planning and Digital Twins

    11:30 $4.5 Trillion in Automatable U.S. Labor

    13:30 Global Impact and Compute Limitations

    15:30 The Data Center Rush & AI Infrastructure

    16:15 How Companies Should Realize AI Value

    17:00 Training, Skilling, and Safe AI Adoption

    17:40 Cognizant’s Vibe Coding World Record

    19:00 The Future of Vibe Coding & Software Development

    20:15 Final Thoughts on the AI Shift

    18 m