Episodes

  • Could Living Neurons Power the Future of AI with Ewelina Kurtys
    Mar 15 2026

    Over the last couple of years, most of my conversations around AI have been about capability.

    How fast models are improving.

    How agents are becoming more autonomous.

    How enterprises can adopt GenAI safely.

    How teams can redesign workflows around intelligence.

    But this week, I found myself thinking about something deeper.

Not what AI can do.

But what AI costs.

    And I don't just mean money.

    I mean energy.

    I mean infrastructure.

    I mean the hidden assumptions underneath the current AI boom.

    Because when we talk about the future of AI, most people immediately jump to models, chips, data centers, agents, and software stacks.

    But as someone who works closely with organizations trying to operationalize AI in the real world, I keep coming back to a harder question:

    What happens when the current compute model itself becomes the bottleneck?

    This is not a question most teams are asking yet.

    But it is a question serious builders should start paying attention to.

    This week, while reviewing different enterprise AI patterns and thinking through long-term architecture choices, I realized that much of the current AI conversation still happens within the assumptions of silicon, scale, and software abstraction.

    But what if the next major shift is not a better model?

    What if it is a different computing substrate altogether?

    That's exactly why today's conversation is important.

    Because this episode is not about another AI app.

    It is not about another wrapper.

    It is not about another productivity layer.

    It is about something much more fundamental:

    What might come after silicon, and how should we think about it today?

    Chapters:

00:00 Introduction to Ewelina Kurtys and FinalSpark
    00:52 Understanding Living Neurons and Their Potential
02:44 The Vision Behind FinalSpark
    05:34 Current Progress and Future Goals
    08:27 Collaborations and Research Opportunities
    11:17 Programming Living Neurons
    14:02 Ethical Considerations in Biocomputing
    16:59 Benefits of Biocomputing for Society
    19:39 Advice for Aspiring Bioengineers
22:30 Commercial Aspects of FinalSpark
    24:24 Investor Insights and Future Directions

    Episode # 184

    Today's Guest: Dr. Ewelina Kurtys, Scientist from FinalSpark
    • Website: FinalSpark

    What Listeners Will Learn:

    • Why the future of AI may require rethinking computation itself, not just models
    • How energy efficiency is becoming a core strategic issue in AI
    • What biocomputing means in simple terms
    • How living-neuron-based computing differs from traditional silicon-based systems
    • Why future AI progress may depend on alternative hardware paradigms
• Why emerging scientific computing trends matter to enterprise AI leaders today
    • Why staying ahead in AI means looking beyond current tools and architectures
    Resources:
    • FinalSpark
    27 m
  • How Attackers Use AI And Why Your Defenses Might Still Fail with Adriel Desautels
    Feb 22 2026

    Episode # 183

    Today's Guest: Adriel Desautels, Founder & CEO, Netragard

Adriel is a leader in cybersecurity with over 20 years of experience. Adriel founded Secure Network Operations and the SNOsoft Research Team, whose vulnerability research helped shape modern responsible disclosure practices. He later launched Netragard, pioneering Realistic Threat Penetration Testing, which he now calls Red Teaming, and expanded into a broad range of security services.

• Website: Netragard
• X/Twitter: Netragard

    What Listeners Will Learn:

    • Why "AI penetration testing" is often closer to automated scanning than real offensive testing
    • How AI changes security risk mainly through volume and speed, not necessarily sophistication
    • Where organizations get misled into a false sense of security
    • Why "preventing breach" is unrealistic and why limiting damage paths matters more
    • What cybersecurity professionals should focus on to stay relevant in the LLM era
    • How AI may influence vulnerability research, but still struggles with novel exploitation thinking

    Resources:
• Netragard
    25 m
  • Why 95% of AI Pilots Fail and How to Be in the 5% with Mindaugas Maciulis
    Feb 7 2026

    Welcome to Open Tech Talks.

    Quick note before we start, thank you.

    The messages, the feedback, the "keep this practical" reminders… they've been incredibly helpful. Open Tech Talks has always been a weekly sandbox for technology insights, experimentation, and inspiration—with one objective: learn, test, and share what's real.

    Now, a personal moment from this week.

    A few days ago, I sat with a business owner who said something that stuck with me:

    "AI is everywhere… but I don't know where to start without breaking my business."

    And that's the truth for most companies, especially small businesses.

    Because "start with AI" sounds simple… until it touches real operations:

    • leads that go cold,

    • follow-ups that don't happen,

    • teams that feel overwhelmed,

    • tools that multiply,

    • processes that nobody can explain clearly.

    Most AI projects don't fail because the model is weak.

    They fail because the process is unclear, the team is overloaded, and the strategy is missing.

    Let's begin.

    Episode # 182

    Today's Guest: Mindaugas (Min) Maciulis, Founder & CEO of Strategic AI Advisors

    He works with CEOs, COOs, and operating partners in the $20M–$250M range who are ready to go beyond pilots and turn AI into real EBITDA growth. His proven 90-day sprint framework, AImpact OS, delivers measurable lifts across productivity, customer service, and sales.

• Website: Strategic AI Advisors

    What Listeners Will Learn:

    • Identify the best "starting point" for AI using business pain, not hype
    • Understand why AI pilots fail mostly due to adoption (not technology)
    • Learn a practical approach to simplify workflows before adding automation
    • See how SMBs can move faster than enterprises in the AI era
    • Understand the difference between augmentation and transformation with AI
    • Learn how to avoid tool overload and focus on measurable outcomes
    Resources:
• Strategic AI Advisors
    30 m
  • AI Is Creating Technical Debt Faster Than You Think with Maxim Silaev
    Jan 30 2026

    This week, I've been thinking about something slightly uncomfortable.

    Last weekend, I was reviewing one of my older architecture diagrams from five years ago. A cloud-native migration plan I was deeply proud of at the time. It was clean. Structured. Scalable.

    And then I asked myself:

    If I were to rebuild this today in the era of generative AI…

    Would I build it the same way?

    The honest answer?

    No.

    Not because it was wrong.

    But because our assumptions have changed.

    Two years ago, AI was a feature.

    Today, AI is shaping architecture decisions.

    We're not just designing systems anymore.

    We're designing systems that design, generate, predict, and automate.

    And here's the tension I keep seeing in enterprise conversations:

    Everyone wants AI.

    But very few are asking:

    "What technical debt are we creating while chasing it?"

    That's why today's conversation matters.

Today, I'm joined by Maxim Silaev, based in Australia, someone who works deeply in enterprise architecture and technical debt remediation.

    And this episode is not about hype.

    It's about responsibility.

    Because AI doesn't remove architectural complexity.

    In many cases, it amplifies it.

    Let's get into it.

    Chapters

    00:00 Introduction to Technical Debt and Architecture
    01:34 The Impact of AI on Technical Debt
    04:12 Generative AI and Architectural Challenges
    08:40 Adopting AI in Organizations
    12:26 Building AI Strategies and Governance
    17:33 Data Quality and AI Integration
    22:43 Guardrails for AI Adoption

    Episode # 181

    Today's Guest: Maxim Silaev, Technology Advisor and Enterprise Architect

    He is a technology advisor and enterprise architect with more than two decades of experience working with high-growth companies, complex systems, and business-critical platforms.

    • Website: Arch-Experts

    What Listeners Will Learn:

    • What technical debt really means in the AI era
    • How generative AI can unintentionally increase hidden system risk
    • Why architecture remains critical despite AI coding tools
    • The importance of governance and verification layers in AI systems
    • How large enterprises are cautiously integrating AI
    • Why strategy must precede AI deployment
    • The evolving role of enterprise architects in AI-native environments
    Resources:
    • Arch-Experts
    33 m
  • Simplify Your Tech Stack and Scale Faster with Kara Williams
    Jan 25 2026

    Chapters

    00:00 Introduction to Kara Williams
    01:53 Kara's Coaching Journey and Entrepreneurial Background
    03:20 The Importance of a Simplified Tech Stack
    05:51 Common Mistakes in Tech Selection
    07:09 Exploring AI in Business
    08:16 Creating the Proof First GPT
    10:47 Learning and Executing with AI
    12:04 Common Challenges Faced by Entrepreneurs
    13:50 Guiding New Entrepreneurs
    14:59 Misconceptions About Low Ticket Offers
    16:18 Refining Messaging and Offers
    17:29 The Role of Automation in Business
    18:34 Understanding Automation Needs
    19:36 Testing Freebies and Building Relationships
    20:29 Lessons Learned in Business
    21:20 Future Plans and Refinements
    22:31 Final Tips for Entrepreneurs

    Episode # 180

    Today's Guest: Kara Williams, Founder, GHL Mastery Academy

    She is the founder of GHL Mastery Academy, where she helps CEOs stop being the bottleneck in their business by turning their VA, OBM, or EA into a trained backend powerhouse.

    • Website: Kara Williams
• YouTube: GHL Mastery Academy

    What Listeners Will Learn:

    • Why "cheap tool stacking" quietly becomes expensive (money + time + broken trust)
    • How to think about systems like a real business owner (not a hobbyist)
    • Why reliability matters more than feature-count in early-stage tech stacks
    • How entrepreneurs can use AI to validate offers before building full courses or funnels
    • What automation is actually for: visibility, testing, and removing blind spots
    • How to simplify business operations without losing flexibility or creativity
    Resources:
    • Website: Kara Williams
    24 m
• Building Startups in the AI Era: Lessons from 30 Years of Venture Capital with Scott Kelly
    Jan 18 2026

    Welcome back to Open Tech Talks, and thank you, genuinely, for the continued support, messages, and thoughtful feedback. This show has been running for years now, and what keeps it meaningful is the shared curiosity of this community.

    We're in a very different phase of the AI journey.

    The conversation has clearly moved past "Can we build this?"

    Now it's about "Should we build this?", "Is this sustainable?", and "Does this actually create value?"

    Over the last year, I've personally noticed something interesting while working with enterprises, founders, and investors: AI has lowered the cost of building but raised the cost of judgment.

    It's easier than ever to create products, prototypes, and even companies. But deciding what's worth building, when to raise capital, and how to scale responsibly has become harder, not easier.

    That's why today's conversation matters.

    This episode is not about chasing trends or predicting the next AI unicorn.

    It's about long-term thinking, founder discipline, and understanding capital, timing, and execution in an AI-driven world.

    Today's guest has spent decades working across venture capital, startup growth, and exits through multiple technology cycles and brings a grounded perspective that's especially valuable right now.

    Let's welcome Scott Kelly to Open Tech Talks.

    Chapters

    00:00 Introduction to Scott Kelly and His Ventures
    02:00 The Transformative Impact of AI
    04:03 Successful Investments and Entrepreneurial Journeys
    05:53 Lessons for Entrepreneurs and Pitching Tips
    10:06 Navigating the AI Landscape in Startups
    11:52 Industry Applications of AI
    14:54 Pitch Events and Investor Engagement
    17:03 Investor Perspectives on New Technologies
    19:52 Advice for Aspiring Entrepreneurs

    Episode # 179

    Today's Guest: Scott Kelly, Founder & CEO, Black Dog Venture Partners

He has been working on both sides of the table, with entrepreneurs and investors alike, for more than three decades, harnessing his innovative skills, his experience training thousands of salespeople, and his vast network of investors.

    • Website: Black Dog Venture Partners
• YouTube: VC FastPitch

    What Listeners Will Learn:

    • How AI is changing the economics of building and scaling startups
    • Why many founders may not need venture capital as early as they think
    • Lessons from past technology cycles that still apply in the GenAI era
    • How investors evaluate AI-driven businesses beyond surface-level hype
    • Why timing, discipline, and execution matter more than tools
    • What founders often misunderstand about pitching, capital, and exits
    • How AI lowers build costs but raises the importance of strategic judgment
    Resources:
    • Website: Black Dog Venture Partners
    • YouTube: VC FastPitch
    29 m
• Building AI Products That Users Actually Trust: Lessons from Angshuman Rudra
    Jan 11 2026

    January has a very particular energy.

    The holidays are behind us. The inbox is slowly filling up again. Calendars are waking up. And there's always this short window, just a few quiet days, where it feels like everything could still go in a different direction.

    I've been thinking a lot during this pause.

    Over the last couple of years, AI and large language models have gone from experiments to expectations. What used to feel optional is now part of daily work, whether someone asked for it or not. And the biggest shift I've personally noticed isn't technical.

    It's psychological.

    People aren't asking "What can AI do?" anymore.

    They're asking "What should we actually build?", "What do we trust?", and "What's worth shipping versus waiting?"

    That question shows up everywhere, especially in product teams.

    Because as exciting as LLMs are, shipping the wrong AI feature is worse than shipping none at all.

    And that's exactly why today's conversation matters.

    This episode is not about hype.

    It's about judgment, timing, and responsibility in product leadership.

    Chapters:

    00:00 Introduction to Angshuman Rudra
    01:06 The Impact of Large Language Models on Product Management
    03:14 Balancing Innovation and User Needs
    04:37 Navigating Generative AI in Product Development
    06:46 Driving Adoption of New Features
    09:34 Challenges and Lessons in Generative AI Products
    11:15 Evolving Roles of Product Leaders with AI
    12:39 The Future of Multi-Agent Systems
    14:36 Translating User Requirements into Product Features
    17:31 Finding the Next Big Feature
    19:56 Adopting AI in Development Cycles
    21:24 Tips for Job Seekers in Tech
    23:10 Market Shifts in Marketing Technology
    25:01 Exciting Use Cases in Marketing Technology
    26:52 Concluding Thoughts and Future Outlook

    Episode # 178

    Today's Guest: Angshuman Rudra, AI Product Leader, building Martech platforms, AI Agents, and data workflows for 500+ agencies.

    Angshuman Rudra is a senior product executive at TapClicks, where he leads a portfolio of data, analytics, and AI products for a market-leading martech platform.

    • Website: Angshuman Rudra

    What Listeners Will Learn:

    • How to evaluate real user demand for AI features (not hype)
    • When AI adds value and when it creates unnecessary complexity
    • How product leaders should think about LLMs as tools, not magic
    • Why many AI features fail after launch
    • How to balance innovation with resource constraints
    • What "AI adoption" actually looks like inside real companies
    • Why multi-agent systems are promising but not ready to be fully autonomous
    • How PMs can use AI for research, specs, and design without losing judgment
    • What skills will matter most for product leaders over the next 3–5 years

    Resources:
    • Angshuman Rudra
    34 m
  • How Generative AI Is Reshaping Fraud, Security, and Abuse Detection with Bobbie Chen
    Jan 4 2026

    In this episode of Open Tech Talks, host Kashif Manzoor sits down with Bobbie Chen, a product manager working at the intersection of fraud prevention, cybersecurity, and AI agent identification in Silicon Valley.

    As generative AI and large language models rapidly move from experimentation into real products, organizations are discovering a new reality. The same tools that make building software easier also make abuse, fraud, and attacks easier. Vibe coding, AI agents, and LLM-powered workflows are accelerating innovation, but they are also lowering the barrier for bad actors.

    This conversation breaks down why security, identity, and access control matter more than ever in the age of LLMs, especially as AI systems begin to touch authentication, customer data, financial workflows, and enterprise knowledge. Bobbie shares practical insights from real-world security and fraud scenarios, explaining why many AI risks are not entirely new but become more dangerous when speed, automation, and scale increase.

    The episode explores how organizations can adopt AI responsibly without bypassing decades of hard-earned security lessons. From bot abuse and credit farming to identity-aware AI systems and OAuth-based access control, this discussion helps listeners understand where AI changes the threat model and where it doesn't.

    This is not a hype-driven episode. It is a grounded, experience-backed conversation for professionals who want to build, deploy, and scale AI systems without creating invisible security debt.

    Episode # 177

    Today's Guest: Bobbie Chen, Product Manager, Fraud and Security at Stytch

    Bobbie is a product manager at Stytch, where he helps organizations like Calendly and Replit fight against fraud and abuse.

    • LinkedIn: Bobbie Chen

    What Listeners Will Learn:

    • How LLMs and AI agents change the economics of fraud and abuse, making attacks cheaper, faster, and more customized
    • Why vibe coding is powerful for experimentation, but risky when used without security review in production systems
    • The difference between exploring AI ideas and asking users to trust you with sensitive data
• Common security blind spots in AI-powered apps, especially around authentication, parsing, and edge cases
    • Why organizations should not give AI systems blanket access to enterprise data
    • How identity-aware AI systems using OAuth and scoped access reduce risk in RAG and enterprise search
• Why many AI security failures are process and organizational problems, not tooling problems
    • How fraud patterns like AI credit farming and automated abuse are emerging at scale
    • Why security teams must shift from being gatekeepers to continuous partners in AI adoption
    • How professionals in security, product, and engineering can stay current as AI threats evolve
    Resources:
    • Bobbie Chen
    • The two blogs I mentioned:
    • Simon Willison: https://simonwillison.net
    • Drew Breunig: https://www.dbreunig.com
    32 m