Episodes

  • Mobile Test Automation is Broken. Here's How QApilot Fixes It with Aditya Challa
    Mar 31 2026

    Mobile test automation is still one of the biggest bottlenecks in modern software delivery. In this interview, QApilot's Co-founder Aditya Challa explains why most AI testing approaches fail and how to fix them.

    Learn more about QApilot: https://links.testguild.com/flutterqa

    If your mobile tests are flaky, slow, or hard to trust, you're not alone.

    Most teams are trying to apply LLM-based AI to problems that actually require deterministic reliability—and that's where things break down.
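
    To see why "99% accurate" AI falls apart at suite scale, here is a back-of-the-envelope sketch in Python. The step count and suite size are illustrative assumptions, not figures from the episode; the point is that per-step accuracy compounds across every step of every test.

    # Illustrative numbers only (assumptions, not from the episode).
    per_step_reliability = 0.99   # a "99% accurate" AI-driven action
    steps_per_test = 30           # assumed length of one end-to-end mobile flow
    tests_in_suite = 500          # assumed suite size

    test_pass_probability = per_step_reliability ** steps_per_test          # ~0.74
    expected_false_failures = tests_in_suite * (1 - test_pass_probability)  # ~130

    print(f"Chance a healthy 30-step test passes: {test_pass_probability:.0%}")
    print(f"Expected false failures per run of {tests_in_suite} tests: {expected_false_failures:.0f}")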

    In this video, you'll learn:

    • Why mobile test automation breaks at scale
    • The real issue with "99% accurate" AI in testing
    • LLMs vs deterministic AI (and why it matters for mobile apps)
    • How flaky tests destroy confidence in your pipeline
    • How QApilot approaches mobile testing differently
    • What reliable, scalable mobile automation should look like

    What this means for you:

    Fewer false positives, faster releases, and mobile tests you can actually trust.

    00:00 Why Mobile Test Automation Is Still Broken
    01:10 QApilot Overview
    01:51 Why Mobile Testing Tools Fail
    03:13 Why Appium Isn't Enough
    05:09 QApilot's Approach to Mobile Testing
    07:10 Scaling Mobile Testing Across Devices
    08:02 Autonomous Testing + Human in the Loop
    10:55 How QApilot Works (Architecture + Agents)
    13:45 Real Example: Mobile App Crawling in Action
    16:31 Finding Bugs Automatically (Performance + Accessibility)
    18:52 Device Farms & Real Device Testing
    21:50 Future of Mobile Testing (SRE + AI + Quality Layer)
    27:06 Real Customer Results & Case Study
    31:02 Why QApilot Focuses Only on Mobile
    34:04 Where QApilot Fits in CI/CD
    36:00 How to Try QApilot + Final Advice

    38 m
  • AI Testing: How Solo Testers Stay Confident in Releases with Christine Pinto
    Mar 25 2026

    Are you the only tester on your team—and expected to ensure quality across everything?

    In this episode, we break down the growing challenge of solo QA testing in the age of AI-driven development—where code is generated faster than ever, but confidence hasn't caught up.

    Christine Pinto shares real-world insights from her experience as a solo tester and now as a founder building tools designed to help testers reduce risk, collaborate better, and make smarter release decisions.

    You'll learn:

    Why "all tests passing" doesn't mean your product is safe
    The hidden risks of AI-generated code and test automation
    How to shift from test coverage to risk-based testing
    Practical ways solo testers can avoid burnout and isolation
    How to bring collaboration back into QA—even if you're the only tester
    Why better requirements still matter more than better AI
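
    As a rough illustration of the coverage-to-risk shift mentioned above, a solo tester can rank work with a simple likelihood-times-impact score. The features and scores below are made-up examples, not data from the episode.

    # Minimal risk-based prioritization sketch (illustrative data only).
    features = [
        {"name": "checkout",         "likelihood": 4, "impact": 5},
        {"name": "login",            "likelihood": 2, "impact": 5},
        {"name": "profile settings", "likelihood": 3, "impact": 2},
    ]

    # Spend limited solo-tester time on the highest-risk areas first.
    for f in sorted(features, key=lambda f: f["likelihood"] * f["impact"], reverse=True):
        print(f"{f['name']}: risk score {f['likelihood'] * f['impact']}")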

    45 m
  • AI Testing from Production Logs: Generate Smarter Regression Tests with Tanvi Mittal
    Mar 17 2026

    What if your production logs could automatically generate new test cases?

    In this episode, Joe Colantonio sits down with Tanvi Mittal to break down how AI-powered log mining is changing the way teams approach software testing, quality engineering, and DevOps.

    Most teams ignore production logs or use them only for debugging. But those logs contain real user behavior, real failures, and real edge cases—the exact scenarios your test suite is probably missing.
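
    As one way to picture the idea (a minimal sketch, not Tanvi's actual tooling), the snippet below mines an access log for the most frequent user flows and any endpoints returning server errors; each (method, path) pair becomes a candidate regression scenario. The log format and file name are assumptions.

    import re
    from collections import Counter

    # Matches common access-log entries like: "GET /checkout HTTP/1.1" 500
    LOG_LINE = re.compile(r'"(?P<method>GET|POST|PUT|DELETE) (?P<path>\S+) \S+" (?P<status>\d{3})')

    def mine_scenarios(log_path, top_n=10):
        hits, failures = Counter(), Counter()
        with open(log_path) as f:
            for line in f:
                m = LOG_LINE.search(line)
                if not m:
                    continue
                key = (m["method"], m["path"])
                hits[key] += 1                      # real user behavior
                if m["status"].startswith("5"):
                    failures[key] += 1              # real failures and edge cases
        return hits.most_common(top_n), failures.most_common(top_n)

    hot_flows, failing_flows = mine_scenarios("access.log")  # hypothetical file name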

    👉 Learn how to:

    • Convert production logs into automated regression tests
    • Use AI to detect real-world failure patterns
    • Apply shift-right testing to catch bugs earlier (and smarter)
    • Handle the challenge of testing non-deterministic AI systems
    • Reduce flaky tests and automation debt with real data

    If you're working with Playwright, Selenium, Cypress, or AI-driven testing tools, this episode will give you a completely new way to think about test coverage.

    28 m
  • AI Testing: How to Ensure Quality in Non-Deterministic Systems with Adam Sandman
    Mar 10 2026

    How do you ensure software quality when the system you're testing doesn't give the same output twice? That's the core challenge facing every QA team building or testing AI-powered applications today, and it's breaking all the rules we've relied on for decades.

    In this episode of the TestGuild Automation Podcast, I sit down with Adam Sandman, co-founder of Inflectra, to get into what non-deterministic AI testing actually means in practice, why traditional pass/fail testing no longer cuts it, and what quality professionals need to do differently right now.
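
    One common pattern for this (a minimal sketch of the general idea, not Inflectra's SureWire implementation) is to stop asserting a single exact output and instead assert a pass rate over repeated runs against an agreed risk threshold. The acceptance check, prompt, and threshold below are assumptions.

    import statistics

    def passes(output):
        # Domain-specific acceptance check, e.g. the reply must mention refunds.
        return "refund" in output.lower()

    def assert_pass_rate(generate, runs=50, threshold=0.95):
        results = [passes(generate("What is your refund policy?")) for _ in range(runs)]
        rate = statistics.mean(results)
        assert rate >= threshold, f"pass rate {rate:.0%} below threshold {threshold:.0%}"
        return rate

    if __name__ == "__main__":
        # Stand-in "model" so the sketch runs; a real suite calls the actual system.
        fake_model = lambda prompt: "Returns are accepted under our refund policy."
        print(f"observed pass rate: {assert_pass_rate(fake_model):.0%}")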

    We cover:

    • Why AI-generated code is raising the stakes for QA teams while budgets stay flat
    • The fundamental difference between deterministic and non-deterministic systems — and why it changes everything about how you test
    • How to set acceptable risk thresholds for AI systems (hint: it depends on whether you're building an e-commerce chatbot or an air traffic control system)
    • Why testers who embrace AI as a tool — not a threat — will be the ones leading their organizations forward
    • How a live demo failure at a conference inspired Inflectra's new non-deterministic testing tool, SureWire

    If you're a tester, QA manager, or automation engineer trying to figure out how to keep up with AI-driven development without losing your mind — or your job — this one's for you.

    43 m
  • Test Automation Tools That Scale: From Zero to 1.6M Users with Sanjay Kumar
    Mar 3 2026

    What does it really take to build a test automation tool that millions of testers rely on, without venture capital, paid ads, or a massive team?

    In this episode, we explore how SelectorsHub grew into one of the most widely used productivity tools in software testing, reaching over 1.6 million testers worldwide.

    You'll discover:

    • How to build test automation tools that solve real QA pain
    • Why community-driven development beats chasing funding
    • How to prioritize features when you have thousands of users
    • Whether AI testing tools will replace selector-based automation
    • How to choose between Playwright vs Selenium using automation analysis
    • What founders and QA leaders can learn from scaling without VC

    If you're an automation engineer, QA lead, DevOps professional, or tool builder looking to scale smarter, this episode delivers real-world insight without hype.

    Whether you're building frameworks internally or launching your own automation product, you'll walk away with a clearer strategy for solving problems testers actually care about.

    30 m
  • AI Test Automation: Ship Twice as Fast with 10x Coverage with Karim Jouini
    Feb 24 2026

    AI test automation is evolving fast — but most tools still generate brittle code that breaks with every UI change.

    See it for yourself now: https://links.testguild.com/Thunders

    In this episode of the TestGuild Podcast, Joe Colantonio sits down with Karim Jouini, founder of Thunders, to explore a radically different approach to AI testing: executing test automation in plain English without generating Selenium or Playwright code.

    Instead of "auto-healing selectors," Thunders interprets natural language directly — allowing teams to:

    • Ship twice as fast
    • Achieve 10x test coverage with the same resources
    • Reduce regression cycles from weeks to days
    • Eliminate massive automation maintenance overhead
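
    To make the contrast concrete, here is a purely illustrative example (not Thunders' actual API or syntax) of the same step expressed as generated selector code versus plain English that an interpretation engine resolves against the live UI at run time.

    # Illustrative only: what maintenance-heavy generated code looks like ...
    selector_based_step = 'driver.find_element(By.XPATH, "//div[3]/form/button[2]").click()'

    # ... versus a plain-English step: the intent, not the DOM structure, is executed,
    # so a UI refactor does not leave broken XPaths behind.
    plain_english_step = "Click the 'Place order' button on the checkout page"

    print(selector_based_step)
    print(plain_english_step)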

    Karim shares real-world case studies, including:

    • A European bank that reduced a 3-year core banking upgrade testing effort to 4 months
    • A SaaS company that transitioned from a traditional QA team to AI-assisted product-led testing

    We also discuss:

    • Whether AI test agents replace QA roles
    • How QA managers must shift from individual contributors to AI managers
    • The risks of adopting AI without a defined success metric
    • The future of shift-left testing in the AI era

    If you're a software tester, automation engineer, QA lead, or DevOps leader trying to understand what's hype versus real ROI in AI testing — this episode breaks it down.

    Try it for yourself and see how AI testing fits into your pipeline.

    Get a personal demo: https://links.testguild.com/Thunders

    42 m
  • Performance Testing with AI w/ Akash Thakur
    Feb 17 2026

    Is traditional performance testing becoming obsolete?

    In this episode, performance engineering expert Akash Thakur shares why AI is fundamentally transforming load testing, scripting, observability, and shift-left strategies.

    With 17 years of real-world enterprise experience, Akash explains how AI-augmented tools are already reducing scripting time by 30%, improving analysis speed, and helping teams move from reactive performance testing to predictive intelligence.

    You'll learn:

    • How AI is accelerating performance scripting and analysis
    • Why shift-left performance testing is finally becoming realistic
    • The role of structured data in predictive QA models
    • How to test AI applications (LLMs, GPUs, inference throughput) differently than traditional web apps (see the sketch after this list)
    • What the future role of performance engineers looks like — architect, not script writer
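
    As a small, hedged example of how measuring an LLM back end differs from a classic page-load test (the endpoint URL and response shape are assumptions, not tools from the episode), the sketch below reports tokens per second and 95th-percentile latency instead of page timings.

    import statistics
    import time

    import requests

    def measure_inference(prompt, runs=20, url="http://localhost:8000/generate"):
        latencies, tokens = [], 0
        for _ in range(runs):
            start = time.perf_counter()
            resp = requests.post(url, json={"prompt": prompt}, timeout=60)
            latencies.append(time.perf_counter() - start)
            tokens += len(resp.json().get("text", "").split())  # rough token proxy
        p95 = statistics.quantiles(latencies, n=20)[18]          # 95th percentile
        return {"tokens_per_sec": tokens / sum(latencies), "p95_latency_s": p95}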

    If you're a performance tester, SRE, QA leader, or DevOps engineer wondering how AI will impact your role — this episode gives you practical, actionable insights you can apply immediately.

    27 m
  • Spec2TestAI: Stop Defects Before They Reach Production with Missy Trumpler
    Jan 27 2026

    Most teams find defects after the damage is done — during regression, late-stage testing, or production incidents. That's expensive, stressful, and completely avoidable.

    Try Spec2Test AI now: https://testguild.me/spec2testdemo

    In this episode, Joe Colantonio sits down with Missy Trumpler, CEO of AgileAILabs, to explore how Spec2TestAI helps teams prevent defects before code ships by applying AI directly to requirements.

    You'll learn:

    • Why traditional test automation still misses critical risk
    • How predictive, requirements-based AI testing works in practice
    • What "shift-left" actually looks like beyond the buzzword
    • How to reduce escaped defects without writing more tests
    • Why secure, explainable AI matters for QA and enterprise teams

    This conversation is especially valuable for software testers, automation engineers, and QA leaders who want earlier visibility into risk, faster feedback, and higher confidence releases.

    Don't miss Automation Guild 2026 - Register Now: https://testguild.me/podag26

    35 m