Episodes

  • Test Code Migration not Test Cases
    Oct 7 2025

    Should you use AI to help you migrate test automation code? And what should you actually migrate, given that the test coverage hasn't changed? In this episode we discuss how abstractions and AI can be used to migrate... and discuss when you shouldn't.

    Welcome to The Evil Tester Show! In this episode, host Alan Richardson dives into the complex world of test automation migrations. Have you ever wondered what it really takes to move your automated test execution code from one tool or language to another—like switching from WebDriver to Playwright, or migrating from Java to TypeScript? Alan breaks down the pitfalls, challenges, and best practices you need to consider before taking the leap. He explains why migrating isn’t just about copying test cases, how abstraction layers can save you time and headaches, and why using AI and solid design principles can streamline your transition. Whether you’re facing unsupported tools, evolving frameworks, or strategic changes in your testing approach, this episode offers practical advice to plan and execute a seamless migration—without burying new problems beneath old ones.

    00:00 Migration Challenges

    02:43 Tool Evaluation

    04:05 Migrating to Playwright: Considerations

    06:00 Migration Process

    06:25 Migrate: Easy First, Hardest Next

    09:37 Effective Migration Strategies for Tests

    10:23 Focusing Abstractions

    14:39 Optimize Test Code Migration

    15:44 Focus on Abstraction, Not Auto-Healing

    **1. Why Migrate—And When You Really Shouldn’t** Before any big move, Alan urges teams to get their “why” straight. Is your current tool unsupported? Is your framework truly incompatible, or are you missing some hidden potential? Migrate for the right reasons and make sure your decision isn’t just papering over problems that could follow you to the next tool.


    **2. Don’t Confuse Migration with a Rewrite** Too many teams treat migration like a rewrite—often with disastrous results. Alan emphasizes the importance of planning ahead, solving existing flakiness and coverage issues _before_ you move, and carefully evaluating all options (not just the shiny new tool you think you want).


    **3. The Secret Weapon: Abstraction Layers** The podcast’s biggest takeaway: Don’t migrate “test cases”—migrate _abstractions_. If your tests are full of direct calls like `webdriver.openPage()`, you’ve got work to do. Build out robust abstraction layers (think page objects or logical user flows) and keep your tests clean. When it comes time to migrate, you’ll only need to move those underlying layers, not thousands of individual test case scripts.
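The point about migrating abstractions rather than test cases can be sketched in TypeScript. This is a minimal illustration, not code from the episode: the `BrowserDriver` interface, `LoginPage` page object, and `RecordingDriver` are hypothetical names. The test logic depends only on the abstraction, so swapping WebDriver for Playwright means reimplementing one small layer, not rewriting every test script.

```typescript
// Tests talk only to this abstraction, never to a driver directly.
// Migrating tools means writing one new implementation of this interface.
interface BrowserDriver {
  goto(url: string): Promise<void>;
  fill(selector: string, value: string): Promise<void>;
  click(selector: string): Promise<void>;
}

// Page object: the only layer that knows about selectors and the driver.
class LoginPage {
  constructor(private driver: BrowserDriver) {}

  async login(user: string, pass: string): Promise<void> {
    await this.driver.goto("https://example.com/login");
    await this.driver.fill("#user", user);
    await this.driver.fill("#pass", pass);
    await this.driver.click("#submit");
  }
}

// A fake driver that records calls, standing in here for a real
// WebDriver- or Playwright-backed implementation.
class RecordingDriver implements BrowserDriver {
  calls: string[] = [];
  async goto(url: string) { this.calls.push(`goto:${url}`); }
  async fill(sel: string, v: string) { this.calls.push(`fill:${sel}=${v}`); }
  async click(sel: string) { this.calls.push(`click:${sel}`); }
}

async function main() {
  const driver = new RecordingDriver();
  await new LoginPage(driver).login("alice", "secret");
  console.log(driver.calls.join("\n"));
}
main();
```

A test written as `loginPage.login(...)` never mentions the tool, which is exactly what keeps the migration surface small.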


    **4. Taming Flakiness and the Risks of Retries** Migration is not the time to rely on self-healing tests or retries. Any test flakiness _must_ be rooted out and fixed before porting code. Bringing instability into a new stack only multiplies headaches later.


    **5. Harnessing AI—But Stay in Control** AI-assisted migration really shines at mapping old code to new, but Alan warns against “agentic” (hands-off) approaches. Use AI as a powerful tool, not as the driver—you need understanding and control to ensure things work reliably in CI/CD pipelines.


    **6. Learn Fast: Tackle the Hardest Stuff Early** Pro tip: Once you’re ready, start your migration with the simplest test, just to get going—then dive into the hardest, flakiest, most complex workflows. You’ll uncover potential blockers early and kick-start team learning.


    “We’re not migrating test cases when we change a tool. We’re migrating the physical interaction layer with our application... ”

    17 m
  • Building a Job-Hunting Portfolio for Software Development and Testing
    Sep 18 2025

    Should you have an online portfolio showcasing your Software Development and Testing skills to help get a job?

    It really depends on the recruitment process. But... if I'm recruiting, and you have a profile then I will have looked at it. So it better be good.

    Most Software Developers and Testers don't have public portfolios so that means you can really stand out.

    We'll cover a breakdown of the different project types: Learning Projects, Personal Projects, and Portfolio Projects.

    Lots of tips on how to adjust your GitHub profile and promote your projects.

    00:00 Value of Portfolio

    02:59 Stand Out Skills

    09:19 Project Types

    12:27 Showcase Projects

    19:39 Promoting Yourself

    21:44 Final Advice

    23 m
  • Respect in Software Testing and Development
    Sep 6 2025

    Software Testing deserves respect. Doesn't it? But so does every role in Software Development: managers, testers, QA, programmers, Product, Everyone. This is for you.

    Ever feel like you’re not getting the respect that you deserve in your job? This episode dives deep into the topic of Respect in tech, especially focusing on software testing versus programming.

    We look at why some roles seem to earn more respect, what that means for workplace culture, and how you can change things for yourself and your team. Respect isn’t just about manners or titles - it’s about how the system works and how we show up in our roles.

    If you’ve worked in agile projects, you might have heard, "Everyone is a developer." But some roles seem to get more recognition than others. Is this because of how we define our jobs, or is it just baked into the way our workplaces run? This episode is a call to action, urging everyone to look at respect at the personal, process, and craft levels.

    We’re breaking down the difference between self-respect, respect for others, and respect built into your team’s process. You'll see why just doing your job isn’t enough. You have to own your craft, communicate what you do, and make your contributions visible to earn genuine respect. By the end of this episode, you'll have practical steps to make respect part of your daily work, whether you’re writing code, testing, building products, or managing.


    00:00 Respect Dilemma

    02:41 Human Level Respect

    06:31 Self-Respect First

    10:17 Respect Cycle

    15:37 Knowledge Sharing

    18:53 Respectful Organizations

    21:26 Final Thoughts

    22 m
  • Software Testing Strategy vs Planning The Strategy Episode
    Aug 7 2025

    Software Testing typically confuses a Test Strategy document with the process of strategising. Alan Richardson simplifies the overcomplicated world of test strategy. Drawing on years of experience creating test strategies and plans, Alan explains the real difference between strategy, approach, and plan, arguing that what really matters isn’t following templates or writing elaborate documents, but actually thinking through problems, understanding risks, and communicating those ideas clearly.

    30 m
  • Software Testing Job Market with Jack Cole
    Jun 28 2025

    Are you trying to figure out how to break into the software testing job market or make your next big move? This episode of the Evil Tester Show dives deep into the realities of tech recruitment, job search strategies, and career planning for testers - with recruitment veteran Jack Cole from WEDOTech.uk - Whether you're an experienced Test manager, expert Tester, junior QA or even a programmer, Jack’s decades of Software Testing and Development industry experience will give you strategies and tips about what works in today’s competitive job seeking world.

    In this packed hour-long conversation, we cover everything from market trends, LinkedIn networking, and the recruitment pipeline, to building a career roadmap and even the AI hype machine. Grab your notebook, settle in, and get ready for real insights you can use – plus a few stories from the trenches and actionable tips for every step of your job hunt.

    1 h
  • Practicing Software Testing - Guest James Lyndsay
    Mar 18 2025

    Software Testing is a skill, and like all skills it requires practice; that's what makes you a practitioner of Software Testing. In this episode we're diving into the world of practice with James Lyndsay.

    In this conversation, your host Alan Richardson chats with James about the essence of practice in software testing, exploring how exercises and real-world scenarios can enrich our skills. James shares insights on his weekly online practice sessions and the interactive Test Lab concept, offering a dynamic playground for testers.

    Discover how practice blends with rehearsal and learning, and delve into the intriguing intersection of testing and development. With firsthand experiences in software experiments, fencing, and scientific investigation, James and Alan discuss the art of modeling and exploring software systems. Whether you're refining your testing techniques or embracing new perspectives with AI, this episode offers a wealth of wisdom for testers at all levels.

    Join us as we learn, laugh, and explore the world of testing practice. We hope you find inspiration for your own practice sessions. Don't forget to check out James's resources at https://workroom-productions.com for more testing challenges and exercises.

    52 m
  • Context in Context Driven Software Testing
    Jan 4 2025

    Effective Software Testing is highly contextual: we adapt what we do to the project and the process.

    In this episode of The Evil Tester Show, host Alan Richardson describes context-driven testing. Is there really such a thing as context-driven testing, or is it just a phrase we use to describe our testing approach? Alan explores the intricacies of context in testing, discussing its evolving nature, the impact of context on testing practices, and the challenges in defining it.

    From the origins of the term with James Bach, Brian Marick, Bret Pettichord, and Cem Kaner, to Alan’s personal insights on systems within systems and how context impacts our testing methodologies, this episode provides a comprehensive look at how context affects software testing. Alan also critiques the principles of context-driven testing and emphasizes the importance of adapting to projects without being swayed by ideologies.

    We explore how to navigate context in testing environments, adapt our approaches, and effectively challenge and evolve systems. Discover the importance of context-driven testing in software development, exploring models, adaptability, and useful practices.

    25 m
  • The Test Automation Pyramid Episode
    Jun 18 2023

    Software Testing and Development professionals often mention the Test Automation Pyramid when describing Test Automation. Let's do a deep dive and explore what that means.

    This episode covers the Test Automation Pyramid, created by Mike Cohn in 2008-2009 in the book "Succeeding With Agile". We will go beyond the diagram and look at the model that supports it, then deep dive into the model to explore its meaning in relation to Automated Execution Coverage, not Testing.

    - The model was created by Mike Cohn in 2008-2009 in the book "Succeeding With Agile."

    - The original model focused on UI, service level, and unit level automation.

    - Over the years, different interpretations and variations of the model have emerged.

    - The term "service level" in the model has led to ambiguity and different interpretations.

    - The diagram in the model is a simplified representation of a deeper underlying model.

    - The focus should be on achieving coverage at the most appropriate level in the system.

    - The model addresses the importance of avoiding duplication and redundancy in automated coverage.

    - The process and team structure can impact the effectiveness of the model.

    - The model can be reframed as an automated execution coverage pyramid.
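The points above about coverage at the most appropriate level, and about avoiding duplicated coverage, can be sketched in TypeScript. This is an illustrative example, not from the episode: `isValidEmail` is a hypothetical pure rule. Because the rule is covered directly at the unit level, UI-level automation only needs to confirm the form is wired to it, not re-enumerate every edge case through a browser.

```typescript
// A pure validation rule: unit-level coverage is the cheapest,
// fastest, most direct fit for this logic.
function isValidEmail(input: string): boolean {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(input);
}

// Unit level: cover the rule's edge cases directly, with no browser.
const cases: Array<[string, boolean]> = [
  ["alice@example.com", true],
  ["no-at-sign", false],
  ["a@b", false], // missing a dot in the domain part
];
for (const [input, expected] of cases) {
  if (isValidEmail(input) !== expected) {
    throw new Error(`unit check failed for "${input}"`);
  }
}
// UI level would then only confirm the form is wired to the rule,
// avoiding duplicated coverage of the same edge cases higher up.
console.log("all unit-level checks passed");
```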

    34 m