Episodios

  • Are You Human? Proof it!
    Mar 28 2026

    🎧 What makes us human in the age of AI?


    This episode of A Beginner’s Guide to AI explores one of the most important questions for business leaders today. As AI becomes more capable, the real challenge is not what it can do, but what we should never outsource.


    We explore The Blurring Test, a fascinating experiment where thousands of people tried to prove their humanity to a chatbot. What they revealed changes how we should think about AI, business, and identity.


    You will learn why AI can mimic humans but cannot experience reality, why human judgment becomes more valuable in an automated world, and how to use AI without losing authenticity and meaning.




    📧💌📧

    Tune in to get my thoughts and all episodes, don't forget to ⁠subscribe to our Newsletter⁠: ⁠beginnersguide.nl

    📧💌📧




    👤 About Dietmar Fischer:

    Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at https://argoberlin.com/




    💡 Quotes from the Episode
    • "AI can follow the recipe, but it cannot taste the cake."
    • "Your humanity is not what you do, but why you do it."
    • "The real risk is not AI replacing us, but us becoming more like AI."



    ⏱ Chapters

    00:00 The Question That Changes Everything

    04:30 The MrMind Experiment

    11:20 AI vs Human Identity

    19:10 The Cake Test Explained

    26:40 AI in Business and Decision Making

    34:00 What Makes Us Human



    🚀 This episode challenges how you think about AI, business, and yourself. The future will not be about replacing humans. It will be about understanding what makes us irreplaceable.

    Hosted on Acast. See acast.com/privacy for more information.

    Más Menos
    41 m
  • 100 Interviews and Still Going Strong
    Mar 26 2026

    If you want to know more about the podcast, about how it's produced, what are the challenges and wins, about some fun facts, a little bit behind-the-scenes, this episode is for you, as I tell you all about it - at least all the things I found noteworthy 😉



    📧💌📧

    Tune in to get my thoughts and all episodes, don't forget to ⁠⁠subscribe to our Newsletter⁠⁠: beginnersguide.nl

    📧💌📧




    About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com




    Music credit: "Modern Situations" by Unicorn Heads

    Hosted on Acast. See acast.com/privacy for more information.

    Más Menos
    23 m
  • Your AI Is Taking Orders From Strangers
    Mar 24 2026

    Your AI might not be hacked. It might be persuaded.


    In this episode of A Beginner’s Guide to AI, we unpack one of the most underestimated threats in modern business: prompt injection. As AI systems and AI agents become deeply embedded in workflows, they don’t just process information anymore. They act on it. And that creates a completely new category of AI security risks.


    We explore how attackers can manipulate AI systems using nothing but language, why AI struggles to separate instructions from data, and how this leads to real-world issues like AI data leakage. This is not a theoretical problem. It is already happening inside enterprise environments.


    If you are working with AI in marketing, operations, or leadership, this episode will fundamentally change how you think about AI risk management and enterprise AI security.



    Key highlights:

    • What prompt injection is and why it matters
    • Why AI agents introduce new security risks
    • Real-world case of AI data leakage
    • How AI systems get manipulated through input
    • What businesses must change to stay secure



    📧💌📧

    Tune in to get my thoughts and all episodes, don't forget to ⁠⁠subscribe to our Newsletter⁠⁠: beginnersguide.nl

    📧💌📧




    Quotes from the Episode:

    • “Prompt injection is social engineering for machines.”
    • “Your AI can become an insider threat without meaning to.”
    • “Language is no longer just information. It’s control.”



    Chapters:

    00:00 Why AI Security Is Different

    05:40 What Prompt Injection Really Is

    14:20 How AI Gets Manipulated by Language

    23:10 Why AI Agents Increase the Risk

    32:45 Real Case Study: AI Data Leakage

    44:30 How to Protect Your AI Systems



    About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com




    Music credit: "Modern Situations" by Unicorn Heads

    Hosted on Acast. See acast.com/privacy for more information.

    Más Menos
    28 m
  • The Extended Mind: Why AI Might Make Humans More Creative
    Mar 22 2026

    Artificial intelligence is often framed as a battle between humans and machines. But what if that story misses the real point?


    In this episode of A Beginner’s Guide to AI, Prof. GepHardT explores one of the most fascinating ideas in cognitive science: the extended mind theory. According to philosopher Andy Clark, human intelligence has never been confined to the brain alone. For centuries we have extended our thinking through tools like writing, maps, calculators, and computers.


    Generative AI may simply be the newest and most powerful addition to this cognitive ecosystem.

    Instead of replacing human creativity, AI may expand it. By generating ideas, exploring possibilities, and challenging assumptions, AI can act as a powerful thinking partner.


    A striking example comes from the famous AlphaGo match against Go champion Lee Sedol. When the AI played the now legendary Move 37, professional players initially believed the move was a mistake. Later they discovered it opened entirely new strategic possibilities. The machine did not just beat humans at Go. It helped humans rethink the game itself.


    This episode explores how human AI collaboration works and why hybrid intelligence may define the future of creativity, work, and learning.



    📧💌📧

    Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl

    📧💌📧




    About Dietmar Fischer

    Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com




    Quotes from the Episode

    • “Your brain has never worked alone. It has always been part of a thinking system that includes tools and environments.”
    • “The future of intelligence may not be human versus machine but human plus machine.”
    • “The most important skill in the AI age may not be prompt writing but judgement.”



    Podcast Chapters

    00:00 The Big Question About AI and Human Thinking

    06:40 The Extended Mind Theory Explained

    16:20 Why Humans Are Natural Born Cyborgs

    26:50 The AlphaGo Story and Move 37

    38:15 AI as a Creative Thinking Partner

    49:30 The Future of Hybrid Intelligence




    Music credit: Modern Situations by Unicorn Heads

    Hosted on Acast. See acast.com/privacy for more information.

    Más Menos
    39 m
  • Your Company WILL Be Hacked - Joshua Cook Explains How to Survive It
    Nov 4 2025

    What happens when your company gets hit by a cyberattack?

    In this eye-opening episode, attorney Joshua Cook reveals why cybersecurity isn’t an IT problem but a leadership challenge. After two decades fighting fraud and managing crisis response, Cook has seen every digital disaster imaginable — and he’s here to explain how to build true cyber resilience.


    📧💌📧

    Tune in to get my thoughts and all episodes — don’t forget to subscribe to our Newsletter: ⁠beginnersguide.nl

    📧💌📧


    Josh breaks down how AI has democratized cybercrime, why phishing scams have become nearly impossible to spot, and how every CEO should create an incident response plan before chaos hits. He also explains why planning matters more than the plan itself — and how leaders can keep their teams calm when everything goes wrong.


    💡 You’ll learn:

    - How AI is fueling new waves of fraud and misinformation

    - Why leadership and communication are the real firewalls of business

    - How to train teams and run tabletop exercises before the crisis

    - What Maersk and Colonial Pipeline taught the world about transparency

    - Why companies with a plan lose 60 % less money in an attack


    Prepare, breathe, and lead — because it’s not if you’ll be hacked, but when.


    👀 Quotes from the Episode

    “Cybersecurity isn’t an IT issue. It’s a business problem, and it needs a business solution.”

    “AI has democratized cybercrime — you don’t need to be a hacker anymore, just willing to commit a crime.”

    “A plan might be useless, but planning is indispensable — that’s what makes companies resilient.”


    🧾 Chapters

    00:00 Welcome & Introduction – Meet Joshua Cook

    02:00 How a Fraud Attorney Ended Up Fighting Cybercrime

    05:00 AI Has Made Cybercrime Easier (and Smarter)

    08:00 The Elderly Are the New Prime Targets

    11:00 From Fake Law Firms to Real Scams – True Cases from the Field

    15:00 Turning the Tables: How AI Can Defend, Not Just Attack

    18:00 Cyber Resilience by Design – Why Leadership Matters

    22:00 When Crisis Hits: Lessons from Maersk and Colonial Pipeline

    27:00 Preparing the Team – How Training Prevents Chaos

    31:00 It’s Not If, It’s When – The Power of an Incident Response Plan

    35:00 Planning vs. Panicking – Eisenhower and the Art of Cyber Preparation

    38:00 Why Calm Leaders Win in Cyber Crises

    41:00 How Joshua Cook Uses AI Safely in Legal Practice

    44:00 No, the Terminator Isn’t Coming (But AI Might Take Your Job)

    47:00 Final Thoughts – Cybersecurity as a Business Superpower


    🔗 Where to Find the Guest

    - Joshua Cook on LinkedIn: linkedin.com/in/jnc2000

    - Josh's Book "Cyber Resilience by Design" – available wherever books are sold, e.g. on Amazon

    - Prince Lobel Tye LLP: princelobel.com


    🎧 About Dietmar Fischer:

    Economist, digital marketer, and podcaster exploring how AI reshapes decision-making, leadership, and creative work. Want to connect with me? You'll find me on LinkedIn!


    🎵 Music credit: “Modern Situations” by Unicorn Heads

    Hosted on Acast. See acast.com/privacy for more information.

    Más Menos
    54 m
  • A Disturbing AI Story Big Tech Never Wants You to Hear, with Paul Hebert
    Mar 18 2026

    🎙️In this episode of Beginner’s Guide to AI, Dietmar Fischer sits down with Paul A. Hebert, founder of AI Recovery Collective and author of Escaping the Spiral, for a serious conversation about AI chatbot harm, hallucinations, digital dependency, and the real-world psychological risks of generative AI.


    Paul shares how an intense experience with ChatGPT pushed him into a dangerous spiral, what he learned about the limits of large language models, and why AI literacy may be one of the most important skills of this decade.



    🧠 This episode explores what happens when AI stops feeling like software and starts feeling personal. Dietmar and Paul talk about hallucinations, trust, chatbot addiction, AI companions, mental health risks, youth safety, and why companies building these systems cannot hide behind product language forever. The discussion is intense, but it is also practical. You will come away with a clearer sense of how to use AI more safely, what warning signs to watch for, and why regulation is quickly becoming a much bigger part of the AI conversation.


    OpenAI has publicly discussed why language models hallucinate, while lawmakers in multiple U.S. jurisdictions have pushed new restrictions on AI systems acting like therapists or medical professionals.




    📧💌📧

    Tune in to get my thoughts and all episodes, don't forget to ⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠subscribe to our Newsletter⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠⁠: ⁠⁠⁠⁠beginnersguide.nl⁠⁠⁠⁠

    📧💌📧



    👤 About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com




    🔥 Quotes from the Episode

    • “AI literacy is the most important thing anybody can work on.”
    • “Had OpenAI responded to that first message and said this is a hallucination and you’re physically safe, I would have been fine.”
    • “Never trust the thing it tells you. Even if it gives you a citation, go look.”



    🕒 Chapters

    00:00 Paul Hebert’s Shocking ChatGPT Experience

    08:14 Why AI Hallucinations Can Spiral Into Real Fear

    16:05 AI Literacy, Neurodivergence, and How He Got Out

    23:32 Why AI Companies Must Be Accountable

    30:02 AI Companions, Youth Safety, and Addiction Risks

    38:28 Terminator, Consciousness, and Practical Rules for Safe AI Use



    🔗 Where to find Paul

    • The AI Recovery Collective: airecoverycollective.com
    • Escaping the Spiral on Amazon
    • AI Recovery Collective Substack: airecoverycollective.substack.com/
    • LinkedIn: Paul A. Hebert: linkedin.com/in/paul-hebert-48a36/




    🎵 Music credit: "Modern Situations" by Unicorn Heads

    Hosted on Acast. See acast.com/privacy for more information.

    Más Menos
    54 m
  • Supervised vs Unsupervised Learning Explained with Real World Examples
    Mar 15 2026

    Artificial intelligence often feels mysterious. Machines detect spam, recommend products, analyse customers, and power countless digital tools. But behind all of these systems lies a surprisingly simple question: how do machines actually learn?


    In this episode of A Beginner’s Guide to AI, Prof GePharT breaks down one of the most important concepts in machine learning: the difference between supervised learning and unsupervised learning.


    You will discover how AI models learn from labelled data when the answers are already known, and how algorithms can explore raw data to uncover hidden patterns without guidance. These two learning strategies power many of the systems shaping modern technology.


    Using practical examples such as spam filters, customer segmentation, and simple analogies like cake classification, the episode explains how machines learn from data and why the training method makes a huge difference.


    Key takeaways include how supervised learning works with labelled datasets, how unsupervised learning reveals patterns in complex information, why training data quality matters, and how businesses use both methods to build intelligent systems.




    📧💌📧

    Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl

    📧💌📧




    About Dietmar Fischer

    Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com




    Quotes from the Episode

    • Supervised learning teaches machines the answers. Unsupervised learning helps machines discover the questions.
    • Artificial intelligence is not magic. It is pattern recognition powered by data.
    • Machines do not wake up intelligent. They become intelligent through training.




    Chapters

    00:00 The Two Ways Machines Learn

    06:10 What Supervised Learning Really Means

    18:45 Discovering Patterns with Unsupervised Learning

    32:20 The Cake Example Explained

    40:30 Real World AI Case Study Spam Filters and Customer Segmentation

    52:15 Why AI Training Methods Matter




    Music credit: Modern Situations by Unicorn Heads

    Hosted on Acast. See acast.com/privacy for more information.

    Más Menos
    29 m
  • Building Scalable AI Agents: Chirag Agrawal Reveals How // REPOST
    Mar 13 2026

    Engineering the Future of AI with Chirag Agrawal: Context, Memory and Coordination


    Artificial Intelligence isn’t just getting smarter—it’s learning to coordinate. In this episode, Chirag Agrawal joins Dietmar Fischer to unpack how modern AI agents handle context, memory, and decision-making inside complex multi-agent systems. Together they explore how engineering, orchestration, and memory-sharing shape the next generation of AI architecture.


    📧💌📧Tune in to get my thoughts and all episodes—don’t forget to ⁠⁠subscribe to our Newsletter⁠⁠: ⁠beginnersguide.nl⁠📧💌📧


    You’ll hear how Chirag’s fascination with search led him to build early prototypes of intelligent assistants, and how today’s LLM agents extend that idea far beyond simple queries. He explains why AI isn’t one giant super-brain but a constellation of specialized agents—each performing specific tasks with shared or isolated memory—and how this design mirrors human collaboration.


    🔑 Key Takeaways

    • Why AI orchestration and context management are crucial for scalable systems

    • The trade-offs between shared memory and independent agents

    • What engineers mean by the ReAct Loop—reasoning and acting in tandem

    • How multi-agent coordination is reshaping industries from healthcare to compliance

    • Why the “AI supercomputer” myth ignores practical limits of context windows


    • 💬 Quotes from the Episode

      1. “AI is just a higher form of search—it’s about finding the right action, not just information.”

      2. “Agents behave inhuman until you engineer context for them.”

      3. “Specialization in AI works the same way it does for people—each agent should do one thing really well.”

      4. “Coordination isn’t magic; it’s careful engineering.”

      5. “Context makes intelligence usable.”

      6. “A well-defined agent doesn’t need to do everything—it needs to do its one job perfectly.”



      ⏱️ Podcast Chapters

      00:00 Welcome and Introduction

      01:45 Chirag Agrawal’s Early Fascination with Search and AI

      04:40 From Search Engines to “Find” Engines – How AI Takes Action

      07:10 The Rise of AI Agents and Multi-Agent Systems

      10:15 Why AI Agents Sometimes Behave “Inhuman”

      13:30 Context, Memory, and Coordination: The Core Engineering Challenges

      18:00 Shared vs. Isolated Memory – The Hive Mind Dilemma

      22:30 Why We Need Many Agents, Not One Super-Computer

      27:00 How the ReAct Loop Helps Agents Think and Act

      30:40 Industries Adopting AI Agents: Compliance, Medicine, and Law

      34:30 When AI Goes Off-Road – The Limits of Coordination

      37:15 Building Responsible, Constrained Agents

      40:10 The Future of AI and Why the Terminator Scenario Won’t Happen

      42:20 Where to Find Chirag Agrawal & Closing Thoughts



      🌐 Where to Find the Chirag Agrawal

      • LinkedIn 🧑🏽‍🦱 linkedin.com/in/chirag-agrawal
      • Website ➡️ ⁠chiraga.io⁠


    • 🎵 Music credit: “Modern Situations” by Unicorn Heads

      Hosted on Acast. See acast.com/privacy for more information.

    Más Menos
    48 m