
What’s the BUZZ? — AI in Business


By: Andreas Welsch

“What’s the BUZZ?” is a live show where leaders in artificial intelligence, generative AI, agentic AI, and automation share their insights and experiences on how they have successfully turned technology hype into business outcomes.

Each episode features a different guest who shares their journey in implementing AI and automation in business. From overcoming challenges to seeing real results, our guests provide valuable insights and practical advice for those looking to leverage the power of AI, generative AI, agentic AI, and process automation.

Since 2021, AI leaders have shared their perspectives on AI strategy, leadership, culture, product mindset, collaboration, ethics, sustainability, technology, privacy, and security.

Whether you're just starting out or looking to take your efforts to the next level, “What’s the BUZZ?” is the perfect resource for staying up-to-date on the latest trends and best practices in the world of AI and automation in business.

**********
“What’s the BUZZ?” is hosted and produced by Andreas Welsch, a top-10 AI advisor, thought leader, speaker, and author of the “AI Leadership Handbook”. He is the Founder & Chief AI Strategist at Intelligence Briefing, a boutique AI advisory firm.

© 2025 Intelligence Briefing — What’s the BUZZ? — All rights reserved.
Economics, Management, Management & Leadership
Episodes
  • Teaching AI Agents Ethical Behavior (Rebecca Bultsma)
    Dec 20 2025

    Can you trust an AI agent to act in line with your values — and who’s responsible when it doesn’t?

    In this episode, Andreas Welsch talks with AI ethics consultant Rebecca Bultsma about the pitfalls of rushing AI agents into business workflows and practical steps leaders should take before handing autonomy to software. Rebecca draws on her early ChatGPT experiments and academic work in data & AI ethics to explain why generative AI raises fresh ethical risks and how organizations can reduce harm.

    What you’ll learn:

    • Why generative AI and agents amplify old AI ethics problems (bias, hidden assumptions, and Western-centric worldviews).
    • Why you should build internal understanding first: experiment with low-stakes, traceable use cases before deploying public agents.
    • The importance of audit trails, explainability, and oversight to trace decisions and assign accountability when things go wrong (see the sketch after this list).
    • Practical red flags: agents that transact autonomously, weak logging, and complacency about vendor claims.
    • A legal reality check: new laws (like California’s chatbot rules) are emerging and could increase liability for organizations that deploy chatbots or agents prematurely.
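
    As a minimal illustration of what a traceable, auditable agent action could look like in practice (a Python sketch, not anything shown in the episode; every name and field is hypothetical), an append-only action log might be as simple as:

      # Hypothetical sketch: an append-only audit log so each agent action can be
      # traced back to its inputs, rationale, and (optional) human approver.
      import json
      import time
      import uuid
      from dataclasses import dataclass, field, asdict
      from typing import Optional

      @dataclass
      class AgentActionRecord:
          agent_id: str
          action: str                 # e.g. "draft_reply", "issue_refund"
          inputs: dict                # what the agent saw when it acted
          rationale: str              # the agent's stated reasoning, stored verbatim
          approved_by: Optional[str]  # human approver, if any
          record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
          timestamp: float = field(default_factory=time.time)

      def append_record(record: AgentActionRecord, path: str = "agent_audit.jsonl") -> None:
          """Append one JSON line per action; earlier entries are never rewritten."""
          with open(path, "a", encoding="utf-8") as f:
              f.write(json.dumps(asdict(record)) + "\n")

      # Usage: log a low-stakes, traceable action before granting more autonomy.
      append_record(AgentActionRecord(
          agent_id="support-triage-01",
          action="draft_reply",
          inputs={"ticket_id": "T-123"},
          rationale="Customer asked for an invoice copy; drafted the standard response.",
          approved_by="j.doe",
      ))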

    The top takeaways:

    • Learn by experimenting personally and internally in your organization to discover where agents fail.
    • Start small with low-stakes, narrowly scoped tasks you can monitor and audit.
    • Don’t rush; rather, observe others' failures, train your people, and build governance before going public.

    If you’re a leader evaluating agents or responsible for AI governance, this episode gives clear, actionable advice for keeping your organization out of the headlines for the wrong reasons. Tune in to hear the whole conversation and learn how to turn AI hype into safer business outcomes.

    Questions or suggestions? Send me a Text Message.

    Support the show

    ***********
    Disclaimer: Views are the participants’ own and do not represent those of any participant’s past, present, or future employers. Participation in this event is independent of any potential business relationship (past, present, or future) between the participants or between their employers.


    Level up your AI Leadership game with the AI Leadership Handbook (https://www.aileadershiphandbook.com) and shape the next generation of AI-ready teams with The HUMAN Agentic AI Edge (https://www.humanagenticaiedge.com).

    More details:
    https://www.intelligence-briefing.com
    All episodes:
    https://www.intelligence-briefing.com/podcast
    Get a weekly thought-provoking post in your inbox:
    https://www.intelligence-briefing.com/newsletter

    16 min
  • Designing Workforces for Agentic AI: What HR Must Do Next (Todd Raphael)
    Dec 6 2025

    What actually changes when AI agents become part of your workforce — and which human skills still matter most?

    In this episode, host Andreas Welsch talks with HR and talent-intelligence veteran Todd Raphael about the practical realities of bringing agentic AI into organizations. They move beyond proofs of concept to ask the tough questions: How do agents fit into daily workflows, which invisible human contributions should you protect, and how should HR and IT collaborate to redesign roles, org charts, and the employee lifecycle?

    Listen for concrete thinking and strategic framing, including:

    • The hidden value humans bring: Why many critical contributions (trusted relationships, customer touchpoints, institutional memory) don’t appear on job descriptions — and what that means when you automate tasks.
    • Rethinking structure and advancement: How flatter org models and new measures of impact (knowledge, networks, influence) may change who gets promoted and how leadership is defined.
    • HR’s seat at the table: Why HR is uniquely positioned to plan holistically for hire-to-retire changes, from skills-based hiring to internal marketplaces, reskilling, and retention when agents handle more tasks.


    You’ll also hear examples and practical prompts for leaders: identify the intangible work that must remain human, map tasks vs. relationships before automating, and start workforce planning that considers people and agents together.

    If you’re an HR leader, people manager, or technology decision-maker trying to turn agent hype into durable business outcomes, this episode gives you a playbook to start redesigning work the right way.

    Tune in now to learn how to protect human advantage and build an effective human+agent workforce.


    27 min
  • Agents Need IDs: How to Authenticate & Score Agent Trust (Tim Williams)
    Nov 22 2025

    When AI agents can self-spawn, act at machine speed, and delete their own trails, identity and trust become business-critical.

    In this episode, Andreas Welsch talks with Tim Williams, an experienced practitioner who has helped organizations commercialize AI, about the security gaps agentic AI exposes and practical ways to close them. Tim explains why traditional usernames, passwords, and persistent tokens won't cut it, how trust for agents should be treated like a credit score rather than a binary yes/no, and why observability and transaction-level controls are essential.

    Highlights you’ll get from the conversation:

    • Why agents operate at a different scale and cadence than humans, and the new risks that creates.
    • Real breach lessons (e.g., persistent token compromises) that show why persistent access is dangerous.
    • The concept of sliding trust: using a trust score to gate actions (low-risk vs. high-risk transactions).
    • Short-lived, transaction-based approvals and why persistent credentials must be replaced (see the sketch after this list).
    • Why cryptographically verifiable, immutable identifiers matter for accountability, and where blockchain can help.
    • Practical governance: observability, human-in-the-loop checkpoints, and preparing infrastructure in parallel with agent adoption.
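
    As a minimal illustration of sliding trust and short-lived, transaction-based approvals (a Python sketch, not anything presented in the episode; all names, scores, and thresholds are hypothetical assumptions), gating an agent action by trust score might look like this:

      # Hypothetical sketch of "sliding trust": gate agent actions by a numeric trust
      # score and hand out only short-lived, per-transaction approvals instead of
      # persistent credentials. All names, scores, and thresholds are illustrative.
      import time
      import secrets
      from typing import Optional

      # Higher-risk actions require a higher trust score (0.0 to 1.0).
      RISK_THRESHOLDS = {"read_report": 0.2, "create_ticket": 0.5, "issue_refund": 0.8}

      def approve_transaction(agent_trust: float, action: str, ttl_seconds: int = 60) -> Optional[dict]:
          """Return a short-lived approval for this one action, or None to escalate."""
          required = RISK_THRESHOLDS.get(action, 1.0)   # unknown actions require maximum trust
          if agent_trust < required:
              return None                               # below threshold: route to a human
          return {
              "token": secrets.token_urlsafe(16),       # single-use token, not a standing credential
              "action": action,
              "expires_at": time.time() + ttl_seconds,  # approval expires with the transaction
          }

      # Usage: a mid-trust agent may create tickets but not issue refunds.
      print(approve_transaction(0.6, "create_ticket"))  # dict with a short-lived token
      print(approve_transaction(0.6, "issue_refund"))   # None -> requires human review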

    Who this episode is for: business leaders deciding what to delegate to agents; security and identity teams rethinking access; product and platform builders designing safe workflows for autonomous systems.

    If you want actionable guidance on how to let agents accelerate your business without exposing you to runaway risk, tune in and learn how to turn agent hype into reliable business outcomes.


    26 min
No reviews yet