Lunchtime BABLing with Dr. Shea Brown


By: BABL AI, Jeffery Recker, Shea Brown

Presented by BABL AI, this podcast discusses all issues related to algorithmic bias, algorithmic auditing, algorithmic governance, and the ethics of artificial intelligence and autonomous systems. 2022 Lunchtime BABLing. Categories: Economics, Management & Leadership
Episodes
  • Model Drift to Bias and Discrimination: The Many Risks of AI: Part 2
    Mar 23 2026
    In Part 2 of this Lunchtime BABLing series on AI risk, Dr. Shea Brown, CEO of BABL AI, is joined again by Jeffery Recker to continue their lightning-round exploration of the real challenges organizations face when deploying AI. This episode dives deeper into critical concepts such as model drift, bias vs. discrimination, and growing explainability gaps in modern AI systems, especially as organizations increasingly rely on large language models and automated decision-making tools.
    Together, they discuss:
    - What model drift is and how organizations can detect and manage it
    - Why users (not just developers) should understand performance drift in AI systems
    - The important distinction between statistical bias and illegal discrimination
    - How bias can emerge even when demographic data isn't explicitly used
    - The role of diversity of thought and structured risk assessments in uncovering AI risks
    - Why explainability is becoming harder as AI models grow more complex
    - The trade-offs between performance, trust, fairness, and regulatory compliance
    The conversation also explores broader questions around how AI is being used today, the limitations of "black-box" systems, and why validation, testing, and governance are becoming essential capabilities for organizations adopting AI at scale.
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    35 min
  • Data Poisoning to Hallucinations: The Many Risks of AI Part 1
    Mar 9 2026
    In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker for a fast-paced, unscripted deep dive into the real risks behind today's AI systems. From data poisoning and model inversion to prompt injection, membership inference, and AI hallucinations, this lightning-round conversation breaks down the security, governance, and reliability challenges organizations must understand before deploying AI at scale.
    But this episode doesn't stop at definitions. Shea and Jeffery also explore:
    - The difference between direct vs. indirect prompt injection
    - Whether AI hallucinations can ever truly be "solved"
    - Why AI isn't a truth machine
    - Whether we're using AI the wrong way
    - What responsible validation should look like in enterprise AI deployment
    As AI systems move from experimentation into real-world decision-making, understanding these risks isn't optional; it's foundational. If you're working in AI governance, assurance, compliance, risk, or deploying AI inside your organization, this conversation will help you think more critically about how these systems actually behave.
    🎯 Take the FREE assessment here: https://shea-1mb3pmep.scoreapp.com/
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    34 min
  • AI Test, Evaluation, & Red Teaming Specialist Bootcamp
    Feb 23 2026
    In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown introduces the new AI Test, Evaluation, & Red Teaming Specialist Bootcamp, a hands-on, technical program designed to train the next generation of AI assurance professionals. Drawing directly from BABL AI's internal methodologies used to audit and evaluate high-risk AI systems across industries, this bootcamp addresses one of the most critical gaps in the AI ecosystem: the lack of practical training in how to design, execute, and interpret rigorous AI testing and red teaming in real-world contexts.
    Dr. Brown explains:
    - Why AI testing, evaluation, and red teaming are essential for high-risk AI systems
    - How BABL AI developed its internal, risk-driven testing and assurance frameworks
    - The difference between auditing AI systems and directly evaluating and validating them
    - What participants will learn during the five-week, hands-on bootcamp
    - The prerequisites, structure, and technical depth of the program
    - How this bootcamp will evolve into BABL's new AI Test, Evaluation, & Red Teaming Specialist Certification
    This exclusive early adopter cohort is limited to approximately 30 participants and is designed for professionals with foundational knowledge in AI auditing, governance, or assurance who want to develop practical technical capabilities in AI evaluation and red teaming. Participants will learn how to move systematically from an AI use case to defensible test results: building real test plans, executing evaluations, and developing assurance-relevant conclusions using BABL's proven frameworks.
    Take the test to see if you are a good candidate for the AI Test, Evaluation, & Red Teaming Specialist Bootcamp: https://zfrmz.eu/RBroC4VLZ9I41ihKl1XV
    Learn more about BABL AI Certifications: www.babl.ai
    About Lunchtime BABLing: Lunchtime BABLing is hosted by Dr. Shea Brown, CEO of BABL AI, an independent AI assurance firm that audits algorithms for bias, risk, and governance. The podcast explores AI auditing, governance, regulation, and technical assurance practices shaping the future of trustworthy AI.
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    28 min
No reviews yet