Episodios

  • Applied Intelligence: Mastering the Craft
    Oct 12 2025

    Welcome to the Deep Dive. In this episode, drawing on Accenture’s extensive analysis into The art of AI maturity.

    Artificial intelligence (AI) has evolved from a scientific concept to a societal constant, and companies across industries are relying on and investing in AI to drive logistics, improve customer service, and increase efficiency. However, despite these ever-expanding use cases, most organizations are barely scratching the surface of AI’s full potential.

    Accenture’s research found that only 12% of firms globally have advanced their AI maturity enough to achieve superior growth and business transformation. We call these companies the “AI Achievers”. Achieving this high performance requires understanding that while there is a science to AI, there is also an art to AI maturity. Achievers succeed not through a single sophisticated capability, but by combining strengths across strategy, processes, and people.

    Advancing AI maturity is no longer a choice, but an opportunity facing every industry and leader. In this episode, we will discuss how Accenture, the global consulting firm, defines the AI maturity journey

    Más Menos
    13 m
  • The Blueprint for the Future: Implementing AI for the Intelligence Community
    Oct 14 2025

    In this episode, we will dive deep into this crucial topic. You will gain a comprehensive understanding of:

    • The importance and benefits of AI for intelligence.

    • Essential use cases across the intelligence cycle—from Planning and Direction to Analysis and Dissemination.

    • Key insights on effectively implementing AI within the Intelligence Community.

    Our goal is to explore how AI can be harnessed to enhance intelligence operations and the critical decision-making processes. Stay with us as we guide you through the future of intelligence!

    Más Menos
    15 m
  • New Security Risk: Why LLM Scale Doesn't Deter Backdoor Attacks
    Oct 12 2025

    Today, we are discussing a startling finding that fundamentally challenges how we think about protecting large language models (LLMs) from malicious attacks. We’re diving into a joint study released by Anthropic, the UK AI Security Institute, and The Alan Turing Institute.

    As you know, LLMs like Claude are pretrained on immense amounts of public text from across the internet, including blog posts and personal websites. This creates a significant risk: malicious actors can inject specific text to make a model learn undesirable or dangerous behaviors, a process widely known as poisoning. One major example of this is the introduction of backdoors. These are specific phrases, like the trigger

    Now, previous research often assumed that attackers needed to control a percentage of the training data. If true, attacking massive, frontier models would require impossibly large volumes of poisoned content.

    But the largest poisoning investigation to date has found a surprising result. In their experimental setup, they found that poisoning attacks require a near-constant number of documents regardless of model and training data size. This completely challenges the assumption that larger models need proportionally more poisoned data.

    The key takeaway is alarming: researchers found that as few as 250 malicious documents were sufficient to successfully produce a "backdoor" vulnerability in LLMs ranging from 600 million parameters up to 13 billion parameters—a twenty-fold difference in size. Creating just 250 documents is trivial compared to needing millions, meaning data-poisoning attacks may be far more practical and accessible than previously believed.

    We’ll break down the technical details, including the specific "denial-of-service" attack they tested, which forces the model to produce random, gibberish text when it encounters the trigger. We will also discuss why these findings favor the development of stronger defenses and what questions remain open for future research.

    Stay with us as we explore the vital implications of this major security finding on LLM deployment and safety.

    Más Menos
    13 m
  • 2025 Cybersecurity Awareness Month: Building a Cyber Strong America
    Oct 12 2025

    In this episode, we draw on information from our annual Cyber Security Awareness Kick-off webinar, featuring some truly fantastic speakers. We'll hear foundational guidance from the Cybersecurity and Infrastructure Security Agency (CISA), critical threat intelligence from the US Secret Service, and insights from the FBI Cyber Division. We’ll also gain perspective from the State of Ohio Chief Information Security Officer.

    We are diving into the most urgent threats facing our society today. Our panelists will explore the implications of the rapid evolution of AI, discussing how 43% of people admit to sharing sensitive work information with AI without their employer’s knowledge and how the technology is accelerating both offensive and defensive capabilities.

    We will also tackle the rising tide of scams and global cyber crime—a threat driven by transnational organized criminals—and how we must increase cross-sector data sharing to combat fraud. You’ll learn tangible steps you can take today, such as implementing effective multifactor authentication (MFA) and addressing the liability posed by legacy, end-of-life devices.

    Whether you are a seasoned cyber professional or just interested in securing your family online, this conversation will provide valuable insights into where the industry is headed.

    Stick around for this essential briefing on Building a Cyber Strong America.

    Más Menos
    16 m
  • The Perilous World of AI Data Security
    Sep 10 2025

    In this episode, we’re diving into one of the most critical challenges in artificial intelligence—data security. From supply chain risks and maliciously modified data to data drift that can quietly erode accuracy, protecting information throughout the AI system lifecycle is essential.

    We’ll explore insights from global cybersecurity agencies, including best practices and mitigation strategies designed to safeguard the integrity of data that powers AI and machine learning systems. Because in the end, the quality and security of data determine the trustworthiness of AI itself.

    So, let’s unpack how securing data can strengthen the future of AI.

    Más Menos
    20 m
  • Decoding the NIST AI Risk Framework: Building Trustworthy AI in a Complex World
    Sep 2 2025

    In this episode, we explore the NIST Artificial Intelligence Risk Management Framework, also known as the AI RMF 1.0. Released in January 2023, this free resource from NIST is designed to help organizations manage the unique risks of AI while promoting responsible and trustworthy use.

    We’ll break down the seven characteristics of trustworthy AI—like safety, security, accountability, fairness, and more—and dive into the four core functions: Govern, Map, Measure, and Manage. These principles guide organizations through the entire AI lifecycle, ensuring AI systems are not only powerful but also reliable and ethical.

    So, if you’re looking to strengthen your understanding of AI risk management and build trust in the future of AI, you’re in the right place. Let’s get started with the NIST AI RMF 1.0.

    Más Menos
    22 m
  • Beyond the Buzzwords: How Goldman Sachs Manages Cyber Risk
    Aug 27 2025

    In this episode, we’re diving into how Goldman Sachs, one of the world’s leading investment banks, manages cyber risk. Forget the buzzwords—this is about real-world strategies in operational resilience, business continuity, and disaster recovery. You’ll hear how these practices protect clients, stabilize markets, and keep the firm running through disruption. Our goal? To give you a clear shortcut to understanding Goldman’s multi-layered approach to digital security and operational stability.

    Más Menos
    20 m
  • Prioritizing Cybersecurity Risk and Opportunity in Enterprise Management
    Aug 25 2025

    In this episode, we unpack NIST IR 8286B-upd1, which guides organizations on aligning cybersecurity risk with enterprise goals. We cover how to prioritize risks, choose effective responses (accept, avoid, transfer, mitigate), and use the Cybersecurity Risk Register (CSRR) to communicate clearly with leadership. We also highlight the value of considering both threats and opportunities to strengthen enterprise resilience.

    Más Menos
    20 m
adbl_web_global_use_to_activate_DT_webcro_1694_expandible_banner_T1