Cybersecurity Podcast Podcast Por Sagamore.ai arte de portada

Cybersecurity Podcast

Cybersecurity Podcast

De: Sagamore.ai
Escúchala gratis

OFERTA POR TIEMPO LIMITADO | Obtén 3 meses por US$0.99 al mes

$14.95/mes despues- se aplican términos.

Welcome to the Cybersecurity Podcast — your go-to source for all things cyber! Whether you’re a seasoned professional or just diving in, we break down the latest threats, modern attack tactics, cloud security trends, AI-driven defenses, major data breaches, and the future of quantum computing and automation. Each episode is packed with insights, real-world stories, and expert advice to help you stay ahead of the curve.

Join our community of learners and defenders as we explore how to secure the digital world — together.

Sagamore.ai 2025
Episodios
  • Applied Intelligence: Mastering the Craft
    Oct 12 2025

    Welcome to the Deep Dive. In this episode, drawing on Accenture’s extensive analysis into The art of AI maturity.

    Artificial intelligence (AI) has evolved from a scientific concept to a societal constant, and companies across industries are relying on and investing in AI to drive logistics, improve customer service, and increase efficiency. However, despite these ever-expanding use cases, most organizations are barely scratching the surface of AI’s full potential.

    Accenture’s research found that only 12% of firms globally have advanced their AI maturity enough to achieve superior growth and business transformation. We call these companies the “AI Achievers”. Achieving this high performance requires understanding that while there is a science to AI, there is also an art to AI maturity. Achievers succeed not through a single sophisticated capability, but by combining strengths across strategy, processes, and people.

    Advancing AI maturity is no longer a choice, but an opportunity facing every industry and leader. In this episode, we will discuss how Accenture, the global consulting firm, defines the AI maturity journey

    Más Menos
    13 m
  • The Blueprint for the Future: Implementing AI for the Intelligence Community
    Oct 14 2025

    In this episode, we will dive deep into this crucial topic. You will gain a comprehensive understanding of:

    • The importance and benefits of AI for intelligence.

    • Essential use cases across the intelligence cycle—from Planning and Direction to Analysis and Dissemination.

    • Key insights on effectively implementing AI within the Intelligence Community.

    Our goal is to explore how AI can be harnessed to enhance intelligence operations and the critical decision-making processes. Stay with us as we guide you through the future of intelligence!

    Más Menos
    15 m
  • New Security Risk: Why LLM Scale Doesn't Deter Backdoor Attacks
    Oct 12 2025

    Today, we are discussing a startling finding that fundamentally challenges how we think about protecting large language models (LLMs) from malicious attacks. We’re diving into a joint study released by Anthropic, the UK AI Security Institute, and The Alan Turing Institute.

    As you know, LLMs like Claude are pretrained on immense amounts of public text from across the internet, including blog posts and personal websites. This creates a significant risk: malicious actors can inject specific text to make a model learn undesirable or dangerous behaviors, a process widely known as poisoning. One major example of this is the introduction of backdoors. These are specific phrases, like the trigger

    Now, previous research often assumed that attackers needed to control a percentage of the training data. If true, attacking massive, frontier models would require impossibly large volumes of poisoned content.

    But the largest poisoning investigation to date has found a surprising result. In their experimental setup, they found that poisoning attacks require a near-constant number of documents regardless of model and training data size. This completely challenges the assumption that larger models need proportionally more poisoned data.

    The key takeaway is alarming: researchers found that as few as 250 malicious documents were sufficient to successfully produce a "backdoor" vulnerability in LLMs ranging from 600 million parameters up to 13 billion parameters—a twenty-fold difference in size. Creating just 250 documents is trivial compared to needing millions, meaning data-poisoning attacks may be far more practical and accessible than previously believed.

    We’ll break down the technical details, including the specific "denial-of-service" attack they tested, which forces the model to produce random, gibberish text when it encounters the trigger. We will also discuss why these findings favor the development of stronger defenses and what questions remain open for future research.

    Stay with us as we explore the vital implications of this major security finding on LLM deployment and safety.

    Más Menos
    13 m
Todavía no hay opiniones