AI Security Ops Podcast Por Black Hills Information Security arte de portada

AI Security Ops

AI Security Ops

De: Black Hills Information Security
Escúchala gratis

Join in on weekly podcasts that aim to illuminate how AI transforms cybersecurity—exploring emerging threats, tools, and trends—while equipping viewers with knowledge they can use practically (e.g., for secure coding or business risk mitigation).© 2025 Black Hills Information Security Política y Gobierno
Episodios
  • LiteLLM Supply Chain Compromise | Episode 47
    Apr 13 2026

    In this episode of BHIS Presents: AI Security Ops, the team breaks down the LiteLLM supply chain compromise–a real-world attack that shows how AI systems are being breached through the same old software supply chain weaknesses.

    What initially looked like a bad release quickly escalated into a full-scale compromise affecting a library downloaded millions of times per day. But LiteLLM wasn’t the starting point–it was just one link in a much larger attack chain involving compromised security tools, CI/CD pipelines, and stolen publishing credentials.

    The result? Malicious packages distributed at scale, harvesting secrets, enabling lateral movement, and establishing persistence across affected systems.

    We dig into:
    • What LiteLLM is and why it’s such a high-value target
    • How the attack chain started with compromised security tooling (Trivy, Checkmarx)
    • How unpinned dependencies enabled the compromise
    • The role of CI/CD pipelines in exposing sensitive credentials
    • What the malicious LiteLLM packages actually did (credential harvesting, persistence, lateral movement)
    • The scale of impact given LiteLLM’s widespread adoption
    • Why supply chain attacks are no longer theoretical–and no longer nation-state exclusive
    • How AI is lowering the barrier to entry for attackers
    • Why this wasn’t really an “AI vulnerability”–but an infrastructure failure
    • The growing risk of automated, agent-driven attack discovery

    This episode highlights a critical reality: the biggest risks in AI systems aren’t always in the models–they’re in the pipelines, dependencies, and infrastructure surrounding them.

    📚 Key Concepts & Topics

    Supply Chain Security
    • Dependency poisoning and malicious package distribution
    • CI/CD pipeline compromise
    • Version pinning and build integrity

    Credential & Secrets Exposure
    • API keys, SSH keys, and cloud credentials in pipelines
    • Risks of centralized AI gateways like LiteLLM

    Threat Actor Techniques
    • Tag rewriting and trusted reference hijacking
    • Multi-stage malware (harvest, lateral movement, persistence)
    • Use of lookalike domains for exfiltration

    AI & Security Reality Check
    • AI as an amplifier, not the root vulnerability
    • Traditional security failures in modern AI stacks
    • Automation lowering attacker barriers

    Defensive Strategies
    • Dependency pinning and isolation (Docker, VPS)
    • Atomic credential rotation
    • Treating CI/CD tools as critical infrastructure
    • Monitoring outbound traffic from build environments


    • (00:00) - Intro & Incident Overview
    • (01:26) - What Is LiteLLM & Why It Matters
    • (03:53) - Supply Chain Scope & Why This Is Dangerous
    • (07:31) - Why These Attacks Are Getting Easier (AI + Scale)
    • (10:48) - Attack Chain Breakdown (Trivy → Checkmarx → LiteLLM)
    • (11:50) - What the Malware Did & Impact at Scale
    • (14:23) - Detection, Response & Who Was Safe

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Brian Fehrman - Host
    • Bronwen Aker - Host
    • Derek Banks - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.

    Más Menos
    20 m
  • Model Ablation | Episode 46
    Apr 2 2026

    In this episode of BHIS Presents: AI Security Ops, the team breaks down model ablation — a powerful interpretability technique that’s quickly becoming a serious concern in AI security.

    What started as a way to better understand how models work is now being used to remove safety mechanisms entirely. By identifying and disabling specific components inside a model, researchers — and attackers — can effectively strip out refusal behavior while leaving the rest of the model fully functional.

    The result? A fast, reliable way to “de-safety” AI systems without prompt engineering, fine-tuning, or significant compute.

    We dig into:
    • What model ablation is and how it works
    • The difference between ablation and pruning
    • How safety behaviors can be isolated inside model internals
    • Why refusal mechanisms are often localized (and fragile)
    • How ablation is being used as a jailbreak technique
    • Why this is more reliable than prompt-based attacks
    • Risks specific to open-weight models and public checkpoints
    • The growing “uncensored model” ecosystem
    • Why interpretability is a double-edged sword
    • Whether safety should be deeply embedded into model architecture
    • What this means for defenders and AI security strategy

    This episode explores a critical shift in AI risk: when safety controls can be surgically removed, they stop being security controls at all.

    📚 Key Concepts & Topics

    Model Internals & Interpretability
    • Neurons, attention heads, and residual stream analysis
    • Activation space and feature directions

    AI Security Risks
    • Prompt injection vs. structural attacks
    • Jailbreaking techniques and safety bypasses

    Model Access & Risk Surface
    • Open-weight vs. API-only models
    • Hugging Face and the uncensored model ecosystem

    AI Safety & Governance
    • Defense-in-depth for AI systems
    • Future standards for ablation resistance

    #AISecurity #ModelAblation #LLMSecurity #CyberSecurity #ArtificialIntelligence #AIResearch #BHIS #AIAgents #InfoSec

    • (00:00) - Intro & Show Overview
    • (01:27) - Removing AI Safety Mechanisms
    • (02:05) - What Is Model Ablation? (Technical Breakdown)
    • (04:01) - Open-Weight Models & Practical Limitations
    • (05:43) - Risks, Use Cases, and Ethical Tradeoffs
    • (07:32) - Security Implications & “You Can’t Ban Math”
    • (10:43) - Future Impact: Open Models Catching Up
    • (17:44) - Final Takeaway: Why “No” Isn’t Security

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Bronwen Aker - Host
    • Derek Banks - Host
    • Brian Fehrman - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.

    Más Menos
    18 m
  • Embedding Space Attacks | Episode 45
    Mar 26 2026

    In this episode of BHIS Presents: AI Security Ops, the team explores embedding space attacks — a lesser-known but increasingly important threat in modern AI systems — and how attackers can manipulate the mathematical foundations of how models understand data.

    Unlike prompt injection, which targets instructions, embedding attacks operate at a deeper level by influencing how data is represented, retrieved, and interpreted inside vector spaces. By subtly altering embeddings or poisoning data sources, attackers can manipulate AI behavior without ever touching the model directly.

    Through a hands-on walkthrough of a custom notebook with rich visualizations, this episode breaks down how embeddings work, why they are critical to LLM-powered systems like RAG pipelines, and how attackers can exploit them in real-world scenarios.

    We dig into:
    - What embeddings are and how AI systems convert text into numerical representations
    - How vector spaces enable similarity search and retrieval in LLM applications
    - What embedding space attacks are and why they matter for AI security
    - How small perturbations in data can drastically change model behavior
    - The risks of poisoned data in RAG and vector databases
    - How attackers can influence search results and downstream AI outputs
    - Why these attacks are subtle, hard to detect, and often overlooked
    - The role of visualization in understanding embedding behavior
    - Real-world implications for AI-powered applications and workflows
    - Defensive considerations when building with embeddings and vector stores

    This episode focuses on the foundational layer of AI systems, showing how security risks extend beyond prompts and into the underlying data representations that power modern AI.

    📚 Key Concepts Covered

    AI Foundations
    - Embeddings and vector representations
    - Similarity search and vector space reasoning

    AI Security Risks
    - Embedding space manipulation
    - Data poisoning in vector databases
    - Retrieval manipulation in RAG systems

    Applications & Impact
    - LLM-powered search and assistants
    - AI pipelines using embeddings
    - Risks in production AI systems

    #AISecurity #Embeddings #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #InfoSec

    Join the 5,000+ cybersecurity professionals on our BHIS Discord server to ask questions and share your knowledge about AI Security.
    https://discord.gg/bhis

    • (00:00) - Intro & Episode Overview
    • (01:39) - What Are Embeddings? (AI Only Understands Numbers)
    • (03:44) - The Embedding Process (Text → Vectors)
    • (07:43) - Similarity, Classification & Vector Math
    • (09:55) - Visualizing Embedding Space (2D Projection)
    • (14:29) - Classifiers
    • (15:39) - Playing Games with Information
    • (18:06) - Attack Techniques: Synonyms & Context Manipulation
    • (20:29) - Context Padding
    • (27:10) - Collision Attacks, Defenses & Final Thoughts

    Click here to watch this episode on YouTube.

    Creators & Guests
    • Brian Fehrman - Host
    • Bronwen Aker - Host
    • Derek Banks - Host

    Brought to you by:

    Black Hills Information Security

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com

    Click here to view the episode transcript.

    Más Menos
    33 m
Todavía no hay opiniones