Episodes

  • SentinelOne and AWS Shape the Future of AI Security with Purple AI - Rachel Park, Brian Mendenhall - SWN #542
    Dec 30 2025

    SentinelOne announced a series of new designations and integrations with Amazon Web Services (AWS), designed to bring the full benefits of AI security to AWS customers today. From securing GenAI usage in the workplace, to protecting AI infrastructure, to leveraging agentic AI and automation to speed investigations and incident response, SentinelOne is empowering organizations to confidently build, operate, and secure the future of AI on AWS.

    SentinelOne shares its vision for the future of AI-driven cybersecurity, defining two interlinked domains: Security for AI—protecting models, agents, and data pipelines—and AI for Security—using intelligent automation to strengthen enterprise defense. With its Human + AI approach, SentinelOne integrates generative and agentic AI into every layer of its platform. The team also unveils the next evolution of Purple AI, an agentic analyst delivering auto-investigations, hyperautomation, and instant rule creation—advancing toward truly autonomous security.

    Visit https://www.securityweekly.com/swn for all the latest episodes!

    Show Notes: https://securityweekly.com/swn-542

    38 m
  • AI-Era AppSec: Transparency, Trust, and Risk Beyond the Firewall - Felipe Zipitria, Steve Springett, Aruneesh Salhotra, Ken Huang - ASW #363
    Dec 30 2025

    In an era dominated by AI-powered security tools and cloud-native architectures, are traditional Web Application Firewalls still relevant? Join us as we speak with Felipe Zipitria, co-leader of the OWASP Core Rule Set (CRS) project. Felipe has been at the forefront of open-source security, leading the development of one of the world's most widely deployed WAF rule sets, trusted by organizations globally to protect their web applications. Felipe explains why WAFs remain a critical layer in modern defense-in-depth strategies. We'll explore what makes OWASP CRS the go-to choice for security teams, dive into the project's current innovations, and discuss how traditional rule-based security is evolving to work alongside — not against — AI.

    Segment Resources:

    • github.com/coreruleset/coreruleset
    • coreruleset.org

    The future of CycloneDX is defined by modularity, API-first design, and deeper contextual insight, enabling transparency that is not just comprehensive, but actionable. At its heart is the Transparency Exchange API, which delivers a normalized, format-agnostic model for sharing SBOMs, attestations, risks, and more across the software supply chain.

    As genAI transforms every sector of modern business, the security community faces a question: how do we protect systems we can't fully see or understand? In this fireside chat, Aruneesh Salhotra, Project Lead for OWASP AIBOM and Co-Lead of OWASP AI Exchange, discusses two groundbreaking initiatives that are reshaping how organizations approach AI security and supply chain transparency. OWASP AI Exchange has emerged as the go-to single resource for AI security and privacy, providing over 200 pages of practical advice on protecting AI and data-centric systems from threats. Through its official liaison partnership with CEN/CENELEC, the project has contributed 70 pages to ISO/IEC 27090 and 40 pages to the EU AI Act security standard, achieving OWASP Flagship project status in March 2025. Meanwhile, the OWASP AIBOM Project is establishing a comprehensive framework to provide transparency into how AI models are built, trained, and deployed, extending OWASP's mission of making security visible to the rapidly evolving AI ecosystem. This conversation explores how these complementary initiatives are addressing real-world challenges—from prompt injection and data poisoning to model provenance and supply chain risks—while actively shaping international standards and regulatory frameworks. We'll discuss concrete achievements, lessons learned from global collaboration, and the ambitious roadmap ahead as these projects continue to mature and expand their impact across the AI security landscape.

    Segment Resources:

    • https://owasp.org/www-project-aibom/
    • https://www.linkedin.com/posts/aruneeshsalhotra_owasp-ai-aisecurity-activity-7364649799800766465-DJGM/
    • https://www.youtube.com/@OWASPAIBOM
    • https://www.youtube.com/@RobvanderVeer-ex3gj
    • https://owaspai.org/

    Agentic AI introduces unique and complex security challenges that render traditional risk management frameworks insufficient. In this keynote, Ken Huang, CEO of Distributedapps.ai and a key contributor to AI security standards, outlines a new approach to manage these emerging threats. The session will present a practical strategy that integrates the NIST AI Risk Management Framework with specialized tools to address the full lifecycle of Agentic AI.

    Segment Resources:

    • aivss.owasp.org
    • https://kenhuangus.substack.com/p/owasp-aivss-the-new-framework-for
    • https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro

    This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference!

    Visit https://www.securityweekly.com/asw for all the latest episodes!

    Show Notes: https://securityweekly.com/asw-363
    1 h 7 m
  • Holiday Chat: Local AI datacenter activism, AI can't substitute good taste, and more - ESW #439
    Dec 29 2025

    For this week's episode of Enterprise Security Weekly, there wasn't a lot of time to prepare. I had to do 5 podcasts in about 8 days leading up to the holiday break, so I decided to just roll with a general chat and see how it went.

    Also, apologies for any audio quality issues: the meal I promised to make for dinner that day required a lot of prep, so I was in the kitchen for the whole episode! For reference, I made the morisqueta michoacana recipe from Rick Martinez's cookbook, Mi Cocina. I used the wrong peppers (an availability issue), so it came out green instead of red, but it was VERY delicious.

    As for the episode, we discuss what we've been up to, with Jackie sharing her experiences fighting against Meta (allegedly acting through shell companies) as it builds an AI datacenter in her town.

    We then discuss the limitations of AI that are becoming obvious and the potential of the AI bubble popping. One of the key limitations is AI's inability to apply personal experience, hold strong opinions, or exercise any sense of 'taste'. I think I shared my observation that AI is becoming a sort of 'digital junk food'. "NO AI" has become a common phrase used by creators - a source of pride that media consumers seem to be celebrating and seeking out.

    Segment Resources:

    • Kagi absolutely did NOT sponsor this episode. I have become a big fan of paying for search so that I am not the product. There are other players in this market, but I've settled on Kagi.
    • We mention Ira Glass's bit on taste, an excerpt from a longer talk he did on storytelling. The short excerpt is here, and runs less than 2 minutes.
    • The full talk is split into 4 parts and posted on a YouTube channel called "War Photography" for some reason.
    • Part 1: https://youtu.be/5pFI9UuC_fc
    • Part 2: https://youtu.be/dx2cI-2FJRs
    • Part 3: https://youtu.be/X2wLP0izeJE
    • Part 4: https://youtu.be/sp8pwkgR8
    • Finally, we bring up a talk we also discussed on episode 437: Benedict Evans' AI Eats the World

    Visit https://www.securityweekly.com/esw for all the latest episodes!

    Show Notes: https://securityweekly.com/esw-439

    1 h 14 m
  • Holiday Special Part 2: You're Gonna Click the Link - Rob Allen - SWN #541
    Dec 26 2025

    You survived the click—but now the click has evolved. In Part 2, the crew follows phishing and ransomware down the rabbit hole into double extortion, initial access brokers, cyber insurance drama, and the unsettling rise of agentic AI that can click, run scripts, and make bad decisions for you. The conversation spans ransomware economics, why paying criminals is a terrible plan with no guarantees, and how AI is turning social engineering into a whole new wild west.

    Visit https://www.securityweekly.com/swn for all the latest episodes!

    Show Notes: https://securityweekly.com/swn-541

    34 m
  • Building a Hacking Lab in 2025 - PSW #906
    Dec 25 2025

    The crew makes suggestions for building a hacking lab today! We will tackle:

    • What is recommended today to build a lab, given the latest advancements in tech
    • Hardware hacking devices and gadgets that are a must-have
    • Which operating systems should you learn
    • Virtualization technology that works well for a lab build
    • Using AI to help build your lab

    Visit https://www.securityweekly.com/psw for all the latest episodes!

    Show Notes: https://securityweekly.com/psw-906

    1 h 3 m
  • The CISO Holiday Party 2025: Leadership Lessons from the Year That Was - BSW #427
    Dec 24 2025

    Join Business Security Weekly for a roundtable-style year-in-review. The BSW hosts share the most surprising, inspiring, and humbling moments of 2025 in business security, culture, and personal growth. And a few of us might be dressed for the upcoming holiday season...

    Visit https://www.securityweekly.com/bsw for all the latest episodes!

    Show Notes: https://securityweekly.com/bsw-427

    49 m
  • Holiday Special Part 1: You're Gonna Click the Link - Rob Allen - SWN #540
    Dec 23 2025

    It's the holidays, your defenses are down, your inbox is lying to you, and yes—you're gonna click the link. In Part 1 of our holiday special, Doug White and a panel of very smart people explain why social engineering still works decades later, why training alone won't save you, and why the real job is surviving after the click. From phishing and smishing to click-fix attacks, access control disasters, and stories that prove humans remain the weakest—and most entertaining—link in security, this episode sets the stage for the attack we all know is coming.

    Visit https://www.securityweekly.com/swn for all the latest episodes!

    Show Notes: https://securityweekly.com/swn-540

    36 m
  • Modern AppSec: OWASP SAMM, AI Secure Coding, Threat Modeling & Champions - Sebastian Deleersnyder, Dustin Lehr, James Manico, Adam Shostack - ASW #362
    Dec 23 2025

    Using OWASP SAMM to assess and improve compliance with the Cyber Resilience Act (CRA) is an excellent strategy, as SAMM provides a framework for secure development practices such as secure-by-design principles and vulnerability handling.

    Segment Resources:

    • https://owaspsamm.org/
    • https://cybersecuritycoalition.be/resource/a-strategic-approach-to-product-security-with-owasp-samm/

    As genAI becomes a more popular tool in software engineering, the definition of "secure coding" is changing. This session explores how artificial intelligence is reshaping the way developers learn, apply, and scale secure coding practices — and how new risks emerge when machines start generating the code themselves. We'll dive into the dual challenge of securing both human-written and AI-assisted code, discuss how enterprises can validate AI outputs against existing security standards, and highlight practical steps teams can take to build resilience into the entire development pipeline. Join us as we look ahead to the convergence of secure software engineering and AI security — where trust, transparency, and tooling will define the future of code safety.

    Segment Resources:

    • https://manicode.com/ai/

    Understand the history of threat modeling with Adam Shostack. Learn how threat modeling has evolved with the Four Question Framework and how it can work in your organization in the wake of the AI revolution.

    Whether you're launching a formal Security Champions program or still figuring out where to start, there's one truth every security leader needs to hear: You already have allies in your org -- they're just waiting to be activated. In this session, we'll explore how identifying and empowering your internal advocates is the fastest, most sustainable way to drive security culture change. These are your early adopters: the developers, engineers, and team leads who already "get it," even if their title doesn't say "security."

    We'll unpack:

    • Why you need help from people outside the security org to actually be effective
    • Where to find your natural allies (hint: it starts with listening, not preaching)
    • How to support and energize those allies so they influence the majority
    • What behavioral science tells us about spreading change across an organization

    Segment Resources:

    • Security Champion Success Guide: https://securitychampionsuccessguide.org/
    • Related interviews/podcasts: https://www.youtube.com/playlist?list=PLPb14P8f4T1ITv3p3Y3XtKsyEAA8W526h
    • How to measure success and impact of culture change and champions: https://www.linkedin.com/pulse/from-soft-skills-hard-data-measuring-success-security-yhmse/
    • Global Community of Champions sign up: https://docs.google.com/forms/d/e/1FAIpQLScyXPAMf9M8idpDMwO4p2h5Ng8I0ffofZuY70BbmgCZNPUS5Q/viewform

    This interview is sponsored by the OWASP GenAI Security Project. Visit https://securityweekly.com/owaspappsec to watch all of CyberRisk TV's interviews from the OWASP 2025 Global AppSec Conference!

    Visit https://www.securityweekly.com/asw for all the latest episodes!

    Show Notes: https://securityweekly.com/asw-362

    1 h 8 m