Episodes

  • Inside the DARPA AI Cyber Challenge: Securing Tomorrow’s Critical Infrastructure Through AI and Healthy Competition | An RSAC Conference 2025 Conversation with Andrew Carney | On Location Coverage with Sean Martin and Marco Ciappelli
    Apr 28 2025
    During RSAC Conference 2025, Andrew Carney, Program Manager at DARPA, and (remotely via video) Dr. Kathleen Fisher, Professor at Tufts University and Program Manager for the AI Cyber Challenge (AIxCC), guide attendees through an immersive experience called Northbridge—a fictional city designed to showcase the critical role of AI in securing infrastructure through the DARPA-led AI Cyber Challenge.

    Inside Northbridge: The Stakes Are Real
    Northbridge simulates the future of cybersecurity, blending AI, infrastructure, and human collaboration. It’s not just a walkthrough — it’s a call to action. Through simulated attacks on water systems, healthcare networks, and cyber operations, visitors witness firsthand the tangible impacts of vulnerabilities in critical systems. Dr. Fisher emphasizes that the AI Cyber Challenge isn’t theoretical: the vulnerabilities competitors find and fix directly apply to real open-source software relied on by society today.

    The AI Cyber Challenge: Pairing Generative AI with Cyber Reasoning
    The AI Cyber Challenge (AIxCC) invites teams from universities, small businesses, and consortiums to create cyber reasoning systems capable of autonomously identifying and fixing vulnerabilities. Leveraging leading foundation models from Anthropic, Google, Microsoft, and OpenAI, the teams operate with tight constraints—working with limited time, compute, and LLM credits—to uncover and patch vulnerabilities at scale. Remarkably, during semifinals, teams found and fixed nearly half of the synthetic vulnerabilities, and even discovered a real-world zero-day in SQLite.

    Building Toward DEFCON Finals and Beyond
    The journey doesn’t end at RSA. As the teams prepare for the AIxCC finals at DEFCON 2025, DARPA is increasing the complexity of the challenge—and the available resources. Beyond the competition, a core goal is public benefit: all cyber reasoning systems developed through AIxCC will be open-sourced under permissive licenses, encouraging widespread adoption across industries and government sectors.

    From Competition to Collaboration
    Carney and Fisher stress that the ultimate victory isn’t in individual wins, but in strengthening cybersecurity collectively. Whether securing hospitals, water plants, or financial institutions, the future demands cooperation across public and private sectors. The Northbridge experience offers a powerful reminder: resilience in cybersecurity is built not through fear, but through innovation, collaboration, and a relentless drive to secure the systems we all depend on.

    ___________
    Guest: Andrew Carney, AI Cyber Challenge Program Manager, Defense Advanced Research Projects Agency (DARPA) | https://www.linkedin.com/in/andrew-carney-945458a6/

    Hosts:
    Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com
    Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com

    ______________________
    Episode Sponsors
    ThreatLocker: https://itspm.ag/threatlocker-r974
    Akamai: https://itspm.ag/akamailbwc
    BlackCloak: https://itspm.ag/itspbcweb
    SandboxAQ: https://itspm.ag/sandboxaq-j2en
    Archer: https://itspm.ag/rsaarchweb
    Dropzone AI: https://itspm.ag/dropzoneai-641
    ISACA: https://itspm.ag/isaca-96808
    ObjectFirst: https://itspm.ag/object-first-2gjl
    Edera: https://itspm.ag/edera-434868

    ___________
    Resources
    The DARPA AIxCC Experience at RSAC 2025 Innovation Sandbox: https://www.rsaconference.com/usa/programs/sandbox/darpa
    Learn more and catch more stories from RSAC Conference 2025 coverage: https://www.itspmagazine.com/rsac25

    ___________
    KEYWORDS
    andrew carney, kathleen fisher, marco ciappelli, sean martin, darpa, aixcc, cybersecurity, rsac 2025, defcon, ai cybersecurity, event coverage, on location, conference

    ______________________
    Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
    Want to tell your Brand Story Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf
    Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us
    28 m
  • Vibe Coding: Creativity Meets Risk in the Age of AI-Driven Development | A Conversation with Izar Tarandach | Redefining CyberSecurity with Sean Martin
    Apr 17 2025
    ⬥GUEST⬥
    Izar Tarandach, Sr. Principal Security Architect for a large media company | On LinkedIn: https://www.linkedin.com/in/izartarandach/

    ⬥HOST⬥
    Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com

    ⬥EPISODE NOTES⬥
    In this episode of Redefining CyberSecurity, host Sean Martin sits down with Izar Tarandach, Senior Principal Security Architect at a major entertainment company, to unpack a concept gaining traction across some developer circles: vibe coding.

    Vibe coding, as discussed by Izar and Sean, isn’t just about AI-assisted development—it’s about coding based on a feeling or a flow, often driven by prompts to large language models (LLMs). It’s being explored in organizations from startups to large tech companies, where the appeal lies in speed and ease: describe what you want, and the machine generates the code. But this emerging approach is raising significant concerns, particularly in security circles.

    Izar, who co-hosts the Security Table podcast with Matt Coles and Chris Romeo, calls attention to the deeper implications of vibe coding. At the heart of his concern is the risk of ignoring past lessons. Generating code through AI may feel like progress, but without understanding what’s being written or how it fits into the broader architecture, teams risk reintroducing old vulnerabilities—at scale.

    One major issue: the assumption that code generated by AI is inherently good or secure. Izar challenges that notion, reminding listeners that today’s coding models function like junior developers—they may produce working code, but they’re also prone to mistakes, hallucinations, and a lack of contextual understanding. Worse yet, organizations may begin to skip traditional checks like code reviews and secure development lifecycles, assuming the machine already got it right.

    Sean highlights a potential opportunity—if used wisely, vibe coding could allow developers to focus more on outcomes and user needs, rather than syntax and structure. But even he acknowledges that, without collaboration and proper feedback loops, it’s more of a one-way zone than a true jam session between human and machine.

    Together, Sean and Izar explore whether security leaders are aware of vibe-coded systems running in their environments—and how they should respond. Their advice: assume you already have vibe-coded components in play, treat that code with the same scrutiny as anything else, and don’t trust blindly. Review it, test it, threat model it, and hold it to the same standards.

    Tune in to hear how this new style of development is reshaping conversations about security, responsibility, and collaboration in software engineering.

    ⬥SPONSORS⬥
    LevelBlue: https://itspm.ag/attcybersecurity-3jdk3
    ThreatLocker: https://itspm.ag/threatlocker-r974

    ⬥RESOURCES⬥
    Inspiring LinkedIn Post — https://www.linkedin.com/posts/izartarandach_sigh-vibecoding-when-will-we-be-able-activity-7308105048926879744-fNMS
    Security Table Podcast: Vibe Coding: What Could Possibly Go Wrong? — https://securitytable.buzzsprout.com/2094080/episodes/16861651-vibe-coding-what-could-possibly-go-wrong
    Webinar: Secure Coding = Developer Power, An ITSPmagazine Webinar with Manicode Security — https://www.crowdcast.io/c/secure-coding-equals-developer-power-how-to-convince-your-boss-to-invest-in-you-an-itspmagazine-webinar-with-manicode-security-ad147fba034a

    ⬥ADDITIONAL INFORMATION⬥
    ✨ More Redefining CyberSecurity Podcast: 🎧 https://www.seanmartin.com/redefining-cybersecurity-podcast
    Redefining CyberSecurity Podcast on YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq
    Interested in sponsoring this show with a podcast ad placement? Learn more: 👉 https://itspm.ag/podadplc
    36 m
  • Building and Securing Intelligent Workflows: Why Your AI Strategy Needs Agentic AI Threat Modeling and a Zero Trust Mindset | A Conversation with Ken Huang | Redefining CyberSecurity with Sean Martin
    Mar 25 2025
    ⬥GUEST⬥
    Ken Huang, Co-Chair, AI Safety Working Groups at Cloud Security Alliance | On LinkedIn: https://www.linkedin.com/in/kenhuang8/

    ⬥HOST⬥
    Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On LinkedIn: https://www.linkedin.com/in/imsmartin/ | Website: https://www.seanmartin.com

    ⬥EPISODE NOTES⬥
    In this episode of Redefining CyberSecurity, host Sean Martin speaks with Ken Huang, Co-Chair of the Cloud Security Alliance (CSA) AI Working Group and author of several books, including Generative AI Security and the upcoming Agentic AI: Theories and Practices. The conversation centers on what agentic AI is, how it is being implemented, and what security, development, and business leaders need to consider as adoption grows.

    Agentic AI refers to systems that can autonomously plan, execute, and adapt tasks using large language models (LLMs) and integrated tools. Unlike traditional chatbots, agentic systems handle multi-step workflows, delegate tasks to specialized agents, and dynamically respond to inputs using tools like vector databases or APIs. This creates new possibilities for business automation but also introduces complex security and governance challenges.

    Practical Applications and Emerging Use Cases
    Ken outlines current use cases where agentic AI is being applied: startups using agentic models to support scientific research, enterprise tools like Salesforce’s AgentForce automating workflows, and internal chatbots acting as co-workers by tapping into proprietary data. As agentic AI matures, these systems may manage travel bookings, orchestrate ticketing operations, or even assist in robotic engineering—all with minimal human intervention.

    Implications for Development and Security Teams
    Development teams adopting agentic AI frameworks—such as AutoGen or CrewAI—must recognize that most do not come with out-of-the-box security controls. Ken emphasizes the need for SDKs that add authentication, monitoring, and access controls. For IT and security operations, agentic systems challenge traditional boundaries; agents often span cloud environments, demanding a zero-trust mindset and dynamic policy enforcement.

    Security leaders are urged to rethink their programs. Agentic systems must be validated for accuracy, reliability, and risk—especially when multiple agents operate together. Threat modeling and continuous risk assessment are no longer optional. Enterprises are encouraged to start small: deploy a single-agent system, understand the workflow, validate security controls, and scale as needed.

    The Call for Collaboration and Mindset Shift
    Agentic AI isn’t just a technological shift—it requires a cultural one. Huang recommends cross-functional engagement and alignment with working groups at CSA, OWASP, and other communities to build resilient frameworks and avoid duplicated effort. Zero Trust becomes more than an architecture—it becomes a guiding principle for how agentic AI is developed, deployed, and defended.

    ⬥SPONSORS⬥
    LevelBlue: https://itspm.ag/attcybersecurity-3jdk3
    ThreatLocker: https://itspm.ag/threatlocker-r974

    ⬥RESOURCES⬥
    BOOK | Generative AI Security: https://link.springer.com/book/10.1007/978-3-031-54252-7
    BOOK | Agentic AI: Theories and Practices, to be published in August by Springer: https://link.springer.com/book/9783031900259
    BOOK | The Handbook of CAIO (with a business focus): https://www.amazon.com/Handbook-Chief-AI-Officers-Revolution/dp/B0DFYNXGMR
    More books at Amazon, including titles published by Cambridge University Press and John Wiley: https://www.amazon.com/stores/Ken-Huang/author/B0D3J7L7GN
    Video Course Mentioned During This Episode: "Generative AI for Cybersecurity" by EC-Council, rated an average of 5 stars by 255 people: https://codered.eccouncil.org/course/generative-ai-for-cybersecurity-course?logged=false
    Podcast: The 2025 OWASP Top 10 for LLMs: What’s Changed and Why It Matters | A Conversation with Sandy Dunn and Rock Lambros

    ⬥ADDITIONAL INFORMATION⬥
    ✨ More Redefining CyberSecurity Podcast: 🎧 https://www.seanmartin.com/redefining-cybersecurity-podcast
    Redefining CyberSecurity Podcast on YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq
    Interested in sponsoring this show with a podcast ad placement? Learn more: 👉 https://itspm.ag/podadplc
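    The "start small" advice in the notes above (one agent, an explicit scope, validated controls) can be illustrated with a short sketch. This is a hypothetical example, not code from AutoGen, CrewAI, or any SDK mentioned in the episode; the tool names, scopes, and planner stub are invented, and a real system would wire the planner to an actual LLM and a proper policy engine.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Registry of tools the agent is explicitly allowed to call.
# Anything not listed here is denied by default (zero-trust posture).
ALLOWED_TOOLS: Dict[str, Callable[[str], str]] = {
    "lookup_ticket": lambda arg: f"ticket {arg}: status=open",
    "summarize_doc": lambda arg: f"summary of {arg}",
}

@dataclass
class ToolCall:
    name: str
    argument: str

def authorize(call: ToolCall, caller_scopes: set) -> bool:
    """Deny by default: the tool must be registered AND the caller must hold a scope for it."""
    return call.name in ALLOWED_TOOLS and f"tool:{call.name}" in caller_scopes

def plan_next_step(task: str) -> ToolCall:
    # Stand-in for an LLM planner; a real system would parse a model response here.
    return ToolCall(name="lookup_ticket", argument=task)

def run_single_agent(task: str, caller_scopes: set) -> str:
    call = plan_next_step(task)
    if not authorize(call, caller_scopes):
        # Stop and report instead of silently executing an unvetted action.
        return f"DENIED: agent requested unapproved tool '{call.name}'"
    result = ALLOWED_TOOLS[call.name](call.argument)
    print(f"audit: tool={call.name} arg={call.argument}")  # minimal monitoring hook
    return result

if __name__ == "__main__":
    print(run_single_agent("INC-1234", caller_scopes={"tool:lookup_ticket"}))
    print(run_single_agent("INC-5678", caller_scopes=set()))  # no scopes -> denied
```

    Running it prints one allowed tool call with an audit line and one denial, mirroring the deny-by-default, per-request authorization mindset discussed in the episode.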
    43 m
  • Detection vs. Noise: What MITRE ATT&CK Evaluations Reveal About Your Security Tools | A Conversation with Allie Mellen | Redefining CyberSecurity with Sean Martin
    Mar 17 2025
    ⬥GUEST⬥
    Allie Mellen, Principal Analyst, Forrester | On LinkedIn: https://www.linkedin.com/in/hackerxbella/

    ⬥HOST⬥
    Host: Sean Martin, Co-Founder at ITSPmagazine and Host of Redefining CyberSecurity Podcast | On ITSPmagazine: https://www.itspmagazine.com/sean-martin

    ⬥EPISODE NOTES⬥
    In this episode, Allie Mellen, Principal Analyst on the Security and Risk Team at Forrester, joins Sean Martin to discuss the latest results from the MITRE Engenuity ATT&CK Evaluations and what they reveal about detection and response technologies.

    The Role of MITRE ATT&CK Evaluations
    MITRE ATT&CK is a widely adopted framework that maps out the tactics, techniques, and procedures (TTPs) used by threat actors. Security vendors use it to improve detection capabilities, and organizations rely on it to assess their security posture. The MITRE Engenuity evaluations test how different security tools detect and respond to simulated attacks, helping organizations understand their strengths and gaps.

    Mellen emphasizes that MITRE’s evaluations do not assign scores or rank vendors, which allows security leaders to focus on analyzing performance rather than chasing a “winner.” Instead, organizations must assess the raw data to determine how well a tool aligns with their needs.

    Alert Volume and the Cost of Security Data
    One key insight from this year’s evaluation is the significant variation in alert volume among vendors. Some solutions generate thousands of alerts for a single attack scenario, while others consolidate related activity into just a handful of actionable incidents. Mellen notes that excessive alerting contributes to analyst burnout and operational inefficiencies, making alert volume a critical metric to assess.

    Forrester’s analysis includes a cost calculator that estimates the financial impact of alert ingestion into a SIEM. The results highlight how certain vendors create a massive data burden, leading to increased costs for organizations trying to balance security effectiveness with budget constraints.

    The Shift Toward Detection and Response Engineering
    Mellen stresses the importance of detection engineering, where security teams take a structured approach to developing and maintaining high-quality detection rules. Instead of passively consuming vendor-generated alerts, teams must actively refine and tune detections to align with real threats while minimizing noise.

    Detection and response should also be tightly integrated. Forrester’s research advocates linking every detection to a corresponding response playbook. By automating these processes through security orchestration, automation, and response (SOAR) solutions, teams can accelerate investigations and reduce manual workloads.

    Vendor Claims and the Reality of Security Tools
    While many vendors promote their performance in the ATT&CK Evaluations, Mellen cautions against taking marketing claims at face value. Organizations should review MITRE’s raw evaluation data, including screenshots and alert details, to get an unbiased view of how a tool operates in practice.

    For security leaders, these evaluations offer an opportunity to reassess their detection strategy, optimize alert management, and ensure their investments in security tools align with operational needs.

    For a deeper dive into these insights, including discussions on AI-driven correlation, alert fatigue, and security team efficiency, listen to the full episode.

    ⬥SPONSORS⬥
    LevelBlue: https://itspm.ag/attcybersecurity-3jdk3
    ThreatLocker: https://itspm.ag/threatlocker-r974

    ⬥RESOURCES⬥
    Inspiring Post: https://www.linkedin.com/posts/hackerxbella_go-beyond-the-mitre-attck-evaluation-to-activity-7295460112935075845-N8GW/
    Blog | Go Beyond The MITRE ATT&CK Evaluation To The True Cost Of Alert Volumes: https://www.forrester.com/blogs/go-beyond-the-mitre-attck-evaluation-to-the-true-cost-of-alert-volumes/

    ⬥ADDITIONAL INFORMATION⬥
    ✨ More Redefining CyberSecurity Podcast: 🎧 https://www.itspmagazine.com/redefining-cybersecurity-podcast
    Redefining CyberSecurity Podcast on YouTube: 📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq
    Interested in sponsoring this show with a podcast ad placement? Learn more: 👉 https://itspm.ag/podadplc
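    The alert-volume cost point lends itself to a back-of-the-envelope calculation. The sketch below is not Forrester's calculator; the per-alert size, per-GB price, and alert counts are placeholder assumptions chosen only to show how a tool's alert volume translates into a monthly SIEM ingestion estimate.

```python
# Back-of-the-envelope estimate of what a tool's alert volume costs once it lands in a SIEM.
# Every number here is an illustrative placeholder, not a figure from Forrester's calculator.

def monthly_ingestion(alerts_per_day: float,
                      kb_per_alert: float = 5.0,   # assumed average size of one alert record
                      cost_per_gb: float = 5.0,    # assumed SIEM ingestion price per GB
                      days: int = 30):
    """Return (GB ingested per month, estimated cost in USD)."""
    gb = alerts_per_day * kb_per_alert * days / (1024 * 1024)
    return gb, gb * cost_per_gb

if __name__ == "__main__":
    # A noisy tool vs. one that consolidates related activity into fewer incidents.
    for label, alerts in [("noisy vendor", 50_000), ("consolidating vendor", 1_500)]:
        gb, usd = monthly_ingestion(alerts)
        print(f"{label}: {alerts} alerts/day -> ~{gb:.1f} GB/month -> ~${usd:,.2f}/month")
```

    The point is not the absolute dollar figures (which depend entirely on the assumptions) but that ingestion cost scales linearly with alert volume, which is why consolidation shows up so clearly in the evaluation data.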
    36 m
  • The Cyber Resilience Act: How the EU is Reshaping Digital Product Security | A Conversation with Sarah Fluchs | Redefining CyberSecurity with Sean Martin
    Mar 11 2025
    ⬥GUEST⬥
    Sarah Fluchs, CTO at admeritia | CRA Expert Group at EU Commission | On LinkedIn: https://www.linkedin.com/in/sarah-fluchs/

    ⬥HOST⬥
    Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martin

    ⬥EPISODE NOTES⬥
    The European Commission’s Cyber Resilience Act (CRA) introduces a regulatory framework designed to improve the security of digital products sold within the European Union. In this episode of Redefining CyberSecurity, host Sean Martin speaks with Sarah Fluchs, Chief Technology Officer at admeritia and a member of the CRA expert group at the EU Commission. Fluchs, who has spent her career in industrial control system cybersecurity, offers critical insights into what the CRA means for manufacturers, retailers, and consumers.

    A Broad Scope: More Than Just Industrial Automation
    Unlike previous security regulations that focused on specific sectors, the CRA applies to virtually all digital products. Fluchs emphasizes that if a device is digital and sold in the EU, it likely falls under the CRA’s requirements. From smartwatches and baby monitors to firewalls and industrial control systems, the regulation covers a wide array of consumer and business-facing products.

    The CRA also extends beyond just hardware—software and services required for product functionality (such as cloud-based components) are also in scope. This broad application is part of what makes the regulation so impactful. Manufacturers now face mandatory cybersecurity requirements that will shape product design, development, and post-sale support.

    What the CRA Requires
    The CRA introduces mandatory cybersecurity standards across the product lifecycle. Manufacturers will need to:
    • Ensure products are free from known, exploitable vulnerabilities at the time of release.
    • Implement security by design, considering cybersecurity from the earliest stages of product development.
    • Provide security patches for the product’s defined lifecycle, with a minimum of five years unless justified otherwise.
    • Maintain a vulnerability disclosure process, ensuring consumers and authorities are informed of security risks.
    • Include cybersecurity documentation, providing detailed security instructions to users.

    Fluchs notes that these requirements align with established security best practices. For businesses already committed to cybersecurity, the CRA should feel like a structured extension of what they are already doing, rather than a disruptive change.

    Compliance Challenges: No Detailed Checklist Yet
    One of the biggest concerns among manufacturers is the lack of detailed compliance guidance. While other EU regulations provide extensive technical specifications, the CRA’s security requirements span just one and a half pages. This ambiguity is intentional—it allows flexibility across different industries—but it also creates uncertainty.

    To address this, the EU will introduce harmonized standards to help manufacturers interpret the CRA. However, with tight deadlines, many of these standards may not be ready before enforcement begins. As a result, companies will need to conduct their own cybersecurity risk assessments and demonstrate due diligence in securing their products.

    The Impact on Critical Infrastructure and Industrial Systems
    While the CRA is not specifically a critical infrastructure regulation, it has major implications for industrial environments. Operators of critical systems, such as utilities and manufacturing plants, will benefit from stronger security in the components they rely on.

    Fluchs highlights that many security gaps in industrial environments stem from weak product security. The CRA aims to fix this by ensuring that manufacturers, rather than operators, bear the responsibility for secure-by-design components. This shift could significantly reduce cybersecurity risks for organizations that rely on complex supply chains.

    A Security Milestone: Holding Manufacturers Accountable
    The CRA represents a fundamental shift in cybersecurity responsibility. For the first time, manufacturers, importers, and retailers must guarantee the security of their products or risk being banned from selling in the EU.

    Fluchs points out that while the burden of compliance is significant, the benefits for consumers and businesses will be substantial. Security-conscious companies may even gain a competitive advantage, as customers start to prioritize products that meet CRA security standards.

    For those in the industry wondering how strictly the EU will enforce compliance, Fluchs reassures that the goal is not to punish manufacturers for small mistakes. Instead, the EU Commission aims to improve cybersecurity without unnecessary bureaucracy.

    The Bottom Line
    The Cyber Resilience Act is set to reshape cybersecurity expectations for digital products. While manufacturers face new compliance challenges, consumers and businesses...
    44 m
  • Hackers, Policy, and the Future of Cybersecurity: Inside The Hackers’ Almanack from DEF CON and the Franklin Project | A Conversation with Jake Braun | Redefining CyberSecurity with Sean Martin
    Mar 3 2025
    ⬥GUEST⬥
    Jake Braun, Acting Principal Deputy National Cyber Director, The White House | On LinkedIn: https://www.linkedin.com/in/jake-braun-77372539/

    ⬥HOST⬥
    Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martin

    ⬥EPISODE NOTES⬥
    Cybersecurity is often framed as a battle between attackers and defenders, but what happens when hackers take on a different role—one of informing policy, protecting critical infrastructure, and even saving lives? That’s the focus of this episode of Redefining CyberSecurity, where host Sean Martin speaks with Jake Braun, former Acting Principal Deputy National Cyber Director at the White House and current Executive Director of the Cyber Policy Initiative at the University of Chicago.

    Braun discusses The Hackers’ Almanack, a project developed in partnership with DEF CON and the Franklin Project to document key cybersecurity findings that policymakers, industry leaders, and technologists should be aware of. This initiative captures some of the most pressing security challenges emerging from DEF CON’s research community and translates them into actionable insights that could drive meaningful policy change.

    DEF CON, The Hackers’ Almanack, and the Franklin Project
    DEF CON, one of the world’s largest hacker conferences, brings together tens of thousands of security researchers each year. While the event is known for its groundbreaking technical discoveries, Braun explains that too often these findings fail to make their way into the hands of policymakers who need them most. That’s why The Hackers’ Almanack was created—to serve as a bridge between the security research community and decision-makers who shape regulations and national security strategies.

    This effort is an extension of the Franklin Project, named after Benjamin Franklin, who embodied the intersection of science and civics. The initiative includes not only The Hackers’ Almanack but also a volunteer-driven cybersecurity support network for under-resourced water utilities, a critical infrastructure sector under increasing attack.

    Ransomware: Hackers Filling the Gaps Where Governments Have Struggled
    One of the most striking sections of The Hackers’ Almanack examines the state of ransomware. Despite significant government efforts to disrupt ransomware groups, attacks remain as damaging as ever. Braun highlights the work of security researcher Vangelis Stykas, who successfully infiltrated ransomware gangs—not to attack them, but to gather intelligence and warn potential victims before they were hit.

    While governments have long opposed private-sector hacking in retaliation against cybercriminals, Braun raises an important question: should independent security researchers be allowed to operate in this space if they can help prevent attacks? This isn’t just about hacktivism—it’s about whether traditional methods of law enforcement and national security are enough to combat the ransomware crisis.

    AI Security: No Standards, No Rules, Just Chaos
    Artificial intelligence is dominating conversations in cybersecurity, but according to Braun, the industry still hasn’t figured out how to secure AI effectively. DEF CON’s AI Village, which has been studying AI security for years, made a bold statement: AI red teaming, as it exists today, lacks clear definitions and standards. Companies are selling AI security assessments with no universally accepted benchmarks, leaving buyers to wonder what they’re really getting.

    Braun argues that industry leaders, academia, and government must quickly come together to define what AI security actually means. Are we testing AI applications? The algorithms? The data sets? Without clarity, AI red teaming risks becoming little more than a marketing term, rather than a meaningful security practice.

    Biohacking: The Blurry Line Between Innovation and Bioterrorism
    Perhaps the most controversial section of The Hackers’ Almanack explores biohacking and its potential risks. Researchers at the Four Thieves Vinegar Collective demonstrated how AI and 3D printing could allow individuals to manufacture vaccines and medical devices at home—at a fraction of the cost of commercial options. While this raises exciting possibilities for healthcare accessibility, it also raises serious regulatory and ethical concerns.

    Current laws classify unauthorized vaccine production as bioterrorism, but Braun questions whether that definition should evolve. If underserved communities have no access to life-saving treatments, should they be allowed to manufacture their own? And if so, how can regulators ensure safety without stifling innovation?

    A Call to Action
    The Hackers’ Almanack isn’t just a technical report—it’s a call for governments, industry leaders, and the security community to rethink how we approach cybersecurity, technology policy, and even ...
    41 m
  • The 2025 OWASP Top 10 for LLMs: What’s Changed and Why It Matters | A Conversation with Sandy Dunn and Rock Lambros | Redefining CyberSecurity with Sean Martin
    Feb 13 2025
    ⬥GUESTS⬥
    Sandy Dunn, Consultant, Artificial Intelligence & Cybersecurity; Adjunct Professor, Institute for Pervasive Security, Boise State University | On LinkedIn: https://www.linkedin.com/in/sandydunnciso/
    Rock Lambros, CEO and Founder of RockCyber | On LinkedIn: https://www.linkedin.com/in/rocklambros/

    ⬥HOST⬥
    Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martin

    View This Show's Sponsors

    ⬥EPISODE NOTES⬥
    The rise of large language models (LLMs) has reshaped industries, bringing both opportunities and risks. The latest OWASP Top 10 for LLMs aims to help organizations understand and mitigate these risks. In this episode of Redefining CyberSecurity, host Sean Martin sat down with Sandy Dunn and Rock Lambros to discuss the latest updates to this essential security framework.

    The OWASP Top 10 for LLMs: What It Is and Why It Matters
    OWASP has long been a trusted source for security best practices, and its LLM-specific Top 10 is designed to guide organizations in identifying and addressing key vulnerabilities in AI-driven applications. This initiative has rapidly gained traction, becoming a reference point for AI security governance, testing, and implementation. Organizations developing or integrating AI solutions are now evaluating their security posture against this list, ensuring safer deployment of LLM technologies.

    Key Updates for 2025
    The 2025 iteration of the OWASP Top 10 for LLMs introduces refinements and new focus areas based on industry feedback. Some categories have been consolidated for clarity, while new risks have been added to reflect emerging threats.
    • System Prompt Leakage (New) – Attackers may manipulate LLMs to extract system prompts, potentially revealing sensitive operational instructions and security mechanisms.
    • Vector and Embedding Risks (New) – Security concerns around vector databases and embeddings, which can lead to unauthorized data exposure or manipulation.
    Other notable changes include reordering certain risks based on real-world impact. Prompt Injection remains the top concern, while Sensitive Information Disclosure and Supply Chain Vulnerabilities have been elevated in priority.

    The Challenge of AI Security
    Unlike traditional software vulnerabilities, LLMs introduce non-deterministic behavior, making security testing more complex. Jailbreaking attacks—where adversaries bypass system safeguards through manipulative prompts—remain a persistent issue. Prompt injection attacks, where unauthorized instructions are inserted to manipulate output, are also difficult to fully eliminate. As Dunn explains, “There’s no absolute fix. It’s an architecture issue. Until we fundamentally redesign how we build LLMs, there will always be risk.”

    Beyond Compliance: A Holistic Approach to AI Security
    Both Dunn and Lambros emphasize that organizations need to integrate AI security into their overall IT and cybersecurity strategy, rather than treating it as a separate issue. AI governance, supply chain integrity, and operational resilience must all be considered. Lambros highlights the importance of risk management over rigid compliance: “Organizations have to balance innovation with security. You don’t have to lock everything down, but you need to understand where your vulnerabilities are and how they impact your business.”

    Real-World Impact and Adoption
    The OWASP Top 10 for LLMs has already been widely adopted, with companies incorporating it into their security frameworks. It has been translated into multiple languages and is serving as a global benchmark for AI security best practices. Additionally, initiatives like HackerPrompt 2.0 are helping security professionals stress-test AI models in real-world scenarios. OWASP is also facilitating industry collaboration through working groups on AI governance, threat intelligence, and agentic AI security.

    How to Get Involved
    For those interested in contributing, OWASP provides open-access resources and welcomes participants to its AI security initiatives. Anyone can join the discussion, whether as an observer or an active contributor. As AI becomes more ingrained in business and society, frameworks like the OWASP Top 10 for LLMs are essential for guiding responsible innovation. To learn more, listen to the full episode and explore OWASP’s latest AI security resources.

    ⬥SPONSORS⬥
    LevelBlue: https://itspm.ag/attcybersecurity-3jdk3
    ThreatLocker: https://itspm.ag/threatlocker-r974

    ⬥RESOURCES⬥
    OWASP GenAI: https://genai.owasp.org/
    Link to the 2025 version of the Top 10 for LLM Applications: https://genai.owasp.org/llm-top-10/
    Getting Involved: https://genai.owasp.org/contribute/
    OWASP LLM & Gen AI Security Summit at RSAC 2025: https://genai.owasp.org/event/rsa-conference-2025/
    AI Threat Mind Map: https://github.com/subzer0girl2/AI-Threat-Mind-Map
    Guide for Preparing and Responding to Deepfake Events: https://...
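    To make the new System Prompt Leakage entry concrete, here is a minimal, hypothetical output-side check: scan a model response for long runs of the system prompt before returning it. Neither OWASP nor the guests prescribe this exact mitigation, and the prompt text, window size, and blocking behavior are illustrative only; as Dunn notes, filters like this reduce but cannot eliminate the underlying risk.

```python
# Toy guard against system prompt leakage: look for overlapping fragments of the
# system prompt in the model's output before returning it to the user.
# Prompt text, window size, and the "withhold" behavior are invented for illustration.

SYSTEM_PROMPT = (
    "You are the internal HR assistant. Never reveal salary bands. "
    "Escalate legal questions to legal@example.com."
)

def leaked_fragments(output: str, system_prompt: str, window: int = 8) -> list:
    """Return any runs of `window` consecutive system-prompt words found in the output."""
    words = system_prompt.lower().split()
    haystack = " ".join(output.lower().split())
    hits = []
    for i in range(len(words) - window + 1):
        fragment = " ".join(words[i:i + window])
        if fragment in haystack:
            hits.append(fragment)
    return hits

def guard(output: str) -> str:
    if leaked_fragments(output, SYSTEM_PROMPT):
        # Block (or redact and log) rather than returning the leaked instructions.
        return "[response withheld: possible system prompt leakage]"
    return output

if __name__ == "__main__":
    print(guard("Sure! My instructions say: never reveal salary bands. "
                "Escalate legal questions to legal@example.com."))
    print(guard("I can help you draft a job description."))
```

    The first call is withheld because the response repeats a long stretch of the system prompt verbatim; the second passes through unchanged. Paraphrased leaks would slip past a check this simple, which is exactly why the Top 10 treats this as an architectural risk rather than a filtering problem.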
    47 m
  • Shadow IT: Securing Your Organization in a World of Unapproved Apps | A Zero Trust World Conversation with Ryan Bowman | On Location Coverage with Sean Martin and Marco Ciappelli
    Feb 7 2025
    Zero Trust World 2025, hosted by ThreatLocker, is fast approaching (February 19-21), bringing together security professionals, IT leaders, and business executives to discuss the principles and implementation of Zero Trust. The event offers a unique opportunity to explore real-world security challenges and solutions.

    In a special On Location with Sean and Marco episode recorded ahead of the event, Ryan Bowman, VP of Solutions Engineering at ThreatLocker, shares insights into his upcoming session, The Dangers of Shadow IT. Shadow IT—the use of unauthorized applications and systems within an organization—poses a significant risk to security, operations, and compliance. Bowman’s session aims to shed light on this issue and equip attendees with strategies to address it effectively.

    Understanding Shadow IT and Its Risks
    Bowman explains that Shadow IT is more than just an inconvenience—it’s a growing challenge for businesses of all sizes. Employees often turn to unauthorized tools and services because they perceive them as more efficient, cost-effective, or user-friendly than the official solutions provided by IT teams. While this may seem harmless, the reality is that these unsanctioned applications create serious security vulnerabilities, increase operational risk, and complicate compliance efforts.

    One of the most pressing concerns is data security. Employees using unauthorized platforms for communication, file sharing, or project management may unknowingly expose sensitive company data to external risks. When employees leave the organization or access is revoked, data stored in these unofficial systems can remain accessible, increasing the risk of breaches or data loss.

    Procurement issues also play a role in the Shadow IT problem. Bowman highlights cases where organizations unknowingly pay for redundant software services, such as using both Teams and Slack for communication, leading to unnecessary expenses. A lack of centralized oversight results in wasted resources and fragmented security controls.

    Zero Trust as a Mindset
    A recurring theme throughout the discussion is that Zero Trust is not just a technology or a product—it’s a mindset. Bowman emphasizes that implementing Zero Trust requires organizations to reassess their approach to security at every level. Instead of inherently trusting employees or systems, organizations must critically evaluate every access request, application, and data exchange.

    This mindset shift extends beyond security teams. IT leaders must work closely with employees to understand why Shadow IT is being used and find secure, approved alternatives that still support productivity. By fostering open communication and making security a shared responsibility, organizations can reduce the temptation for employees to bypass official IT policies.

    Practical Strategies to Combat Shadow IT
    Bowman’s session will not only highlight the risks associated with Shadow IT but also provide actionable strategies to mitigate them. Attendees can expect insights into:
    • Identifying and monitoring unauthorized applications within their organization
    • Implementing policies and security controls that balance security with user needs
    • Enhancing employee engagement and education to prevent unauthorized technology use
    • Leveraging solutions like ThreatLocker to enforce security policies while maintaining operational efficiency

    Bowman also stresses the importance of rethinking traditional IT stereotypes. While security teams often impose strict policies to minimize risk, they must also ensure that these policies do not create unnecessary obstacles for employees. The key is to strike a balance between control and usability.

    Why This Session Matters
    With organizations constantly facing new security threats, understanding the implications of Shadow IT is critical. Bowman’s session at Zero Trust World 2025 will provide a practical, real-world perspective on how organizations can protect themselves without stifling innovation and efficiency.

    Beyond the technical discussions, the conference itself offers a unique chance to engage with industry leaders, network with peers, and gain firsthand experience with security tools in hands-on labs. With high-energy sessions, interactive learning opportunities, and keynotes from industry leaders like ThreatLocker CEO Danny Jenkins and Dr. Zero Trust, Chase Cunningham, Zero Trust World 2025 is shaping up to be an essential event for anyone serious about cybersecurity.

    For those interested in staying ahead of security challenges, attending Bowman’s session on The Dangers of Shadow IT is a must.

    Guest: Ryan Bowman, VP of Solutions Engineering, ThreatLocker [@ThreatLocker] | On LinkedIn: https://www.linkedin.com/in/ryan-bowman-3358a71b/

    Hosts:
    Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber] | On ITSPmagazine: https://www.itspmagazine.com/sean-martin
    Marco Ciappelli, ...
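    The first strategy in Bowman's list above, identifying unauthorized applications, can be reduced to a simple reconciliation between what is installed and what is approved. The sketch below uses a hard-coded inventory and approved-app list purely for illustration; a real implementation would pull inventory from an endpoint agent or MDM and reconcile it against procurement records.

```python
# Toy sketch: flag software present on a host but absent from the approved-application list.
# App names and the approval list are made up for illustration only.

APPROVED_APPS = {"microsoft teams", "outlook", "chrome", "threatlocker agent"}

def find_shadow_it(installed_apps: list) -> list:
    """Return installed applications that are not on the approved list."""
    return sorted(app for app in installed_apps if app.lower() not in APPROVED_APPS)

if __name__ == "__main__":
    host_inventory = ["Chrome", "Slack", "Dropbox", "Microsoft Teams"]
    for app in find_shadow_it(host_inventory):
        print(f"unapproved application detected: {app}")
```

    Even a check this simple surfaces the Slack-alongside-Teams redundancy mentioned above; the harder organizational work is deciding what to do with each finding without driving employees further into unsanctioned tools.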
    24 m