AI Security Podcast


By: TechRiot.io

The #1 source for AI Security insights for CISOs and cybersecurity leaders. Hosted by two former CISOs, the AI Security Podcast provides expert, no-fluff discussions on the security of AI systems and the use of AI in cybersecurity. Whether you're a CISO, security architect, engineer, or cyber leader, you'll find practical strategies, emerging risk analysis, and real-world implementations without the marketing noise. These conversations help cybersecurity leaders make informed decisions and lead with confidence in the age of AI.
Episodes
  • How Lovable Manages 100+ Daily Changes, Vibe Coding & Shadow AI
    Apr 2 2026

    What does it actually look like to run security inside one of Europe's fastest-growing AI companies? In this episode, recorded live at the Munich Cybersecurity Conference (MCSC), Ashish Rajan sat down with Igor Andriushchenko, Head of Security at Lovable, the AI-native platform that lets anyone build and ship full applications without writing a line of code.

    Igor joined Lovable as employee #40. Six months later, the team had grown to 150+. Developers were running multi-agent workflows overnight, PMs were pushing pull requests, and the volume of code changes was hitting numbers that challenged every traditional security process they had. This is the security story nobody talks about in AI-native scale-ups, and Igor lived it.

    In this episode, they cover:
    · why your CI/CD pipeline is being load-tested to destruction by AI-generated churn
    · how to use PAM (Privileged Access Management) as a practical guardrail for AI agents that can't escalate to production secrets
    · why the allow-list vs deny-list logic is reversed for AI agents compared to traditional security
    · the overlooked SCA supply chain risk when AI recommends unmaintained or hallucinated packages
    · why old SAST tools are failing and what the new generation of agentic code scanners does differently
    · how to identify and manage advanced, intermediate, and basic AI users in your org without killing their productivity
    · the practical "crawl, walk, run" approach to building internal AI security tooling that actually sticks

    Igor also shares how Lovable's security team built an incident response AI skill, uses reachability analysis agents to triage SCA findings for enterprise customers, and why the real investment isn't in the AI model; it's in the skills ecosystem and data connections underneath.


    Questions asked:

    (00:00) Introduction: Securing the AI Workforce
    (03:50) Who is Igor Andriushchenko? (Head of Security, Lovable)
    (06:10) The Churn of Change: Why AI Will Break Your CI/CD
    (10:40) The FOMO Problem: Don't Force AI Adoption
    (11:50) The "Air Pocket" Strategy for Safe AI Experimentation
    (14:00) The Context Paradox: More Access = Dumber AI
    (17:40) Managing Agent Sprawl and "Advanced" Users
    (19:40) Why You Must Treat AI Agents Like Human Developers (PAM Controls)
    (22:30) The Need for AI Telemetry & Visibility
    (27:50) Blurring Roles: When PMs Become Developers
    (31:30) Why You Must Use "Deny Lists" Instead of "Allow Lists" for AI
    (34:30) AI SAST vs. Traditional SAST: Finding Business Logic Flaws
    (39:40) Supply Chain Risks: When AI Recommends Dead Libraries
    (45:40) Building Custom AI Skills for Incident Response
    (52:50) Fun Questions: Battlefield, Team Culture, and Comfort Food

    57 m
  • Questions Every CISO Must Ask AI Security Vendors
    Mar 18 2026

    RSA Conference 2026 is here and the AI agent hype machine is louder than ever. In this episode, Ashish and Caleb cut through the noise and arm CISOs, practitioners, and security teams with a clear-eyed view of what's actually happening in AI security this year.

    From the vendor floor at RSAC to the future of internal security automation, Caleb and Ashish discuss why 70% of "AI agent security" vendors can't even define what an agent is, why security team consolidation around 2–3 major platforms (plus internal AI capability) may be the most underrated CISO strategy of 2026, and why the window from vulnerability disclosure to live exploitation has collapsed from months to under two days.

    They also explore the emerging idea of a centralised AI automation function inside security teams, and why the future of security isn't buying more point solutions; it's building internal AI capability on top of a standardised vendor stack.


    Questions asked:

    (00:00) Introduction: Preparing for RSAC 2026
    (03:50) The Year of the "AI Agent" Marketing Hype
    (06:50) The Secret to AI Context: Enterprise Search (Glean)
    (09:50) Why Your SOC Needs a Centralized AI Platform Team
    (13:30) The #1 Question to Ask Vendors at RSAC: API Access
    (16:50) The Myth of MCP (Model Context Protocol) as the Gold Standard
    (20:50) Why RSAC is Too Noisy: Vibe Coding & 1,000 New Startups
    (22:30) Is Capital Raised the Only Signal of Trust?
    (24:50) Prediction: CISOs Will Fire 500 Vendors and Consolidate
    (30:50) The Build vs. Buy Debate for AI Security Features
    (35:50) Surviving RSAC: Sorting Signal from Noise
    (38:50) The Problem with "End-to-End" AI Agent Claims
    (41:50) Are AI-Driven Attacks Real?
    (44:50) The Zero-Day Clock: From 5 Months to 2 Days
    (48:50) RSAC Events: Live Recordings and CISO Panels


    Resources spoken about during the episode:

    RSAC 2026

    BSidesSF 2026

    Glean

    Zero Day Clock

    51 m
  • Will Foundation Models Kill Security Startups?
    Mar 5 2026

    Did Anthropic just kill the AppSec industry? Following the announcement of Claude Code Security, a tool that finds, reasons about, and fixes code vulnerabilities, major security stocks dropped by 8%. In this episode of the AI Security Podcast, Ashish and Caleb break down the reality behind the hype. Caleb explains why using AI for SAST (Static Application Security Testing) is "a no-brainer," noting that many open-source projects and startups have already been doing exactly what Anthropic announced. We discuss why this actually validates the shift toward AI-automated remediation.

    The conversation goes deeper into the future of the cybersecurity market: Will giant foundation models start acquiring security companies? Will they offer "premium gas" (cheaper tokens) for building on their platforms? And most importantly, what does this mean for AppSec engineers whose jobs involve triaging false positives?

    Questions asked:

    (00:00) Introduction: The Claude Code Security Announcement
    (02:50) What is Claude Code Security? (Finding & Reasoning about VULNs)
    (03:50) Market Overreaction: Why Security Stocks Dropped 8%
    (05:10) Why AI-Powered SAST is Not New (OpenAI & Open Source Doing It Already)
    (07:20) Will AI Take AppSec Jobs? (Triaging False Positives)
    (09:00) "Shift Left" on Steroids: Auto-Fixing and PR Submission
    (11:30) The Threat to Legacy Vendors: Why CrowdStrike's Moat is Safe
    (14:30) Historical Context: AI is the New Calculator/Typewriter
    (18:20) The "Gasoline" Theory: Foundation Models as Fuel
    (21:00) Will Anthropic Acquire Security Startups?
    (26:30) Anthropic's Go-To-Market Strategy: Building AI SOCs
    (33:30) Startup Survival: Can Innovation Outpace Big Tech?
    (41:30) The Future of Threat Intel: Is the Legacy Moat Disappearing?
    (48:20) Negotiating with Vendors Using AI Leverage
    (53:30) Using Evals for Organizational Anomaly Detection

    1 h