The Secure Developer Podcast Por Snyk arte de portada

The Secure Developer

The Secure Developer

De: Snyk
Escúchala gratis

OFERTA POR TIEMPO LIMITADO. Obtén 3 meses por US$0.99 al mes. Obtén esta oferta.
Securing the future of DevOps and AI: real talk with industry leaders.2016 - 2024 Snyk Desarrollo Personal Economía Gestión Gestión y Liderazgo Éxito Personal
Episodios
  • Autonomous Identity Governance With Paul Querna
    Sep 23 2025

    Episode Summary

    Can multi-factor authentication really “solve” security, or are attackers already two steps ahead? In this episode of The Secure Developer, we sit down with Paul Querna, CTO and co-founder at ConductorOne, to unpack the evolving landscape between authentication and authorisation. In our conversation, Paul delves into the difference between authorisation and authentication, why authorisation issues have only been solved for organisations that invest properly, and why that progress has pushed attackers toward session theft and abusing standing privilege.

    Show Notes

    In this episode of The Secure Developer, host Danny Allan sits down with Paul Querna, CTO and co-founder of ConductorOne, to discuss the evolving landscape of identity and access management (IAM). The conversation begins by challenging the traditional assumption that multi-factor authentication (MFA) is a complete solution, with Paul explaining that while authentication is "solved-ish," attackers are now moving to steal sessions and exploit authorization weaknesses. He shares his journey into the identity space, which began with a realization that old security models based on firewalls and network-based trust were fundamentally broken.

    The discussion delves into the critical concept of least privilege, a core pillar of the zero-trust movement. Paul highlights that standing privilege—where employees accumulate access rights over time—is a significant risk that attackers are increasingly targeting, as evidenced by reports like the Verizon Data Breach Investigations Report. This is even more critical with the rise of AI, where agents could potentially have overly broad access to sensitive data. They explore the idea of just-in-time authorization and dynamic access control, where privileges are granted for a specific use case and then revoked, a more mature approach to security.

    Paul and Danny then tackle the provocative topic of using AI to control authorization. While they agree that AI-driven decisions are necessary to maintain user experience and business speed, they acknowledge that culturally, we are not yet ready to fully trust AI with such critical governance decisions. They discuss how AI could act as an orchestrator, making recommendations for low-risk entitlements while high-risk ones remain policy-controlled. Paul also touches on the complexity of this new world, with non-human identities, personal productivity agents, and the need for new standards like extensions to OAuth. The episode concludes with Paul sharing his biggest worries and hopes for the future. He is concerned about the speed of AI adoption outpacing security preparedness, but is excited by the potential for AI to automate away human toil, empowering IAM and security teams to focus on strategic, high-impact work that truly secures the organization.

    Links

    • ConductorOne
    • Verizon Data Breach Investigations Report
    • AWS CloudWatch
    • Snyk - The Developer Security Company

    Follow Us

    • Our Website
    • Our LinkedIn
    Más Menos
    31 m
  • Retrieval-Augmented Generation With Bob Remeika From Ragie
    Sep 16 2025

    Episode Summary

    Bob Remeika, CEO and Co-Founder of Ragie, joins host Danny Allan to demystify Retrieval-Augmented Generation (RAG) and its role in building secure, powerful AI applications. They explore the nuances of RAG, differentiating it from fine-tuning, and discuss how it handles diverse data types while mitigating performance challenges. The conversation also covers the rise of AI agents, security best practices like data segmentation, and the exciting future of AI in amplifying developer productivity.

    Show Notes

    In this episode of The Secure Developer, host Danny Allan is joined by Bob Remeika, co-founder and CEO of Ragie, a company focused on providing a RAG-as-a-Service platform for developers. The conversation dives deep into Retrieval-Augmented Generation (RAG) and its practical applications in the AI world.

    Bob explains RAG as a method for providing context to large language models (LLMs) that they have not been trained on. This is particularly useful for things like a company's internal data, such as a parental leave policy, that would be unknown to a public model. The discussion differentiates RAG from fine-tuning an LLM, highlighting that RAG doesn't require a training step, making it a simple way to start building an AI application. The conversation also covers the challenges of working with RAG, including the variety of data formats (like text, audio, and video) that need to be processed and the potential for performance slowdowns with large datasets.

    The episode also explores the most common use cases for RAG-based systems, such as building internal chatbots and creating AI-powered applications for users. Bob addresses critical security concerns, including how to manage authorization and prevent unauthorized access to data using techniques like data segmentation and metadata tagging. The discussion then moves to the concept of "agents," which Bob defines as multi-step, action-oriented AI systems. Bob and Danny discuss how a multi-step approach with agents can help mitigate hallucinations by building in verification steps. Finally, they touch on the future of AI, with Bob expressing excitement about the "super leverage" that AI provides to amplify developer productivity, allowing them to get 10x more done with a smaller team. Bob and Danny both agree that AI isn't going to replace developers, but rather make them more valuable by enabling them to be more productive.

    Links

    • Ragie - Fully Managed Multimodal RAG-as-a-Service for Developers
    • Ragie Connect
    • OpenAI
    • Gemini 2.5
    • Claude Sonnet
    • o4-mini
    • Claude
    • Claude Opus
    • Cursor
    • Snowflake
    • Snyk - The Developer Security Company

    Follow Us

    • Our Website
    • Our LinkedIn
    Más Menos
    37 m
  • Securing The Future Of AI With Dr. Peter Garraghan
    Sep 2 2025

    Episode Summary

    Machine learning has been around for decades, but as it evolves rapidly, the need for robust security grows even more urgent. Today on the Secure Developer, co-founder and CEO of Mindgard, Dr. Peter Garraghan, joins us to discuss his take on the future of AI. Tuning in, you’ll hear all about Peter’s background and career, his thoughts on deep neural networks, where we stand in the evolution of machine learning, and so much more! We delve into why he chooses to focus on security in deep neural networks before he shares how he performs security testing. We even discuss large language model attacks and why security is the responsibility of all parties within an AI organisation. Finally, our guest shares what excites him and scares him about the future of AI.

    Show Notes

    In this episode of The Secure Developer, host Danny Allan welcomes Dr. Peter Garraghan, CEO and CTO of Mindgard, a company specializing in AI red teaming. He is also a chair professor in computer science at Lancaster University, where he specializes in the security of AI systems.

    Dr. Garraghan discusses the unique challenges of securing AI systems, which he began researching over a decade ago, even before the popularization of the transformer architecture. He explains that traditional security tools often fail against deep neural networks because they are inherently random and opaque, with no code to unravel for semantic meaning. He notes that AI, like any other software, has risks—technical, economic, and societal.

    The conversation delves into the evolution of AI, from early concepts of artificial neural networks to the transformer architecture that underpins large language models (LLMs) today. Dr. Garraghan likens the current state of AI adoption to a "great sieve theory," where many use cases are explored, but only a few, highly valuable ones, will remain and become ubiquitous. He identifies useful applications like coding assistance, document summarization, and translation.

    The discussion also explores how attacks on AI are analogous to traditional cybersecurity attacks, with prompt injection being similar to SQL injection. He emphasizes that a key difference is that AI can be socially engineered to reveal information, which is a new vector of attack. The episode concludes with a look at the future of AI security, including the emergence of AI security engineers and the importance of everyone in an organization being responsible for security. Dr. Garraghan shares his biggest fear—the anthropomorphization of AI—and his greatest optimism—the emergence of exciting and useful new applications.

    Links

    • Mindgard - Automated AI Red Teaming & Security Testing
    • Snyk - The Developer Security Company

    Follow Us

    • Our Website
    • Our LinkedIn
    Más Menos
    38 m
Todavía no hay opiniones