Episodios

  • Retrieval-Augmented Generation With Bob Remeika From Ragie
    Sep 16 2025

    Episode Summary

    Bob Remeika, CEO and Co-Founder of Ragie, joins host Danny Allan to demystify Retrieval-Augmented Generation (RAG) and its role in building secure, powerful AI applications. They explore the nuances of RAG, differentiating it from fine-tuning, and discuss how it handles diverse data types while mitigating performance challenges. The conversation also covers the rise of AI agents, security best practices like data segmentation, and the exciting future of AI in amplifying developer productivity.

    Show Notes

    In this episode of The Secure Developer, host Danny Allan is joined by Bob Remeika, co-founder and CEO of Ragie, a company focused on providing a RAG-as-a-Service platform for developers. The conversation dives deep into Retrieval-Augmented Generation (RAG) and its practical applications in the AI world.

    Bob explains RAG as a method for providing context to large language models (LLMs) that they have not been trained on. This is particularly useful for things like a company's internal data, such as a parental leave policy, that would be unknown to a public model. The discussion differentiates RAG from fine-tuning an LLM, highlighting that RAG doesn't require a training step, making it a simple way to start building an AI application. The conversation also covers the challenges of working with RAG, including the variety of data formats (like text, audio, and video) that need to be processed and the potential for performance slowdowns with large datasets.

    The episode also explores the most common use cases for RAG-based systems, such as building internal chatbots and creating AI-powered applications for users. Bob addresses critical security concerns, including how to manage authorization and prevent unauthorized access to data using techniques like data segmentation and metadata tagging. The discussion then moves to the concept of "agents," which Bob defines as multi-step, action-oriented AI systems. Bob and Danny discuss how a multi-step approach with agents can help mitigate hallucinations by building in verification steps. Finally, they touch on the future of AI, with Bob expressing excitement about the "super leverage" that AI provides to amplify developer productivity, allowing them to get 10x more done with a smaller team. Bob and Danny both agree that AI isn't going to replace developers, but rather make them more valuable by enabling them to be more productive.

    Links

    • Ragie - Fully Managed Multimodal RAG-as-a-Service for Developers
    • Ragie Connect
    • OpenAI
    • Gemini 2.5
    • Claude Sonnet
    • o4-mini
    • Claude
    • Claude Opus
    • Cursor
    • Snowflake
    • Snyk - The Developer Security Company

    Follow Us

    • Our Website
    • Our LinkedIn
    Más Menos
    37 m
  • Securing The Future Of AI With Dr. Peter Garraghan
    Sep 2 2025

    Episode Summary

    Machine learning has been around for decades, but as it evolves rapidly, the need for robust security grows even more urgent. Today on the Secure Developer, co-founder and CEO of Mindgard, Dr. Peter Garraghan, joins us to discuss his take on the future of AI. Tuning in, you’ll hear all about Peter’s background and career, his thoughts on deep neural networks, where we stand in the evolution of machine learning, and so much more! We delve into why he chooses to focus on security in deep neural networks before he shares how he performs security testing. We even discuss large language model attacks and why security is the responsibility of all parties within an AI organisation. Finally, our guest shares what excites him and scares him about the future of AI.

    Show Notes

    In this episode of The Secure Developer, host Danny Allan welcomes Dr. Peter Garraghan, CEO and CTO of Mindgard, a company specializing in AI red teaming. He is also a chair professor in computer science at Lancaster University, where he specializes in the security of AI systems.

    Dr. Garraghan discusses the unique challenges of securing AI systems, which he began researching over a decade ago, even before the popularization of the transformer architecture. He explains that traditional security tools often fail against deep neural networks because they are inherently random and opaque, with no code to unravel for semantic meaning. He notes that AI, like any other software, has risks—technical, economic, and societal.

    The conversation delves into the evolution of AI, from early concepts of artificial neural networks to the transformer architecture that underpins large language models (LLMs) today. Dr. Garraghan likens the current state of AI adoption to a "great sieve theory," where many use cases are explored, but only a few, highly valuable ones, will remain and become ubiquitous. He identifies useful applications like coding assistance, document summarization, and translation.

    The discussion also explores how attacks on AI are analogous to traditional cybersecurity attacks, with prompt injection being similar to SQL injection. He emphasizes that a key difference is that AI can be socially engineered to reveal information, which is a new vector of attack. The episode concludes with a look at the future of AI security, including the emergence of AI security engineers and the importance of everyone in an organization being responsible for security. Dr. Garraghan shares his biggest fear—the anthropomorphization of AI—and his greatest optimism—the emergence of exciting and useful new applications.

    Links

    • Mindgard - Automated AI Red Teaming & Security Testing
    • Snyk - The Developer Security Company

    Follow Us

    • Our Website
    • Our LinkedIn
    Más Menos
    38 m
  • The Future is Now with Michael Grinich (WorkOS)
    Aug 12 2025

    Episode Summary

    Will AI replace developers? In this episode, Snyk CTO Danny Allan chats with Michael Grinich, the founder and CEO of WorkOS, about the evolving landscape of software development in the age of AI. Michael shares a fascinating analogy, comparing the shift in software engineering to the historical evolution of music, from every family having a piano to the modern era of digital creation with tools like GarageBand. They explore the concept of "vibe coding," the future of development frameworks, and how lessons from the browser wars—specifically the advent of sandboxing—can inform how we build secure AI-driven applications.

    Show Notes

    In this episode, Danny Allan, CTO at Snyk, is joined by Michael Grinich, Founder and CEO of WorkOS, to explore the profound impact of AI on the world of software development. Michael discusses WorkOS's mission to enhance developer joy by providing robust, enterprise-ready features like authentication, user management, and security, allowing developers to remain in a creative flow state. The conversation kicks off with the provocative question of whether AI will replace developers. Michael offers a compelling analogy, comparing the current shift to the historical evolution of music, from a time when a piano was a household staple to the modern era where tools like GarageBand and Ableton have democratized music creation. He argues that while the role of a software engineer will fundamentally change, it won't disappear; rather, it will enable more people to create software in entirely new ways.

    The discussion then moves into the practical and security implications of this new paradigm, including the concept of "vibe coding," where applications can be generated on the fly based on a user's description. Michael cautions that you can't "vibe code" your security infrastructure, drawing a parallel to the early, vulnerable days of web browsers before sandboxing became a standard. He predicts that a similar evolution is necessary for the AI world, requiring new frameworks with tightly defined security boundaries to contain potentially buggy, AI-generated code.

    Looking to the future, Michael shares his optimism for the emergence of open standards in the AI space, highlighting the collaborative development around the Model Context Protocol (MCP) by companies like Anthropic, OpenAI, Cloudflare, and Microsoft. He believes this trend toward openness, much like the open standards of the web (HTML, HTTP), will prevent a winner-take-all scenario and foster a more innovative and accessible ecosystem. The episode wraps up with a look at the incredible energy in the developer community and how the challenge of the next decade will be distributing this powerful new technology to every industry in a safe, secure, and trustworthy manner.

    Links

    • WorkOS - Your app, enterprise ready
    • WorkOS on YouTube
    • MIT
    • MCP Night 2025
    • Snyk - The Developer Security Company

    Follow Us

    • Our Website
    • Our LinkedIn
    Más Menos
    33 m
  • Open Authorization In The World Of AI With Aaron Parecki
    Jun 10 2025

    Episode Summary

    How do we apply the battle-tested principles of authentication and authorization to the rapidly evolving world of AI and Large Language Models (LLMs)? In this episode, we're joined by Aaron Parecki, Director of Identity Standards at Okta, to explore the past, present, and future of OAuth. We dive into the lessons learned from the evolution of OAuth 1.0 to 2.1, discuss the critical role of standards in securing new technologies, and unpack how identity frameworks can be extended to provide secure, manageable access for AI agents in enterprise environments.

    Show Notes

    In this episode, host Danny Allan is joined by a very special guest, Aaron Parecki, the Director of Identity Standards at Okta, to discuss the critical intersection of identity, authorization, and the rise of artificial intelligence. Aaron begins by explaining the history of OAuth, which was created to solve the problem of third-party applications needing access to user data without the user having to share their actual credentials. This foundational concept of delegated access has become ubiquitous, but as technology evolves, so do the challenges.

    Aaron walks us through the evolution of the OAuth standard, from the limitations of OAuth 1 to the flexibility and challenges of OAuth 2, such as the introduction of bearer tokens. He explains how the protocol was intentionally designed to be extensible, allowing for later additions like OpenID Connect to handle identity and DPoP to enhance security by proving possession of a token. This modular design is why he is now working on OAuth 2.1—a consolidation of best practices—instead of a complete rewrite.

    The conversation then shifts to the most pressing modern challenge: securing AI agents and LLMs that need to interact with multiple services on a user's behalf. Aaron details the new "cross-app access" pattern he is working on, which places the enterprise Identity Provider (IDP) at the center of these interactions. This approach gives enterprise administrators crucial visibility and control over how data is shared between applications, solving a major security and management headache. For developers building in this space today, Aaron offers practical advice: leverage individual user permissions through standard OAuth flows rather than creating over-privileged service accounts.

    Links

    • Okta
    • OpenID Foundation
    • IETF
    • The House Files PDX (YouTube Channel)
    • WIMSE
    • AuthZEN Working Group
    • aaronpk on GitHub
    • Snyk - The Developer Security Company

    Follow Us

    • Our Website
    • Our LinkedIn
    Más Menos
    36 m
  • The Evolution Of Platform Engineering With Massdriver CEO Cory O’Daniel
    May 27 2025
    Episode SummaryDive into the ever-evolving world of platform engineering with Cory O’Daniel, CEO and co-founder of Massdriver. This episode explores the journey of DevOps, the challenges of building and scaling infrastructure, and the crucial role of creating effective abstractions to empower developers. Cory shares his insights on the shift towards platform engineering as a means to build more secure and efficient software by default.Show NotesIn this episode of The Secure Developer, host Danny Allan sits down with Cory O’Daniel, CEO and co-founder of Massdriver, to discuss the dynamic landscape of platform engineering. Cory, a seasoned software engineer and first-time CEO, shares his extensive experience in the Infrastructure as Code (IaC) space, tracing his journey from early encounters with EC2 to founding Massdriver. He offers candid advice for developers aspiring to become CEOs, emphasizing the importance of passion and early customer engagement. The conversation delves into the evolution of DevOps over the past two decades, highlighting the constant changes in how software is run, from mainframes to serverless containers and now AI. Cory argues that the true spirit of DevOps lies in operations teams producing products that developers can easily use. He points out the challenge of scaling operations expertise, suggesting that IT and Cloud practices need to mature in software development to create better abstractions for developers, rather than expecting developers to become infrastructure experts. A significant portion of the discussion focuses on the current state of abstractions in IaC. Cory contends that existing public abstractions, like open-source Terraform modules, are often too generic and don't account for specific business logic, security, or compliance requirements. He advocates for operations teams building their own prescriptive modules that embed organizational standards, effectively shifting security left by design rather than by burdening developers. The episode also touches upon the potential and limitations of AI in the operations space, with Cory expressing skepticism about AI's current ability to handle the contextual complexities of infrastructure without significant, organization-specific training data. Finally, Cory shares his optimism for the future of platform engineering, viewing it as a return to the original intentions of DevOps, where operations teams ship software with ingrained security and compliance, leading to more secure systems by default.LinksMassDriverAnsibleChefTerraformDevOps is BullshitElephant in the CloudDockerPostgresOpenTofuHelmRedisElixirSnyk - The Developer Security Company Follow UsOur WebsiteOur LinkedIn
    Más Menos
    40 m
  • The Future Of API Security With FireTail’s Jeremy Snyder
    May 13 2025

    Episode Summary

    Jeremy Snyder is the co-founder and CEO of FireTail, a company that enables organizations to adopt AI safely without sacrificing speed or innovation. In this conversation, Jeremy shares his deep expertise in API and AI security, highlighting the second wave of cloud adoption and his pivotal experiences at AWS during key moments in its growth from startup onwards.

    Show Notes

    In this episode of The Secure Developer, host Danny Allan sits down with Jeremy Snyder, the Co-founder and CEO of FireTail, to unravel the complexities of API security and explore its critical intersection with the burgeoning field of Artificial Intelligence. Jeremy brings a wealth of experience, tracing his journey from early days in computational linguistics and IT infrastructure, through a pivotal period at AWS during its startup phase, to eventually co-founding FireTail to address the escalating challenges in API security driven by modern, decoupled software architectures.

    The conversation dives deep into the common pitfalls and crucial best practices for securing APIs. Jeremy clearly distinguishes between authentication (verifying identity) and authorization (defining permissions), emphasizing that failures in authorization are a leading cause of API-related data breaches. He sheds light on vulnerabilities like Broken Object-Level Authorization (BOLA), explaining how seemingly innocuous practices like using sequential integer IDs can expose entire datasets if server-side checks are missed. The discussion also touches on the discoverability of backend APIs and the persistent challenges surrounding multi-factor authentication, including the human element in security weaknesses like SIM swapping.

    Looking at current trends, Jeremy shares insights from FireTail's ongoing research, including their annual "State of API Security" report, which has uncovered novel attack vectors such as attempts to deploy malware via API calls. A significant portion of the discussion focuses on the new frontier of AI security, where APIs serve as the primary conduit for interaction—and potential exploitation. Jeremy details how AI systems and LLM integrations introduce new risks, citing a real-world example of how a vulnerability in an AI's web crawler API could be leveraged for DDoS attacks. He speculates on the future evolution of APIs, suggesting that technologies like GraphQL might become more prevalent to accommodate the non-deterministic and data-hungry nature of AI agents. Despite the evolving threats, Jeremy concludes with an optimistic view, noting that the gap between business adoption of new technologies and security teams' responses is encouragingly shrinking, leading to more proactive and integrated security practices.

    Links

    • FireTail
    • Rapid7
    • Snyk - The Developer Security Company

    Follow Us

    • Our Website
    • Our LinkedIn
    Más Menos
    38 m
  • The Case For Steward Ownership And Open Source With Melanie Rieback
    Apr 29 2025

    Episode Summary

    Is the traditional Silicon Valley startup model harming the security industry? In this episode of The Secure Developer, Danny Allan talks with Melanie Rieback, founder of Radically Open Security, about shaking up the industry with nonprofit business models. Tuning in, you’ll learn about the inner workings of Radically Open Security as a non-profit organization and the positive impact its donations have had on the open source ecosystem.

    We discuss the benefits of a steward-ownership business model, why it pairs so well with open source, and its power to reform venture capital and align incentives with long-term sustainability. For those interested in diving deeper, Melanie shares resources from her startup incubator, Nonprofit Ventures, and her free online Post Growth Entrepreneurship course. Tune in to learn why reforming our business models is vital for preserving and protecting our open source ecosystem and, by extension, security!

    Show Notes

    In this episode, Snyk CTO Danny Allan chats with Dr. Melanie Rieback, founder of Radically Open Security, about her journey from academia and pen testing to founding a cybersecurity company with a radically different business model. Melanie shares the motivations behind creating a not-for-profit organization that donates 90% of its profits to the NLnet Foundation, supporting open source and digital rights initiatives. They discuss the discontent with traditional cybersecurity business practices, including lack of transparency and ethical concerns like selling zero-days.

    Melanie explains Radically Open Security's structure, operating as a collective primarily using contractors, and how this model has allowed them to grow to 50 people while serving major clients and offering pro-bono work for nonprofits and critical open source projects like the Tor Project and Tails. The conversation then broadens to discuss alternative business models like steward ownership, where profit rights are separated from voting rights, aiming to lock value within the company and prevent mission drift often caused by traditional VC funding.

    They explore the concept of "Post Growth Entrepreneurship," which Melanie teaches, focusing on non-extractive business models and reforming finance itself. The discussion touches upon whether the tech industry, particularly open source, is moving towards more sustainable and ethical models, citing examples like Signal, Proton, Mastodon, and Mozilla. Melanie emphasizes that the culture of open source developers is often inherently altruistic, not greedy, but can be compromised by traditional funding systems. Finally, Melanie offers resources for listeners interested in learning more about these alternative models.

    Links

    • Radically Open Security
    • Radically Open Security on LinkedIn
    • NLnet Foundation
    • Nonprofit Ventures
    • Post Growth Entrepreneurship Course
    • Snyk - The Developer Security Company

    Follow Us

    • Our Website
    • Our LinkedIn
    Más Menos
    44 m
  • Advancing AppSec With AI With Akira Brand
    Apr 15 2025

    Episode Summary

    In this episode of The Secure Developer, Danny Allan sits down with Akira Brand, AVP of Application Security at PRA Group, to explore the evolving landscape of application security and AI. Akira shares her unconventional journey from opera to cybersecurity, discusses why AppSec is fundamentally a customer service role and breaks down how AI is reshaping security workflows. Tune in to hear insights on integrating security seamlessly into development, AI’s role in secure coding, and the future of AppSec in a rapidly shifting tech landscape.

    Show Notes

    In this engaging episode, The Secure Developer welcomes Akira Brand, AVP of Application Security at PRA Group, for an in-depth discussion on the intersection of AI and application security. Akira’s unique background in opera and stage direction offers a fresh perspective on fostering collaboration in security teams and influencing organizational culture.

    Key Topics Covered:

    • From Opera to AppSec: Akira shares her journey from classical music to cybersecurity and how her experience in stage direction translates into leading security teams.
    • AppSec as a Customer Service Role: The importance of serving software engineers by providing security solutions that fit seamlessly into their workflows.
    • The ‘Give Them the Pickle’ Approach: How meeting developers where they are and educating them can lead to better security adoption.
    • AI’s Role in Secure Development: How AI-driven tools are transforming the way security is integrated into the software development lifecycle.
    • Challenges in Security Culture: Why security is still an afterthought in many development processes and how to change that mindset.
    • Future of AI in Security: The promise and risks of AI-assisted security tools and the need for standards to keep pace with rapid technological advancements.

    Links

    • PRA Group
    • Turing School
    • Brian Holt
    • Frontend Masters
    • Resilia
    • Snyk - The Developer Security Company

    Follow Us

    • Our Website
    • Our LinkedIn
    Más Menos
    35 m