The Secure Developer Podcast Por Snyk arte de portada

The Secure Developer

The Secure Developer

De: Snyk
Escúchala gratis

Securing the future of DevOps and AI: real talk with industry leaders.2016 - 2024 Snyk Desarrollo Personal Economía Gestión Gestión y Liderazgo Éxito Personal
Episodios
  • Retrieval-Augmented Generation With Bob Remeika From Ragie
    Sep 16 2025

    Episode Summary

    Bob Remeika, CEO and Co-Founder of Ragie, joins host Danny Allan to demystify Retrieval-Augmented Generation (RAG) and its role in building secure, powerful AI applications. They explore the nuances of RAG, differentiating it from fine-tuning, and discuss how it handles diverse data types while mitigating performance challenges. The conversation also covers the rise of AI agents, security best practices like data segmentation, and the exciting future of AI in amplifying developer productivity.

    Show Notes

    In this episode of The Secure Developer, host Danny Allan is joined by Bob Remeika, co-founder and CEO of Ragie, a company focused on providing a RAG-as-a-Service platform for developers. The conversation dives deep into Retrieval-Augmented Generation (RAG) and its practical applications in the AI world.

    Bob explains RAG as a method for providing context to large language models (LLMs) that they have not been trained on. This is particularly useful for things like a company's internal data, such as a parental leave policy, that would be unknown to a public model. The discussion differentiates RAG from fine-tuning an LLM, highlighting that RAG doesn't require a training step, making it a simple way to start building an AI application. The conversation also covers the challenges of working with RAG, including the variety of data formats (like text, audio, and video) that need to be processed and the potential for performance slowdowns with large datasets.

    The episode also explores the most common use cases for RAG-based systems, such as building internal chatbots and creating AI-powered applications for users. Bob addresses critical security concerns, including how to manage authorization and prevent unauthorized access to data using techniques like data segmentation and metadata tagging. The discussion then moves to the concept of "agents," which Bob defines as multi-step, action-oriented AI systems. Bob and Danny discuss how a multi-step approach with agents can help mitigate hallucinations by building in verification steps. Finally, they touch on the future of AI, with Bob expressing excitement about the "super leverage" that AI provides to amplify developer productivity, allowing them to get 10x more done with a smaller team. Bob and Danny both agree that AI isn't going to replace developers, but rather make them more valuable by enabling them to be more productive.

    Links

    • Ragie - Fully Managed Multimodal RAG-as-a-Service for Developers
    • Ragie Connect
    • OpenAI
    • Gemini 2.5
    • Claude Sonnet
    • o4-mini
    • Claude
    • Claude Opus
    • Cursor
    • Snowflake
    • Snyk - The Developer Security Company

    Follow Us

    • Our Website
    • Our LinkedIn
    Más Menos
    37 m
  • Securing The Future Of AI With Dr. Peter Garraghan
    Sep 2 2025

    Episode Summary

    Machine learning has been around for decades, but as it evolves rapidly, the need for robust security grows even more urgent. Today on the Secure Developer, co-founder and CEO of Mindgard, Dr. Peter Garraghan, joins us to discuss his take on the future of AI. Tuning in, you’ll hear all about Peter’s background and career, his thoughts on deep neural networks, where we stand in the evolution of machine learning, and so much more! We delve into why he chooses to focus on security in deep neural networks before he shares how he performs security testing. We even discuss large language model attacks and why security is the responsibility of all parties within an AI organisation. Finally, our guest shares what excites him and scares him about the future of AI.

    Show Notes

    In this episode of The Secure Developer, host Danny Allan welcomes Dr. Peter Garraghan, CEO and CTO of Mindgard, a company specializing in AI red teaming. He is also a chair professor in computer science at Lancaster University, where he specializes in the security of AI systems.

    Dr. Garraghan discusses the unique challenges of securing AI systems, which he began researching over a decade ago, even before the popularization of the transformer architecture. He explains that traditional security tools often fail against deep neural networks because they are inherently random and opaque, with no code to unravel for semantic meaning. He notes that AI, like any other software, has risks—technical, economic, and societal.

    The conversation delves into the evolution of AI, from early concepts of artificial neural networks to the transformer architecture that underpins large language models (LLMs) today. Dr. Garraghan likens the current state of AI adoption to a "great sieve theory," where many use cases are explored, but only a few, highly valuable ones, will remain and become ubiquitous. He identifies useful applications like coding assistance, document summarization, and translation.

    The discussion also explores how attacks on AI are analogous to traditional cybersecurity attacks, with prompt injection being similar to SQL injection. He emphasizes that a key difference is that AI can be socially engineered to reveal information, which is a new vector of attack. The episode concludes with a look at the future of AI security, including the emergence of AI security engineers and the importance of everyone in an organization being responsible for security. Dr. Garraghan shares his biggest fear—the anthropomorphization of AI—and his greatest optimism—the emergence of exciting and useful new applications.

    Links

    • Mindgard - Automated AI Red Teaming & Security Testing
    • Snyk - The Developer Security Company

    Follow Us

    • Our Website
    • Our LinkedIn
    Más Menos
    38 m
  • The Future is Now with Michael Grinich (WorkOS)
    Aug 12 2025

    Episode Summary

    Will AI replace developers? In this episode, Snyk CTO Danny Allan chats with Michael Grinich, the founder and CEO of WorkOS, about the evolving landscape of software development in the age of AI. Michael shares a fascinating analogy, comparing the shift in software engineering to the historical evolution of music, from every family having a piano to the modern era of digital creation with tools like GarageBand. They explore the concept of "vibe coding," the future of development frameworks, and how lessons from the browser wars—specifically the advent of sandboxing—can inform how we build secure AI-driven applications.

    Show Notes

    In this episode, Danny Allan, CTO at Snyk, is joined by Michael Grinich, Founder and CEO of WorkOS, to explore the profound impact of AI on the world of software development. Michael discusses WorkOS's mission to enhance developer joy by providing robust, enterprise-ready features like authentication, user management, and security, allowing developers to remain in a creative flow state. The conversation kicks off with the provocative question of whether AI will replace developers. Michael offers a compelling analogy, comparing the current shift to the historical evolution of music, from a time when a piano was a household staple to the modern era where tools like GarageBand and Ableton have democratized music creation. He argues that while the role of a software engineer will fundamentally change, it won't disappear; rather, it will enable more people to create software in entirely new ways.

    The discussion then moves into the practical and security implications of this new paradigm, including the concept of "vibe coding," where applications can be generated on the fly based on a user's description. Michael cautions that you can't "vibe code" your security infrastructure, drawing a parallel to the early, vulnerable days of web browsers before sandboxing became a standard. He predicts that a similar evolution is necessary for the AI world, requiring new frameworks with tightly defined security boundaries to contain potentially buggy, AI-generated code.

    Looking to the future, Michael shares his optimism for the emergence of open standards in the AI space, highlighting the collaborative development around the Model Context Protocol (MCP) by companies like Anthropic, OpenAI, Cloudflare, and Microsoft. He believes this trend toward openness, much like the open standards of the web (HTML, HTTP), will prevent a winner-take-all scenario and foster a more innovative and accessible ecosystem. The episode wraps up with a look at the incredible energy in the developer community and how the challenge of the next decade will be distributing this powerful new technology to every industry in a safe, secure, and trustworthy manner.

    Links

    • WorkOS - Your app, enterprise ready
    • WorkOS on YouTube
    • MIT
    • MCP Night 2025
    • Snyk - The Developer Security Company

    Follow Us

    • Our Website
    • Our LinkedIn
    Más Menos
    33 m
Todavía no hay opiniones