AI Engineering Podcast

By: Tobias Macey
  • Summary

  • This show is your guidebook to building scalable and maintainable AI systems. You will learn how to architect AI applications, apply AI to your work, and the considerations involved in building or customizing new models. Everything that you need to know to deliver real impact and value with machine learning and artificial intelligence.
    © 2024 Boundless Notions, LLC.
Episodes
  • Protecting AI Systems: Understanding Vulnerabilities and Attack Surfaces
    May 3 2025
    Summary
    In this episode of the AI Engineering Podcast, Kasimir Schulz, Director of Security Research at HiddenLayer, talks about the complexities and security challenges of AI and machine learning models. Kasimir explains ShadowGenes and ShadowLogic, techniques built on identifying common subgraphs within neural networks to understand model ancestry and potential vulnerabilities. He emphasizes the importance of understanding the attack surface of AI integrations, scanning models for security threats, and evolving awareness in AI security practices to mitigate the risks of deploying AI systems. (Illustrative code sketches of the ancestry and safe-loading ideas appear after the episode links below.)

    Announcements
    • Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    • Your host is Tobias Macey and today I'm interviewing Kasimir Schulz about the relationships between the various models on the market and how that information helps with selecting and protecting models for your applications

    Interview
    • Introduction
    • How did you get involved in machine learning?
    • Can you start by outlining the current state of the threat landscape for ML and AI systems?
    • What are the main areas of overlap in risk profiles between prediction/classification and generative models? (primarily from an attack surface/methodology perspective)
    • What are the significant points of divergence?
    • What are some of the categories of potential damages that can be created through the deployment of compromised models?
    • How does the landscape of foundation models introduce new challenges around supply chain security for organizations building with AI?
    • You recently published your findings on the potential to inject subgraphs into model architectures that are invisible during normal operation of the model. Along with that you wrote about the subgraphs that are shared between different classes of models. What are the key learnings that you would like to highlight from that research?
    • What action items can organizations and engineering teams take in light of that information?
    • Platforms like HuggingFace offer numerous variations of popular models, with variations around quantization, levels of finetuning, model distillation, etc. That is obviously a benefit to knowledge sharing and ease of access, but how does it exacerbate the potential threat in the face of backdoored models?
    • Beyond explicit backdoors in model architectures, there are numerous attack vectors against generative models in the form of prompt injection, "jailbreaking" of system prompts, etc. How does knowledge of model ancestry help with identifying and mitigating risks from that class of threat?
    • A common response to that threat is the introduction of model guardrails with pre- and post-filtering of prompts and responses. How can that approach help to address the potential threat of backdoored models as well?
    • For a malicious actor that develops one of these attacks, what is the vector for introducing the compromised model into an organization?
    • Once that model is in use, what are the possible means by which the malicious actor can detect its presence for purposes of exploitation?
    • What are the most interesting, innovative, or unexpected ways that you have seen the information about model ancestry used?
    • What are the most interesting, unexpected, or challenging lessons that you have learned while working on ShadowLogic/ShadowGenes?
    • What are some of the other means by which the operation of ML and AI systems introduces attack vectors to organizations running them?

    Contact Info
    • LinkedIn

    Parting Question
    • From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

    Closing Announcements
    • Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    • If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
    • To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

    Links
    • HiddenLayer
    • Zero-Day Vulnerability
    • MCP Blog Post
    • Python Pickle Object Serialization
    • SafeTensors
    • Deepseek
    • Huggingface Transformers
    • KROP == Knowledge Return Oriented Prompting
    • XKCD "Little Bobby Tables"
    • OWASP Top 10 For LLMs
    • CVE AI Systems Working Group
    • Refusal Vector Ablation
    • Foundation Model
    • ShadowLogic
    • ShadowGenes
    • Bytecode
    • ResNet == Residual Neural Network
    • YOLO == You Only Look Once
    • Netron
    • BERT
    • RoBERTa
    • Shodan
    • CTF == Capture The Flag
    • Titan Bedrock Image Generator

    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
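    To make the model-ancestry idea concrete, here is a rough sketch of subgraph fingerprinting over ONNX computation graphs. This is an illustrative approximation of the concept, not HiddenLayer's actual ShadowGenes implementation, and the file names are hypothetical.

    ```python
    import onnx

    def opcode_ngrams(path: str, n: int = 4) -> set[tuple[str, ...]]:
        """Fingerprint a model as the set of n-grams over its operator sequence.

        ONNX graphs store nodes in topological order, so short runs of
        consecutive op types approximate the architectural "motifs" that
        related models inherit from a common ancestor.
        """
        graph = onnx.load(path).graph
        ops = [node.op_type for node in graph.node]
        return {tuple(ops[i:i + n]) for i in range(len(ops) - n + 1)}

    def ancestry_score(model_a: str, model_b: str) -> float:
        """Jaccard similarity of the fingerprints; higher suggests shared lineage."""
        a, b = opcode_ngrams(model_a), opcode_ngrams(model_b)
        return len(a & b) / len(a | b) if a | b else 0.0

    # Hypothetical files: a base model and a suspected derivative.
    print(ancestry_score("resnet50.onnx", "resnet50_finetuned.onnx"))
    ```

    The Python pickle and SafeTensors links point at the supply-chain half of the story: pickle-based checkpoints can execute arbitrary code at load time, while safetensors is a pure-data format. A minimal defensive-loading sketch, assuming PyTorch and the safetensors package are installed:

    ```python
    import torch
    from safetensors.torch import load_file

    # Pickle-based checkpoints can run attacker-controlled code during
    # deserialization; weights_only=True (available in recent PyTorch
    # releases) restricts unpickling to tensors and primitive containers.
    state_dict = torch.load("model.ckpt", weights_only=True)  # hypothetical file

    # safetensors files contain only raw tensor data and metadata, so
    # loading an untrusted file cannot trigger code execution.
    state_dict = load_file("model.safetensors")  # hypothetical file
    ```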
    52 m
  • Understanding The Operational And Organizational Challenges Of Agentic AI
    Apr 21 2025
    Summary
    In this episode of the AI Engineering Podcast, Julian LaNeve, CTO of Astronomer, talks about transitioning from simple LLM applications to more complex agentic AI systems. Julian shares insights into the challenges and considerations of this evolution, emphasizing the importance of starting with simpler applications to build operational knowledge and intuition. He discusses the parallels between microservices and agentic AI, highlighting the need for careful orchestration and observability to manage complexity and ensure reliability, and explores the technical requirements for deploying AI systems, including data infrastructure, orchestration tools like Apache Airflow, and understanding the probabilistic nature of AI models. (A short illustrative sketch of the workflow-versus-agent distinction appears after the episode links below.)

    Announcements
    • Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    • Seamless data integration into AI applications often falls short, leading many to adopt RAG methods, which come with high costs, complexity, and limited scalability. Cognee offers a better solution with its open-source semantic memory engine that automates data ingestion and storage, creating dynamic knowledge graphs from your data. Cognee enables AI agents to understand the meaning of your data, resulting in accurate responses at a lower cost. Take full control of your data in LLM apps without unnecessary overhead. Visit aiengineeringpodcast.com/cognee to learn more and elevate your AI apps and agents.
    • Your host is Tobias Macey and today I'm interviewing Julian LaNeve about how to avoid putting the cart before the horse with AI applications. When do you move from "simple" LLM apps to agentic AI, and what's the path to get there?

    Interview
    • Introduction
    • How did you get involved in machine learning?
    • How do you technically distinguish "agentic AI" (e.g., involving planning, tool use, memory) from "simpler LLM workflows" (e.g., stateless transformations, RAG)? What are the key differences in operational complexity and potential failure modes?
    • What specific technical challenges (e.g., state management, observability, non-determinism, prompt fragility, cost explosion) are often underestimated when teams jump directly into building stateful, autonomous agents?
    • What are the prerequisites from a data and infrastructure perspective before going to production with agentic applications?
    • How does that differ from the chat-based systems that companies might be experimenting with?
    • Technically, where do you most often see ambitious agent projects break down during development or early deployment?
    • Beyond generic data quality, what specific data engineering practices become critical when building reliable LLM applications? (e.g., designing data pipelines for efficient RAG chunking/embedding, versioning prompts alongside data, caching strategies for LLM calls, managing vector database ETL)
    • From an implementation complexity standpoint, what characterizes tasks well-suited for initial LLM workflow adoption versus those genuinely requiring agentic capabilities?
    • Can you share examples (anonymized if necessary) highlighting how organizations successfully engineered these simpler LLM workflows? What specific technical designs, tooling choices, or MLOps practices were key to their reliability and scalability?
    • What are some hard-won technical or operational lessons from deploying and scaling LLM workflows in production environments? Any surprising performance bottlenecks, cost issues, or monitoring challenges engineers should anticipate?
    • What technical maturity signals (e.g., robust CI/CD for ML, established monitoring/alerting for pipelines, automated evaluation frameworks, cost tracking mechanisms) suggest an engineering team might be ready to tackle the challenges of building and operating agentic systems?
    • How does the technical stack and engineering process need to evolve when moving from orchestrated LLM workflows towards more complex agents involving memory, planning, and dynamic tool use? What new components and failure modes must be engineered for?
    • How do you foresee orchestration platforms evolving to better serve the needs of AI engineers building LLM apps?
    • What are the most interesting, innovative, or unexpected ways that you have seen organizations build toward advanced AI use cases?
    • What are the most interesting, unexpected, or challenging lessons that you have learned while working on supporting AI services?
    • When is AI the wrong choice?
    • What is the single most critical piece of engineering advice you would give to fellow AI engineers who are tasked with integrating LLMs into production systems right now?

    Contact Info
    • LinkedIn
    • GitHub

    Parting Question
    • From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

    Links
    • Astronomer
    • Airflow
    • Anthropic
    • Building Effective Agents post from Anthropic
    • Airflow 3.0
    • Microservices
    • Pydantic AI
    • Langchain
    • LlamaIndex
    • LLM As A Judge
    • SWE (SoftWare Engineer) Bench
    • Cursor
    • Windsurf
    • OpenTelemetry
    • DAG == Directed Acyclic Graph ...
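    To anchor the workflow-versus-agent distinction discussed above, the sketch below contrasts the two control-flow shapes. The llm and run_tool functions are hypothetical stand-ins for a model API and a tool dispatcher; the point is the structure, not the implementation.

    ```python
    def llm(prompt: str) -> str:
        """Hypothetical stand-in for a chat-completion API call."""
        raise NotImplementedError

    def run_tool(name: str, args: str) -> str:
        """Hypothetical dispatcher for tools such as search or SQL."""
        raise NotImplementedError

    # "Simple" LLM workflow: one stateless transformation, trivially retryable.
    def summarize(document: str) -> str:
        return llm(f"Summarize this document:\n{document}")

    # Agentic loop: accumulates state, runs a non-deterministic number of
    # steps, and needs explicit guards on cost and latency.
    def agent(goal: str, max_steps: int = 10) -> str:
        history = [f"Goal: {goal}"]
        for _ in range(max_steps):  # step cap bounds runaway cost
            action = llm("\n".join(history) + "\nNext action (tool:args or FINISH:answer)?")
            if action.startswith("FINISH:"):
                return action.removeprefix("FINISH:")
            name, _, args = action.partition(":")
            history.append(f"{action} -> {run_tool(name, args)}")  # state accumulates
        return "gave up after max_steps"  # a failure mode unique to agents
    ```

    The workflow function can be retried and cached like any other pipeline task, while the agent needs tracing, step caps, and spend tracking, which is exactly the extra operational surface described in the conversation.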
1 h 12 m
  • The Power of Community in AI Development with Oumi
    Mar 16 2025
    Summary
    In this episode of the AI Engineering Podcast, Emmanouil (Manos) Koukoumidis, CEO of Oumi, talks about his vision for an open platform for building, evaluating, and deploying AI foundation models. Manos shares his journey from working on natural language AI services at Google Cloud to founding Oumi with a mission to advance open-source AI, emphasizing the importance of community collaboration and accessibility. He discusses the need for open-source models that are not constrained by proprietary APIs, highlights the role of Oumi in facilitating open collaboration, and touches on the complexities of model development, open data, and community-driven advancements in AI. He also explains how Oumi can be used throughout the entire lifecycle of AI model development, post-training, and deployment. (A brief illustrative inference sketch appears after the episode links below.)

    Announcements
    • Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems
    • Your host is Tobias Macey and today I'm interviewing Manos Koukoumidis about Oumi, an all-in-one production-ready open platform to build, evaluate, and deploy AI models

    Interview
    • Introduction
    • How did you get involved in machine learning?
    • Can you describe what Oumi is and the story behind it?
    • There are numerous projects, both full suites and point solutions, focused on every aspect of "AI" development. What is the unique value that Oumi provides in this ecosystem?
    • You have stated the desire for Oumi to become the Linux of AI development. That is an ambitious goal, and one that Linux itself didn't start with. What do you see as the biggest challenges that need addressing to reach a critical mass of adoption?
    • In the vein of "open source" AI, the most notable project that I'm aware of that fits the proper definition is the OLMO models from AI2. What lessons have you learned from their efforts that influence the ways that you think about your work on Oumi?
    • On the community-building front, HuggingFace has been the main player. What do you see as the benefits and shortcomings of that platform in the context of your vision for open and collaborative AI?
    • Can you describe the overall design and architecture of Oumi?
    • How did you approach the selection process for the different components that you are building on top of?
    • What are the extension points that you have incorporated to allow for customization/evolution?
    • Some of the biggest barriers to entry for building foundation models are the cost and availability of hardware used for training, and the ability to collect and curate the data needed. How does Oumi help with addressing those challenges?
    • For someone who wants to build or contribute to an open source model, what does that process look like?
    • How do you envision the community building/collaboration process?
    • Your overall goal is to build a foundation for the growth and well-being of truly open AI. How are you thinking about the sustainability of the project and the funding needed to grow and support the community?
    • What are the most interesting, innovative, or unexpected ways that you have seen Oumi used?
    • What are the most interesting, unexpected, or challenging lessons that you have learned while working on Oumi?
    • When is Oumi the wrong choice?
    • What do you have planned for the future of Oumi?

    Contact Info
    • LinkedIn

    Parting Question
    • From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?

    Closing Announcements
    • Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
    • Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    • If you've learned something or tried out a project from the show then tell us about it! Email hosts@aiengineeringpodcast.com with your story.
    • To help other people find the show please leave a review on iTunes and tell your friends and co-workers.

    Links
    • Oumi
    • Cloud PaLM
    • Google Gemini
    • DeepMind
    • LSTM == Long Short-Term Memory
    • Transformers
    • ChatGPT
    • Partial Differential Equation
    • OLMO
    • OSI AI definition
    • MLFlow
    • Metaflow
    • SkyPilot
    • Llama
    • RAG
      • Podcast Episode
    • Synthetic Data
      • Podcast Episode
    • LLM As Judge
    • SGLang
    • vLLM
    • Function Calling Leaderboard
    • Deepseek

    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
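    On the deployment end of the lifecycle, the links above include vLLM; a minimal offline-inference sketch using vLLM's Python API looks roughly like this (the model name is illustrative, and any open-weights checkpoint you have access to would work):

    ```python
    from vllm import LLM, SamplingParams

    # Illustrative open-weights model; substitute any checkpoint you can access.
    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
    params = SamplingParams(temperature=0.7, max_tokens=256)

    # Batch generation: vLLM handles scheduling and KV-cache management.
    outputs = llm.generate(["What does an open platform for AI development need?"], params)
    for output in outputs:
        print(output.outputs[0].text)
    ```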
    56 m
