The Agile Embedded Podcast
By: Luca Ingianni and Jeff Gable

Learn how to get your embedded device to market faster AND with higher quality. Join Luca Ingianni and Jeff Gable as they discuss how agile methodologies apply to embedded systems development, with a particular focus on safety-critical industries such as medical devices.

© 2021-2024 Jeff Gable & Luca Ingianni
Episodes
  • Crossover with Embedded AI Podcast
    Nov 18 2025
    In this special crossover episode with the brand-new Embedded AI Podcast, Luca and Jeff are joined by Ryan Torvik, Luca's co-host on the Embedded AI Podcast, to explore the intersection of AI-powered development tools and agile embedded systems engineering. The hosts discuss practical strategies for using Large Language Models (LLMs) effectively in embedded development workflows, covering topics like context management, test-driven development with AI, and maintaining code quality standards in safety-critical systems.

    The conversation addresses common anti-patterns that developers encounter when first adopting LLM-assisted coding, such as "vibe coding" yourself off a cliff by letting the AI generate too much code at once, losing control of architectural decisions, and failing to maintain proper test coverage. The hosts emphasize that while LLMs can dramatically accelerate prototyping and reduce boilerplate coding, they require even more rigorous engineering discipline, not less. They discuss how traditional agile practices like small commits, continuous integration, test-driven development, and frequent context resets become even more critical when working with AI tools.

    For embedded systems engineers working in safety-critical domains like medical devices, automotive, and aerospace, the episode provides valuable guidance on integrating AI tools while maintaining deterministic quality processes. The hosts stress that LLMs should augment, not replace, static analysis tools and human code reviews, and that developers remain fully responsible for AI-generated code. Whether you're just starting with AI-assisted development or looking to refine your approach, this episode offers actionable insights for leveraging LLMs effectively while keeping the reins firmly in hand.
## Key Topics
* [03:45] LLM Interface Options: Web, CLI, and IDE Plugins - Choosing the Right Tool for Your Workflow
* [08:30] Prompt Engineering Fundamentals: Being Specific and Iterative with LLMs
* [12:15] Building Effective Base Prompts: Learning from Experience vs. Starting from Templates
* [16:40] Context Window Management: Avoiding Information Overload and Hallucinations
* [22:10] Understanding LLM Context: Files, Prompts, and Conversation History
* [26:50] The Nature of Hallucinations: Why LLMs Always Generate, Never Judge
* [29:20] Test-Driven Development with AI: More Critical Than Ever
* [35:45] Avoiding 'Vibe Coding' Disasters: The Importance of Small, Testable Increments
* [42:30] Requirements Engineering in the AI Era: Becoming More Specific About What You Want
* [48:15] Extreme Programming Principles Applied to LLM Development: Small Steps and Frequent Commits
* [52:40] Context Reset Strategies: When and How to Start Fresh Sessions
* [56:20] The V-Model Approach: Breaking Down Problems into Manageable LLM-Sized Chunks
* [01:01:10] AI in Safety-Critical Systems: Augmenting, Not Replacing, Deterministic Tools
* [01:06:45] Code Review in the AI Age: Maintaining Standards Despite Faster Iteration
* [01:12:30] Prototyping vs. Production Code: The Superpower and the Danger
* [01:16:50] Shifting Left with AI: Empowering Product Owners and Accelerating Feedback Loops
* [01:19:40] Bootstrapping New Technologies: From Zero to One in Minutes Instead of Weeks
* [01:23:15] Advice for Junior Engineers: Building Intuition in the Age of AI-Assisted Development

## Notable Quotes
> "All of us are new to this experience. Nobody went to school back in the 80s and has been doing this for 40 years. We're all just running around, bumping into things and seeing what works for us." — Ryan Torvik

> "An LLM is just a token generator. You stick an input in, and it returns an output, and it has no way of judging whether this is correct or valid or useful. It's just whatever it generated. So it's up to you to give it input data that will very likely result in useful output data." — Luca Ingianni

> "Tests tell you how this is supposed to work. You can have it write the test first and then evaluate the test. Using tests helps communicate - just like you would to another person - no, it needs to function like this, it needs to have this functionality and behave in this way." — Ryan Torvik

> "I find myself being even more aggressively biased towards test-driven development. While I'm reasonably lenient about the code that the LLM writes, I am very pedantic about the tests that I'm using. I will very thoroughly review them and really tweak them until they have the level of detail that I'm interested in." — Luca Ingianni

> "It's really forcing me to be a better engineer by using the LLM. You have to go and do that system level understanding of the problem space before you actually ask the LLM to do something. This is what responsible people have been saying - this is how you do engineering." — Ryan Torvik

> "I can use LLMs to jumpstart me or bootstrap me from zero to one. Once there's something on the screen that kind of works, I can usually then apply my ...
    55 m
  • AI-enhanced Embedded Development (May 2025 Edition)
    Nov 12 2025
    In this episode, Jeff interviews Luca about his intensive experience presenting at five conferences in two and a half days, including the Embedded Online Conference and a German conference where he delivered a keynote on AI-enhanced software development. Luca shares practical insights from running an LLM-only hackathon where participants were prohibited from manually writing any code that entered version control, forcing them to rely entirely on AI tools.

    The conversation explores technical challenges in AI-assisted embedded development, particularly the importance of context management when working with LLMs. Luca reveals that effective AI-assisted coding requires treating prompts like code itself: version controlling them, refining them iteratively, and building project-specific prompt libraries. He discusses the economics of LLM-based development (approximately one cent per line of code), the dramatic tightening of feedback loops from days to minutes, and how this fundamentally changes agile workflows for embedded teams.

    The episode concludes with a discussion about the evolving role of embedded developers, from code writers to AI supervisors and eventually to product owners with deep technical skills.
Luca and Jeff address concerns about maintaining core software engineering competencies while embracing these powerful new tools, emphasizing that understanding the craft remains essential even as the tools evolve.

## Key Topics
* [02:15] LLM-only hackathon constraints: No human-written code in version control
* [04:30] Context management as the critical skill for effective LLM-assisted development
* [08:45] Explicit context control: Files, directories, API documentation, and web content integration
* [11:20] LLM hallucinations: When AI invents file contents and generates diffs against phantom code
* [13:00] Economics of AI-assisted coding: Approximately $0.01 per line of code
* [15:30] Tightening feedback loops: From day-long iterations to minutes in agile embedded workflows
* [17:45] Rapid technical debt accumulation: How LLMs can create problems faster than humans notice
* [19:30] The essential role of comprehensive testing in AI-assisted development workflows
* [22:00] Challenges with TDD and LLMs: Getting AI to take small steps and wait for feedback
* [26:15] Treating prompts like code: Version control, libraries, and project-specific prompt management
* [29:40] External context management: Coding style guides, plan files, and todo.txt workflows
* [32:00] LLM attention patterns: Beginning and end of context receive more focus than middle content
* [34:30] The evolving developer role: From coder to prompt engineer to AI supervisor to technical product owner
* [38:00] Code wireframing: Rapid prototyping for embedded systems using AI-generated implementations
* [40:15] Maintaining software engineering skills in the age of AI: The importance of manual practice
* [43:00] Software engineering vs. software carpentry: Architecture and goals over syntax and implementation

## Notable Quotes
> "One of the hardest things to get an LLM to do is nothing. Sometimes I just want to brainstorm with it and say, let's look at the code base, let's figure out how we're going to tackle this next piece of functionality. And then it says, 'Yeah, I think we should do it like this. You know what? I'm going to do it right now.' And it's so terrible. Stop. You didn't even wait for me to weigh in." — Luca Ingianni

> "LLMs making everything faster also means they can create technical debt at a spectacular rate. And it gets a little worse because if you're not paying close attention and if you're not disciplined, then it kind of passes you by at first. It generates code and the code kind of looks fine. And you say, yeah, let's keep going. And then you notice that actually it's quite terrible." — Luca Ingianni

> "I would not trust myself to review an LLM's code and be able to spot all of the little subtleties that it gets wrong. But if I at least have tests that express my goals and maybe also my worries in terms of robustness, then I can feel a lot safer to iterate very quickly within those guardrails." — Luca Ingianni

> "Roughly speaking, the way I was using the tool, I was spending about a cent per line. Which is about two orders of magnitude below what a human programmer roughly costs. It really is a fraction. So that's nice because it makes certain things approachable. It changes certain build versus buy decisions." — Luca Ingianni

> "You can tighten your feedback loops to an absurd degree. Maybe before, if you had a really tight feedback loop between a product owner and a developer, it was maybe a day long. And now it can be minutes or quarters of an hour. It is so much faster. And that's not just a quantitative step. It's also a qualitative step." — Luca Ingianni

> "Some of my best performing prompts came from a place of desperation where one of my prompts is literally 'wait wait wait you didn't do what we agreed you would do you did not read the files carefully.' And I'd like to use...
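Luca's advice to treat prompts like code can be as simple as keeping them as tracked files in the repository. The layout below is a hypothetical sketch (the file names `prompts/base.md` and `prompts/todo.txt` are illustrative, not from the episode), echoing the base-prompt and todo.txt workflows mentioned above.

```shell
# Hypothetical layout: prompts versioned alongside the code they shape
mkdir -p prompts

# A project-specific base prompt, refined iteratively over time
cat > prompts/base.md <<'EOF'
You are working on an embedded C project. Take small steps,
wait for my review after each change, and never touch files
I have not explicitly put into context.
EOF

# An external plan file the LLM is pointed at each session
cat > prompts/todo.txt <<'EOF'
- [ ] refactor UART driver init
- [ ] add characterization tests for parser
EOF

ls prompts
```

Because the prompts live in ordinary files, they can be committed, diffed, and reviewed exactly like source code, and a fresh LLM session can be re-seeded from them after a context reset.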
    28 m
  • Zephyr with Luka Mustafa
    Nov 9 2025
    In this comprehensive episode, Luka Mustafa, founder and CEO of Irnas Product Development, provides an in-depth exploration of Zephyr RTOS and its transformative impact on embedded development. We dive deep into how Zephyr's Linux Foundation-backed ecosystem enables hardware-agnostic development, dramatically reducing the time spent on foundational code versus business-value features. Luka shares practical insights from five years of specializing in Zephyr development, demonstrating how projects can achieve remarkable portability, including running the same Bluetooth code on different chip architectures in just an hour, and even executing embedded applications natively on Linux for development purposes.

    The discussion covers Zephyr's comprehensive testing framework (Twister), CI/CD integration capabilities, and the cultural shift required when moving from traditional bare-metal development to this modern RTOS approach. We explore real-world applications from low-power IoT devices consuming just 5 microamps to complex multi-core systems, while addressing the learning curve challenges and when Zephyr might not be the right choice.
This episode is essential listening for embedded teams considering modernizing their development practices and leveraging community-driven software ecosystems.

## Key Topics
* [03:15] Zephyr RTOS fundamentals and Linux Foundation ecosystem benefits
* [08:30] Hardware abstraction and device tree implementation for portable embedded code
* [12:45] Nordic Semiconductor strategic partnership and silicon vendor support landscape
* [18:20] Native POSIX development capabilities and cross-platform debugging strategies
* [25:10] Learning curve challenges: EE vs CS background adaptation to Zephyr development
* [32:40] Resource requirements and low-power implementation on constrained microcontrollers
* [38:15] Multi-vendor chip support: STMicroelectronics, NXP, and industry adoption trends
* [42:30] Safety-critical applications and ongoing certification processes
* [45:50] Organizational transformation strategies and cultural adaptation challenges
* [52:20] Zbus inter-process communication and modular development architecture
* [58:45] Twister testing framework and comprehensive CI/CD pipeline integration
* [01:05:30] Sample-driven development methodology and long-lived characterization tests
* [01:12:15] Production testing automation and shell interface utilization
* [01:18:40] Model-based development integration and requirements traceability
* [01:22:10] When not to use Zephyr: Arduino simplicity vs RTOS complexity trade-offs

## Notable Quotes
> "With Zephyr, porting a Bluetooth project from one chip architecture to another took an hour for an intern, compared to what would traditionally be months of effort." — Luka Mustafa

> "How many times have you written a logging subsystem? If the answer is more than zero, then it shouldn't be the case. Someone needs to write it once, and every three years someone needs to rewrite it with a better idea." — Luka Mustafa

> "The real benefit comes from doing things the Zephyr way in Zephyr, because then you are adopting all of the best practices of developing the code, using all of the subsystems to the maximum extent." — Luka Mustafa

> "You want to make sure your team is spending time on things that make money for you, not on writing logging, for example." — Luka Mustafa

## Links
* Zephyr Project - Linux Foundation-backed RTOS project providing a comprehensive embedded development ecosystem
* Twister Testing Framework - Zephyr's built-in testing framework for unit tests, hardware-in-the-loop, and CI/CD integration
* Zbus Inter-Process Communication - Advanced event bus system for modular embedded development and component decoupling
* Irnas - Open-source examples of Zephyr best practices and CI/CD pipeline implementations
* Carles Cufi's Talk - Detailed presentation on Nordic's strategic decision to support Zephyr RTOS

You can find Jeff at https://jeffgable.com.
You can find Luca at https://luca.engineer.
Want to join the Agile Embedded Slack? Click here.
Are you looking for embedded-focused trainings? Head to https://agileembedded.academy/
Ryan Torvik and Luca have started the Embedded AI podcast; check it out at https://embeddedaipodcast.com/
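The hardware abstraction Luka describes rests on Zephyr's devicetree: application code refers to stable aliases, and only a board-specific overlay changes when the hardware does. The fragment below is a minimal sketch; the node name, GPIO controller, and pin number are illustrative assumptions, not taken from the episode.

```dts
/* Hypothetical board overlay: maps the generic "led0" alias to a
   concrete pin. Porting to a new board means editing only this file;
   the application code stays untouched. */
#include <zephyr/dt-bindings/gpio/gpio.h>

/ {
    aliases {
        led0 = &user_led;
    };

    leds {
        compatible = "gpio-leds";
        user_led: led_0 {
            gpios = <&gpio0 13 GPIO_ACTIVE_LOW>;  /* board-specific pin */
            label = "User LED";
        };
    };
};
```

On the application side, Zephyr's GPIO API resolves the alias at build time, e.g. `GPIO_DT_SPEC_GET(DT_ALIAS(led0), gpios)`, which is what lets the same source tree target different chips, or even the native POSIX build for on-host development.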
    46 m
Two experienced developers sharing their knowledge and advice in a casual but focused discussion. They understand real-world challenges and are reasonable about implementing these concepts instead of having a dogmatic approach.

Great advice and easy to absorb!
