
BONUS Building Reliable Software with Unreliable AI Tools With Lada Kesseler

AI Assisted Coding: Building Reliable Software with Unreliable AI Tools

In this special episode, Lada Kesseler shares her journey from AI skeptic to pioneer in AI-assisted development. She explores the spectrum from careful, test-driven development to quick AI-driven experimentation, revealing practical patterns, anti-patterns, and the critical role of judgment in modern software engineering.

From Skeptic to Pioneer: Lada's AI Coding Journey

"I got a new skill for free!"

Lada's transformation began when she discovered Anthropic's Claude Projects. Despite being skeptical about AI tools throughout 2023, she found herself learning Angular frontend development with AI, a technology she had no prior experience with. This breakthrough moment revealed something profound: AI could serve as an extension of her existing development skills, enabling her to acquire new capabilities without the traditional learning curve. The journey evolved through Windsurf and Claude Code, each tool expanding her understanding of what's possible when developers collaborate with AI.

Understanding Vibecoding vs. AI-Assisted Development

"AI assisted coding requires judgment, and it's never been as important to exercise judgment as now."

Lada introduces the concept of "vibecoding" as one extreme on a new dimension in software development: the spectrum from careful, test-driven development to quick, AI-driven experimentation. The key insight isn't that one approach is superior, but that developers must exercise judgment about which approach fits their context. She warns against careless AI coding for production systems: "You just talk to a computer, you say, do this, do that. You don't really care about code... For some systems, that's fine. Where the problem arises is when you put the stuff to production and you really care about your customers. Please, please don't do that." This wisdom highlights that with great power comes great responsibility: AI accelerates both good and bad practices.

The Answer Injection Anti-Pattern When Working With AI

"You're limiting yourself without knowing, you're limiting yourself just by how you formulate your questions. And it's so hard to detect."

One of Lada's most important discoveries is the "answer injection" anti-pattern: developers unconsciously constrain AI's responses by how they frame their questions. She experienced this firsthand when she asked an AI about implementing a feature using a specific approach, only to realize later that she had prevented the AI from suggesting better alternatives. The solution? Learning to ask questions more openly and reformulating problems to avoid self-imposed limitations. As she puts it, "Learn to ask the right way. This is one of the powers this year that's been kind of super cool." This skill of question formulation has become as critical as any technical capability.

Answer injection is when we, sometimes unknowingly, ask a leading question that also smuggles in a possible answer. It's an anti-pattern because LLMs have access to far more information than we do. Lada's advice: "just ask for anything you need"; the LLM might have a possible answer for you.
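As a concrete illustration (the prompts below are hypothetical, not taken from the episode), compare a question that injects an answer with an open reformulation of the same problem:

```python
# Hypothetical prompts illustrating the "answer injection" anti-pattern.
# The leading version smuggles an implementation choice into the question;
# the open version states only the problem and leaves the solution space open.

leading_prompt = "How do I add a retry decorator to handle these flaky API calls?"
# ^ injects "retry decorator" as the answer, so the model rarely looks further

open_prompt = (
    "These API calls fail intermittently. What are my options for making "
    "this code resilient, and what are the trade-offs of each?"
)
# ^ leaves the model free to suggest retries, backoff, circuit breakers,
#   idempotency keys, or an approach we had not considered
```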
Never Trust a Single LLM: Multi-Agent Collaboration

"Never trust the output of a single LLM. When you ask it to develop a feature, and then you ask the same thing to look at that feature, understand the code, find the issues with it—it suddenly finds improvements."

Lada shares her experiments with swarm programming: using multiple AI instances that collaborate and cross-check each other's work. She created specialized agents (architect, developer, tester) and even built systems using AppleScript and tmux to make different AI instances communicate with each other. This approach revealed a powerful pattern: AI reviewing AI often catches issues that a single instance would miss. The practical takeaway is simple but profound: always have one AI instance review another's work, treating AI output with the same healthy skepticism you'd apply to any code review.
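A minimal sketch of that review loop, using the Anthropic Python SDK (the model id, prompts, and agent roles are assumptions for illustration, not details from the episode):

```python
# "AI reviews AI": one model instance writes code, a second instance with a
# fresh context reviews it. Assumes the `anthropic` SDK is installed and
# ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()
MODEL = "claude-sonnet-4-20250514"  # assumed model id; substitute your own

def ask(role: str, prompt: str) -> str:
    """Send one prompt under a given system role and return the text reply."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=2048,
        system=role,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Agent 1, the "developer", writes the feature.
code = ask(
    "You are a careful developer. Reply with code only.",
    "Implement a Python function that parses ISO-8601 timestamps, "
    "with explicit error handling for malformed input.",
)

# Agent 2, the "reviewer", starts from a fresh context and critiques the result.
review = ask(
    "You are a skeptical code reviewer. Find bugs, edge cases, and improvements.",
    f"Review this code and list concrete issues:\n\n{code}",
)

print(review)  # the second pass routinely surfaces issues the first one missed
```

The essential ingredient is the fresh context: the reviewer has no memory of writing the code, so it reads it the way an outside reviewer would.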
Code Quality Matters MORE with AI

"This thing is a monkey, and if you put it in a good codebase, like any developer, it's gonna replicate what it sees. So it behaves much better in the better codebase, so refactor!"

Lada emphasizes that code quality becomes even more critical when working with AI. Her systems "work silently" and "don't make a lot of noise, because they don't break", a result of maintaining high standards even when AI makes rapid development tempting. She uses a memorable metaphor: AI is like a monkey that replicates what it sees. Put it in a clean, well-structured codebase, and it produces clean code. Put it in a mess, and it amplifies that mess. This insight transforms refactoring from a nice-to-have into a strategic necessity: good architecture and clean code directly improve AI's ability to contribute effectively.

Managing Complexity: The Open Question

"If I just let it do things,...