BONUS: From 3,000 Scripts to 3 Tools - Building AI-Last Software With Peter Swimm
In this special BONUS episode, Peter Swimm—conversational AI veteran, creator of BotKit (the open-source chatbot framework that powered Slack and Teams bots), and former Principal Product Manager at Microsoft Copilot Studio—shares what 25+ years in tech taught him about working with AI. From his brutal experiment of running an entire business on voice-based AI for a week, to why he treats AI more like R2-D2 than C-3PO, Peter offers a grounded, practical perspective on where AI fits in software development teams.

From BotKit to Copilot Studio: A Front-Row Seat to the AI Evolution

"We had the number one bot in the Slack app store, because there were only 8 bots, and ours used regex. To show you how far we've come."

Peter's journey into conversational AI started with a newspaper ad and a creative writing background. When Slack launched its API, Peter and BotKit co-creator Ben Brown immediately saw that building bots wasn't just a technical challenge—it was a social and creative one, like writing scripts for plays that interface with people in their daily lives. That insight turned BotKit into the backbone of Slack and Teams bots, and eventually led Microsoft to acquire the company. Peter spent years inside Microsoft shaping Copilot Studio, working on connectors that bridge the gap between APIs and real-world work. But the experience also gave him a healthy dose of perspective: he can show you slide decks from 2016 that promise the same things today's AI pitches promise, always saying "within 5 years." That pattern recognition shapes his practical, no-hype approach.

The 3,000 Scripts Experiment: Why AI-Last Beats AI-First

"At the end of the day, if I've been prompting all day, I should have a computer program that works offline, that works without a subscription. Otherwise, I didn't really make anything."
Peter ran a week-long experiment trying to run his entire business using only voice-based conversational AI. The result: 3,000 generated scripts. Static code analysis revealed they were really only 5 programs generated thousands of times over—and those 5 programs boiled down to 2 or 3 core abilities. He deleted 36 gigabytes of generated code and kept the 50 megabytes that actually worked. This brutal compression led him to an "AI-last" philosophy: build reliable runtime software that works confidently in one click, then use AI only for exploration, connection-making, and creative riffing. The payoff is striking: within 3 weeks of starting a given application, his team sees a 90% reduction in AI usage in the first week, dropping to 0% within 13 days, because once a computer program does everything you need, you don't need AI anymore.

R2-D2, Not C-3PO: How to Think About AI on Your Team

"I think of our AI use more like R2-D2 than C-3PO. R2-D2 doesn't talk—bonus points. He doesn't interject his fear. He saves your butt. He's silent until you need him, and visible when you need him."

Peter's Star Wars analogy captures his team's philosophy on AI integration. AI should be like a smarter linter—a quiet, capable tool that handles the boring, repetitive tasks so humans can focus on creativity and shipping. His team treats AI as a "super junior" with infinite time: set it up as if it invented Python, have it write by-the-book code with unit tests, and then have a human review and accept (or reject) the output. The tooling isn't consistent enough to ship autonomously or commit directly to the codebase—even frontier providers don't fully understand what their models do. The practical benefit is enormous for setup and configuration: what used to be a painful, arcane process of tracking down dozens of AWS or Azure docs becomes a 20-minute "hello world" that's actually a working proof of concept. Your job isn't to become an expert at cloud services—it's to ship product.
The Biggest Mistake: Automating Broken Processes at AI Speed

"All it does is automate all the mistakes you made, all the way, at AI speed."

When asked about the most common mistake organizations make with AI, Peter is blunt: they port their existing infrastructure into AI-governed systems instead of rebuilding from the ground up. Companies with an inflated opinion of their processes treat AI as a million-person force multiplier—so they'll ship faster. But if your process was broken before AI, you'll just generate broken output at unprecedented scale. The 3,000-script experiment proved this firsthand. Peter's recommendation: rebuild from the bolts up. Start with an AI-last architecture where reliable, offline-capable software handles the core, and AI is reserved for the edges—filling gaps, translating between systems, and making connections that don't exist yet.

SaaS Is Bloated: The Case for AI Transformation Layers

"The one thing AI is good at is transforming between boundaries."

...