How The Future Works with Brian Elliott

Welcome back to Snafu with Robin Zander. In this episode, I'm joined by Brian Elliott, former Slack executive and co-founder of Future Forum. We discuss the common mistakes leaders make about AI and why trust and transparency are more crucial than ever. Brian shares lessons from building high-performing teams, what makes good leadership, and how to foster real collaboration. He also reflects on raising values-driven kids, the breakdown of institutional trust, and why purpose matters. We touch on the early research behind Future Forum and what he'd do differently today. Brian will also be joining us live at Responsive Conference 2025, and I'm excited to continue the conversation there. If you haven't gotten your tickets yet, get them here.

What Do Most People Get Wrong About AI? (1:53)

"Senior leaders sit on polar ends of the spectrum on this stuff. Very, very infrequently do they sit in the middle, which is kind of where I find myself too often."

- Robin notes Brian will be co-leading an active session on AI at Responsive Conference with longtime collaborator Helen Kupp.
- He tees up the conversation by saying Brian holds "a lot of controversial opinions" on AI – not that it's insignificant, but that there's a lot of "idealization."
- Brian says most senior leaders fall into one of two camps:
  - Camp A: "Oh my God, this changes everything." These are the fear-mongers shouting, "If you don't adopt now, your career is over."
  - Camp B: "This will blow over." They treat AI as just another productivity fad, like others before it.
- Brian positions himself somewhere in the middle but is frustrated by both ends of the spectrum.
- He points out that the loudest voices (Marc Benioff, Andy Jassy, Zuckerberg, Sam Altman) are "arms merchants" – they're pushing AI tools because they've invested billions.
- These tools are massively expensive to build and run, and unless they displace labor, it's unclear how they generate ROI. So, naturally, these execs have to believe in AI's potential and aggressively push adoption inside their companies.
- But "nothing ever changes that fast," and both the hype and the dismissal are off-base.

Why Playing with AI Matters More Than Training (3:29)

- AI is materially different from past tech, but what's missing is attention to how adoption happens. "The organizational craft of driving adoption is not about handing out tools. It's all emotional."
- Adoption depends on whether people respond with fear or aspiration, not whether they have the software.
- Frontline managers are key: it's their job to create the time and space for teams to experiment with AI.
- Brian credits Helen Kupp for being great at facilitating this kind of low-stakes experimentation.
- He suggests teams "play with AI tools" in ways totally unrelated to their actual jobs. Example: look at your fridge, list the ingredients you have, and have AI suggest a recipe. "Well, that's a sucky recipe, but it could do that, right?"
- The point isn't utility – it's comfort and conversation: What's OK to use AI for? Is it acceptable to draft your self-assessment for performance reviews with AI? Should you tell your boss or hide it?

The Purpose of Doing the Thing (5:30)

- Robin brings up Ezra Klein's podcast in The New York Times, where Ezra asks: "What's the purpose of writing an essay in college?"
- AI can now do better research than a student, faster and maybe more accurately. But Robin argues that the act of writing is what matters, not just the output.
- Example: Robin and his partner are in contract on a house and wrote a letter to the seller – the usual "sob story" to win favor. He says: "I'm much better at writing that letter than ChatGPT can ever be, because only Robin Zander can write that letter."
- All the writing he's done over the past two years prepared him to write that one letter better. "The utility of doing the thing is not the thing itself – it's what it trains."
Learning How to Learn (6:35)

- Robin's fascinated by "skills that train skills" – a lifelong theme in both work and athletics.
- He brings up Josh Waitzkin (from Searching for Bobby Fischer), who went from chess prodigy to big-wave surfer to foil-board rider.
- Josh trained his surfing skills by riding a OneWheel through NYC, practicing balance in a different context.
- Robin is drawn to that kind of transfer learning and "meta-learning" – especially since it's so hard to measure or study.
- He asks: What might AI be training in us that isn't the thing itself? We don't yet know the cognitive effects of using generative AI daily, but we should be asking.

Cognitive Risk vs. Capability Boost (8:00)

- Brian brings up early research suggesting AI could make us "dumber": outsourcing thinking to AI reduces sharpness over time.
- But the "10,000 repetitions" idea still holds weight – doing the thing builds skill.
- There's a tension between "performance mode" (getting the thing done) and "growth mode" (learning). He relates it to ...