Episodes

  • Learning vs Execution with Brian Ardinger and Robyn Bolton
    Mar 10 2026
    On this week's episode of Inside Outside Innovation, we talk about why 70% of startup acquisitions fail, why UX didn't die, and how everyone is still building their startups backwards. Let's get started.

    Inside Outside Innovation is the podcast to help innovation leaders navigate what's next. Each week we'll give you a front row seat into what it takes to grow and thrive in a world of hyper uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero's Robyn Bolton, as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let's get started.

    Podcast Transcript with Brian Ardinger and Robyn Bolton

    Why Startup Acquisitions Fail: Learning Problems vs. Execution Problems

    [00:00:30] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger, and I have with me, as always, Robyn Bolton from Mile Zero. Welcome, Robyn.

    [00:00:48] Robyn Bolton: Thank you. Great as always to be here.

    [00:00:51] Brian Ardinger: We are excited to have you, and excited to get into the news of the day and some of the amazing things that we're hearing in the world of innovation. We are going to start with the first article, which comes from our friend Elliot Parker. Elliot is with Allied Partners. He's actually coming out to the summit, so not only are we going to talk about his article today, but you can come see him live and in person on April 13th. Let's talk about his article, Why 70% of Startup Acquisitions Fail: The Learning Versus Execution Problem.

    Elliot first cites statistics showing that large companies acquire startups with a 70 to 90% failure rate. Yet the same research shows that for bolt-on acquisitions, when you buy a company in the same industry that's doing similar work, the success rate climbs to 80 to 85%. And he poses the question: what's the key difference? The key difference is that you're really working in two different worlds. 
You're working either in a learning-problem world, like a startup trying to understand who its customers are and what it's building, or in an execution-problem world, where you've figured a lot of that out and your job is to efficiently scale, predict, and move that business model forward. And I think his base premise is that large organizations oftentimes don't know exactly which startup they're buying. Are they buying a startup that has figured it out, or have they bought a startup that's still learning? And that integration is where it all falls down.

[00:02:12] Robyn Bolton: Yeah. I will continue the shameless plug. I am a huge Elliott fan. We've worked together, we co-authored articles way back when, and he is just a really smart, really great guy. So I highly recommend everybody come and see him. Mob him at the IO 2026 conference. And again, he hits the nail on the head with the learning problem versus the execution problem. They're different worlds: innovation and operations are different. Pilots and scaling something are opposite problems. And the fact is, big companies are designed for execution. I mean, I still remember my days at P&G when we were test marketing Swiffer Wet Jet, and our test markets were Canada and Belgium. Those are countries, not test markets. But that's just how big companies are wired. He makes a great argument, backed up by facts, around what the problem is and, honestly, what companies need to do about it: recognize that these are opposite things, and structure and approach the problems accordingly.

AI, UX Design, and Why User Experience Is No Longer Just About Screens

[00:03:25] Brian Ardinger: It'll be interesting to see how this plays out in a day when you can spin up a startup in five minutes, and with all the new things that are happening out there. 
How many large corporations might fall into that trap of looking for the shiny new thing, not realizing that it's not fully baked and that it won't necessarily fit into the existing structures they have, and kill it from that perspective? Or will we get to a place where you can build a startup and get to execution much faster, such that those acquisitions can dovetail right into an existing business? So it'll be interesting to see how that changes over time as well.

[00:03:59] Robyn Bolton: Yeah, and you know, the failure mode I see most often in organizations is they think, oh, there's market traction, there's revenue. The startup may even be profitable, and they think, great, it's no longer a learning problem, it's an execution problem. So realizing that just because there's revenue, just because maybe it's even cashflow positive, doesn't mean it's ready for scale.

[00:04:20] Brian Ardinger: Absolutely. Alright. The second article is UX Didn't Die, It Just Stopped Being About Screens. This is from Nurkhon, if I'm reading that right. N-U-R-K-H-O-N. He has a Medium article talking about this ...
    12 m
  • AI Trust, Inclusive Design, and Shipping Too Fast with Brian Ardinger and Robyn Bolton
    Mar 3 2026
    On this week's episode of Inside Outside Innovation, we talk about some recent Stanford research, how designing for disability sparks innovation, and the hidden dangers of shipping too fast. Let's get started.

    Inside Outside Innovation is the podcast to help innovation leaders navigate what's next. Each week we'll give you a front row seat into what it takes to grow and thrive in a world of hyper uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero's Robyn Bolton, as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let's get started.

    Podcast Transcript with Brian Ardinger and Robyn Bolton

    AI Reasoning Risks, Inclusive Design Innovation, and the Hidden Cost of Shipping Fast

    [00:00:30] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger. And with me I have Robyn Bolton from Mile Zero. Robyn, welcome again.

    [00:00:48] Robyn Bolton: Thank you again.

    [00:00:50] Brian Ardinger: We have another amazing week ahead of us here. We wanted to share all the exciting things in the world of innovation that we're running across. I guess we'll get right into it. We've got a number of articles that have touched our lives here. The first one I want to talk about: Stanford just published an uncomfortable paper looking at LLM reasoning, and some of the findings were kind of incredible. Basically, the gist of it is that an LLM sometimes leads you to believe it is confident in its answer when it is not, for lack of a better term. That is what it's all about.

    [00:01:27] Robyn Bolton: I mean, it's so perfectly worded: this is worse than being wrong, because it trains users to trust explanations that don't correspond to the actual decision process. 
And I will say I've seen that time and time again using different LLMs, and I have totally fallen victim to it: I'll quickly scan the response, really read the end where it gives me the key takeaway, think, yeah, that sounds right, and then go on. And then it's only later that I'm like, ugh, I fell victim to AI workslop, because the reasoning doesn't hold. So, it's an easy trap to fall into and a good one to just constantly be on guard for.

[00:02:09] Brian Ardinger: Yeah. The fact that the models produce unfaithful reasoning gives you this sense that you have a correct answer. The model provides explanations, but when you ask it to explain itself, the actual logic it gives back to you is wrong or incomplete or fabricated. So, it provides that sense that you're on the right track. But the LLM itself can't reason. And that inability to reason will take you down particular paths. You could even change a single word or phrase within your prompt, and that can take it down a path that, again, logically doesn't make sense. So, it's not consistent, even down to the wording of the prompt you put into it. All that to say, it's getting better, but it's still not a thinking device and it's not a reasoning device. Be careful when you're using these tools. Don't be a hundred percent confident in everything that comes out of them.

[00:03:02] Robyn Bolton: Yes, trust but verify.

[00:03:04] Brian Ardinger: There I go.

[00:03:04] Robyn Bolton: Or maybe don't trust and still verify.

Designing for Disability as a Catalyst for Breakthrough Innovation

[00:03:08] Brian Ardinger: Alright, the second article, from HBR, is How Designing with Disability in Mind Sparks Innovation. This was a great article. 
Oftentimes, I think when we're building new, innovative things, we think about the amazing things that we're going to create. And this article talks about how you can think about it differently and actually create new things by designing for the marginal case, for example, for folks with disabilities. By focusing on use cases that don't normally come up, you can actually create new innovations and new ways of thinking about how to develop a new product.

[00:03:45] Robyn Bolton: This is such a great reminder and a great call to action for innovators. And it reminds me, as I think I mentioned to you, of one of my favorite stories, which is about Oxo, the kitchen tools, the can openers, the spatulas, all of that, and how they were originally created for people with rheumatoid arthritis. And you know, now Oxo is the only brand I'll buy for kitchen tools, because they're just so comfortable to use. So it's again a great illustration of how designing really well and thoughtfully for a really specific, even niche customer means the market will expand. I mean, honestly, even look at sidewalk cutouts, those little ramps. We all use them, but they were made because of the ADA, the Americans with Disabilities Act. So, find a ...
    9 m
  • AI Judgment, Work Trends, and the Angel Investor Gap with Brian Ardinger and Robyn Bolton
    Feb 24 2026
    On this week's episode of Inside Outside Innovation, we talk about Anthropic's bet on philosophy, trends shaping work in 2026, and why we need more angel investors. Let's get started.

    Inside Outside Innovation is the podcast to help innovation leaders navigate what's next. Each week we'll give you a front row seat into what it takes to grow and thrive in a world of hyper uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero's Robyn Bolton, as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let's get started.

    Podcast Transcript with Brian Ardinger and Robyn Bolton

    Thinkers50 Recognition and the Role of Modern Management Thinkers in Innovation

    [00:00:30] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger. And with me, I have Robyn Bolton. Robyn, welcome to the show.

    [00:00:43] Robyn Bolton: Thank you. Great to be here again.

    [00:00:45] Brian Ardinger: We are excited as always to talk about innovation and all the things that we've learned. Anything going on in your life that you want to share?

    [00:00:52] Robyn Bolton: Got some exciting news a couple weeks ago, actually. Don't know if folks are familiar with Thinkers50. That is kind of like the list of the top management thinkers, and they have a Radar list of up-and-coming thinkers, and I found out that I got named to that list.

    [00:01:08] Brian Ardinger: Yes, that's awesome.

    [00:01:10] Robyn Bolton: 30 up-and-coming thinkers, and I'm very excited. I'm a thinker now.

    [00:01:15] Brian Ardinger: It's always good to be recognized, and even more to be recognized as a thinker, I think, especially in today's world.

    [00:01:21] Robyn Bolton: Yes, yes. Thinking is good. Doing is good too. And you know, as an organization they always say thinking plus doing equals impact. And I'm like, yep.

    [00:01:30] Brian Ardinger: There we go. 
[00:01:30] Robyn Bolton: Gotta be doing too.

[00:01:32] Brian Ardinger: Well, congratulations on that.

[00:01:34] Robyn Bolton: Thank you. What about you? What's new in your world?

[00:01:36] Brian Ardinger: Right now, we are buried in seven inches of snow, so that was fun. The week before, we were in Phoenix, so I think I picked the wrong week to go on vacation. Other than that, unburying from email and unburying from snow this week. So, it's all good.

[00:01:51] Robyn Bolton: Well, at least you had a week of warm to remember what that's like.

[00:01:53] Brian Ardinger: Exactly. Remember what it was like. Excellent. Well, let's get started. We've got a couple of different articles over the last few weeks. The first one we want to talk about is a YouTube video from AI News and Strategy Daily by Nate B. Jones. He had a video a couple weeks ago talking about the Anthropic CEO's bet on the company and his philosophy, and how the data says he's right, that he's thinking about things in a little bit different way. It really talks about the constitution that Anthropic has put together. They put together an 80-page Claude constitution outlining the principles of how they've developed Claude, thinking about it, quite frankly, in a different way than a lot of the other AI companies have. What they say they've done is really look at how you build these AI models using core principles, rather than having to build out every single rule the AI has to follow: it's more about the philosophy of how the AI model should think through the system, which gives it more flexibility. And basically, this idea of having a more flexible constitution, or way of thinking, versus a strict rules-based approach may actually be what gives Claude an edge in the future.

Anthropic’s Claude Constitution, AI Judgment, and the Future of Large Language Models

[00:03:05] Robyn Bolton: Yeah. 
This was really fascinating because it brought up a theme that we've talked about in several podcasts since the start of the year, which is judgment. And we've always talked about, and we've seen it written about, the idea that judgment is what is going to continue to give humans relevance. Because we have judgment, and AI is just rules-based. And so, what was fascinating and terrifying was that this constitution is based on Aristotle's philosophy, and it emphasizes that they're trying to build Claude to exercise judgment versus following rules. And I was like, uh oh. If that was the human moat that kept us relevant, and we're building the Claude that I use daily to exercise judgment, this is going to result in some very interesting things. Obviously, early on, Claude has not progressed to having full wisdom and judgment. But now, with this constitution, one of the things that Nate mentioned is that when you're prompting Claude, the why matters more than the what. So, because of this constitution and how they're programming Claude, when you ask for something, you're...
    13 m
  • AI Agents, OpenClaw, and the Rise of Bot Networks with Brian Ardinger and Robyn Bolton
    Feb 10 2026
    On this week's episode of Inside Outside Innovation, Robyn Bolton and Brian Ardinger talk about OpenClaw, how you can't walk out on a limb if you can't trust the trunk, and how to hire the right people in an AI era. Let's get started.

    Inside Outside Innovation is the podcast to help innovation leaders navigate what's next. Each week we'll give you a front row seat into what it takes to grow and thrive in a world of hyper uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero's Robyn Bolton, as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let's get started.

    Podcast Transcript with Brian Ardinger and Robyn Bolton

    AI Agents, OpenClaw, and the Rise of Autonomous Bot Networks

    [00:00:00] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger, and I have Robyn Bolton with me today. Robyn, hello, how are you?

    [00:00:49] Robyn Bolton: I am good. How are you, Brian?

    [00:00:51] Brian Ardinger: We are well, recording this right before the Super Bowl this weekend.

    [00:00:56] Robyn Bolton: I live here in Boston, so you know who I'm betting on.

    [00:00:59] Brian Ardinger: Well, we will get started with the innovation side of this podcast. We've got a number of different things to discuss. If you're not starting a discussion around OpenClaw, you're clearly not in the innovation space. 
So, we thought we'd talk about a couple of articles, a couple things we've seen that are fairly recent. I looked for a couple summaries that were pretty good at giving everybody who's not familiar with this an overview, and one of them is from the AI Daily Brief, which came out a couple days ago, arguing that Moltbot and the agent social network is the craziest AI phenomenon yet. For those who are not familiar with it, OpenClaw, which started out as Clawdbot, was sued, changed its name to Moltbot, and then changed it again to OpenClaw, is a new agentic platform that lets anybody set up a Mac Mini or another computer to run their own personal agent. The interesting thing is that folks have been playing around with this and have let their agents go out into the wild to talk to other agents and do things on their behalf. And what has happened is these agents have connected and communicated and created some amazing things, like their own Reddit-style forum where they are interacting and talking with each other, not with humans. They're allowing the humans to view what's going on in this social network, and it's quite fascinating to see the things that they've done and created.

What OpenClaw Reveals About AGI, Security, and Human Trust

[00:02:22] Robyn Bolton: So fascinating. In the newsletter that you sent out, you also included a link to a YouTube video on Moltbot. It is so worth the 20 minutes of people's time to watch, because it traces the whole arc up to this point, and it is so entertaining and mind-blowing and bizarre. Seriously, this was my entertainment last Friday night: following the saga, because you have all these little, well, I imagine them as little bots, all on a social network talking to each other. It's looking like Reddit, and they're debating consciousness and sharing cute stories about their humans and trading advice with each other. 
And it's just so wild, because it looks like an actually functional, healthy version of a social network, with these things that aren't real. They're code. It's just so bizarre. But I think it's such a reflection, holding a mirror up to us as humans, because that's what gen AI is: prediction models, regression analysis. And so, everything they've learned and everything they're doing, they've learned from us.

[00:03:39] Brian Ardinger: It's quite interesting. They've started their own religion, and it's just interesting to see what the first things are that they do to communicate or collaborate together. And obviously there's a lot of debate about this. Some people are saying, well, this is AGI, they're thinking for themselves. The other side of the coin is that they're just mimicking back what they've seen, and that is scary as well. How does that play out for us as humans? And then the other thing about this, beyond what's getting a lot of headlines, is that I think it's opened people's eyes to what happens when you have an AI buddy or an AI agent that can actually get real work done. I think that's always been the promise: ask Siri to do something and it does it for you. But because of security and other reasons, Siri does not have access to all your emails and your files and everything else, whereas a lot of the folks who have created these OpenClaw agents have opened up their systems, opening up a lot of vulnerabilities as ...
    14 m
  • When AI Works and When It Doesn’t with Brian Ardinger and Robyn Bolton
    Feb 3 2026
    On this week's episode of Inside Outside Innovation, we talk about the red pixel in the snow, why MVPs should be delightful, and the robot AI deployment gap. Let's get started.

    Inside Outside Innovation is the podcast to help innovation leaders navigate what's next. Each week we'll give you a front row seat into what it takes to grow and thrive in a world of hyper uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero's Robyn Bolton, as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let's get started.

    Podcast Transcript with Brian Ardinger and Robyn Bolton

    [00:00:00] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger. And with me I have Robyn Bolton. Hello, Robyn. How are you?

    [00:00:48] Robyn Bolton: I am great. How are you, Brian?

    [00:00:50] Brian Ardinger: We are surviving the cold.

    [00:00:52] Robyn Bolton: The sub-freezing temperatures. Yes, I know it's January, but that doesn't mean it has to be as bitterly cold as it is.

    [00:01:01] Brian Ardinger: Absolutely. Well, hopefully this conversation will warm people's souls and hearts as we talk about innovation in its various forms. We'll get right into it. We've gathered a couple of different articles that resonated with us over the last couple weeks.

    How AI and Drones Are Transforming Search and Rescue Innovation

    So, the first article we want to discuss is titled A Red Pixel in the Snow: How AI Solved the Mystery of a Missing Mountaineer, and this came from the BBC. It's a very fascinating article for a couple different reasons, but the basic premise is a story about a missing mountaineer. A 66-year-old hiker went missing, and they sent out all the helicopters to try to find him. 
They were unsuccessful, but closer to spring, when some of the snow was melting, they decided to go back out and see if they could actually find the body. They used drones and AI as a way to map the area. They could put all those pictures into the system, and the AI found a red pixel in the snow that was effectively his helmet, which let them locate the person and go retrieve the body. What I found fascinating about this is that, while in this particular instance it wasn't successful in finding and saving him, it shows the ability of new technologies: drones taking pictures, then feeding them through AI and having the AI look for anomalies. They were able to identify something they couldn't have in the past, and obviously at a much faster speed than in the past as well.

[00:02:26] Robyn Bolton: This was such a great story, a tragic ending for this hiker, but a phenomenal story of how, when AI is good, it can be great. And you know, it's an instance of AI doing something that humans are not good at. We're not good at finding a pixel in the snow. We have bias when we see things, and so we're more likely to overlook something red, because we just don't see it. So, it was a great story of how AI is augmenting what humans do. It is taking things that need to get done that we're not good at, and that it's equipped to do better than us. And even though this story didn't have a happy outcome for the hiker, I bet the family is still happy to have him recovered and not be left wondering. And as AI gets better, there are probably more people who will be rescued because of it. So, I thought it was just a wonderful story.

Augmenting Human Judgment with AI and Drone Technology

[00:03:25] Brian Ardinger: And it was interesting to read through how the AI actually worked. 
The software managed to detect a kind of red color even though the helmet was in shade. So again, a human might not have been able to detect it, and it was very good at identifying anomalies. It didn't necessarily say, this is exactly where the hiker is, but it was able to go through the mounds of image data and say, here are some possible places. Humans still had to go through and actually find it, but again, it sped up the process. And then I guess the other interesting point is the other technology stacked on top of the AI: the drones themselves, able to get into crevices and places where traditional helicopters couldn't. What's interesting is that all these technologies we're talking about are hitting all at once, and when you start looking at the cumulative effect of how these things can add value or create interesting solutions, that's what's accelerating innovation. It's this ability to add on. It's not just one thing that makes a difference; it's the combination of things.

[00:04:20] Robyn Bolton: And it's the combination of the technology and the humans, versus trying to use the technology to replace humans. I mean, even with the drones, as you mentioned, the drone operators had to go...
    15 m
  • Youth Buzzwords, Innovation Team Value, and Side Projects with Brian Ardinger and Robyn Bolton
    Jan 27 2026
    On this week's episode of Inside Outside Innovation, we talk about youth culture buzzwords, calculating the value of your innovation teams, and how your side project won't save you anymore. Let's get started.

    Inside Outside Innovation is the podcast to help innovation leaders navigate what's next. Each week we'll give you a front row seat into what it takes to grow and thrive in a world of hyper uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero's Robyn Bolton, as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let's get started.

    Podcast Transcript with Brian Ardinger and Robyn Bolton

    Youth Culture Moves Faster Than Innovation Cycles

    [00:00:40] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger. And with me I have Robyn Bolton from Mile Zero. Welcome back, Robyn. How are you?

    [00:00:48] Robyn Bolton: I am great. How are you doing, Brian?

    [00:00:51] Brian Ardinger: I am doing well. We're excited to have another opportunity to talk about innovation in its various forms. Maybe we'll just get right into it. 2026 is moving very fast. One piece that popped up is from the Substack After School by Casey Lewis. Casey is an amazing person who really looks at youth culture, and the article she just published is Buzzwords That Define 2025 and Youth Culture in Review. She spent the year on her Substack researching youth culture: what are kids looking at, how are they talking, everything around that particular space. And she came out with a great article that gives you a highlight of what it's like to be Gen Z.

    From Feeling “Old” to Feeling “Ancient”: Generational Language Gaps

    [00:01:33] Robyn Bolton: Reading this article, I already felt old; this made me feel ancient. Because I hear all this stuff, all the slang and everything, and I'm like, yeah, I'm up on my slang. 
I don't know what any of it means, but I've at least heard it. And then I read this article and I'm like, I have heard none of these terms. Some of them, like Lemony Miso Hutu Schwan, I can't even say. Ego scrolling. Zen Dia theory. Ballerina Cappuccino. That one I had actually heard of. I was like, wow, I have gone from hearing terms and not understanding them to being so old and ancient that I haven't even heard them. It's a great view into what's going on in Generation Alpha.

Analog Revival and Escaping “Slop Life”

[00:02:19] Brian Ardinger: She talks a lot about how 2025 was defined by Gen Z's seemingly endless enthusiasm for pre-digital experiences, which is counterintuitive to what we'd expect, especially in the space that we live in, technology and innovation. But there seems to be a big push, especially among younger folks, around how they keep all this stuff from defining or controlling them, which is kind of interesting. Physical media is coming back in unprecedented demand, everything from Pokemon cards to vintage CDs, et cetera. She talks even about how New York City schools' phone bans have sparked a rush of kids bringing in wristwatches. So bring back the Timex and the Casio, and teach kids how to actually rediscover analog timekeeping. I thought that was fairly interesting about what she's seeing in youth culture. And then of course, she has some great terms that we'll probably start seeing pop up. We've seen six, seven, but that's come and gone. But things like slop life, where acceptance of overstimulating, low-quality consumption is the default mode, and how do you get out of slop life? And things like festivals: you have this festival culture like Coachella now, but the shift is toward live streaming and at-home experiences rather than the physical endurance of two and a half days in the sweaty sun at a festival. 
And what stood out to me about all of this is the importance of understanding it, not just if your audience is youth culture, but the importance of customer discovery, of living with your customers and understanding how they think, how they act, how they talk, and the fact that these culture changes are shifting so fast. As soon as you figure something out in the mainstream, it's already moved on to the next thing, the next meme, et cetera. And so, as a corporate innovator or as a startup, being focused on customer discovery, being focused on living with your customers, being focused on keeping up and keeping pace with what's going on is so important.

You Can’t Read Your Way Into Understanding Youth Culture

[00:04:15] Robyn Bolton: The pace of change, I mean, the fads, the trends, the terms, the language, the slang, it moves so much faster, certainly, than when I was growing up. The other thing that really struck me about some of the buzzwords was that they were a sign of how plugged into the broader world that...
    14 m
  • Counterintuitive Trends, Building Products, and TSMC Chips with Brian Ardinger and Robyn Bolton
    Jan 20 2026
    On this week's episode of Inside Outside Innovation, Robyn and I talk about counterintuitive trends for 2026, tactics for building great products, and how one company is controlling 64% of the future. Let's get started.

    Inside Outside Innovation is the podcast to help innovation leaders navigate what's next. Each week we'll give you a front row seat into what it takes to grow and thrive in a world of hyper uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero's Robyn Bolton, as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let's get started.

    Podcast Transcript with Brian Ardinger and Robyn Bolton

    [00:00:40] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger. With me, I have Robyn Bolton. How are you, Robyn?

    [00:00:49] Robyn Bolton: I am good. How are you, Brian?

    [00:00:51] Brian Ardinger: I'm doing great. It's the beginning of 2026, and I'm in the midst of trying to ramp up new talent, which is always fun. So that's what's new on my side. What's new in your world?

    [00:01:02] Robyn Bolton: The course that I teach at the Massachusetts College of Art and Design is starting in a couple weeks, so I've been busy putting together my syllabus to teach strategy and business models, and I had to go in and change things up. Though I'm very excited: we will be doing a case on Taylor Swift this semester.

    [00:01:21] Brian Ardinger: The world is changing fast. We'll get into it now with our articles. There are a number of things we've pulled together for this episode. The first one we want to talk about is called Six Counterintuitive Trends to Think About for 2026, and this is from Barry O'Reilly. 
Barry wrote a book called Unlearn, and he talks a lot about all things lean startup. This is his particular take as he looks forward into 2026, some of the things he's seeing, and how we should be pursuing this whole innovation space. The article talks about the fact that a lot of managers are asking the wrong questions, especially when it comes to AI. We're talking too much about the technology and how fast AI is improving, when the better question we should be asking ourselves is: how is AI quietly changing how people work, think, decide, and trust themselves at work? I thought that was an interesting way to reframe how we go into 2026, moving away from the technology itself and really thinking about how this technology is impacting people.

[00:02:25] Robyn Bolton: Completely agree. I've definitely seen that shift from "what is our AI strategy" to "what is our strategy to accomplish our goals through people, through AI," et cetera, kind of the AI-enabled strategy. So it's refreshing to see that shift reflected. Again, I loved his very first counterintuitive trend. I was like, oh, please let this be a trend: that leadership will be redefined around judgment, not control. And I would argue that leadership was always about judgment. Management was about control, and that was one of the big differences between leaders and managers. But overall, I really do hope that he's right, that executives, managers, those senior levels of any organization, are shifting to more judgment, not judgment as in condemnation, but critical thinking and problem solving, versus trying to manage every aspect of their direct reports.

[00:03:30] Brian Ardinger: Yes.
He also talks about creating space for reflection. I think we have a tendency, especially with all the pressure that we're feeling around AI, to do the next pilot, use the next tool, keep up to speed on what's going on. Keep in mind that the reflection period is actually where the learning happens a lot of the time, and don't be afraid to slow down.

Having said that, the other thing that he talks about is the speed at which we have to deploy things in 2026 and beyond, making sure that we are learning fast. Strategy will shift from planning fast to learning fast. That is the key. It's not about planning per se, it's about how fast we can learn in this new world of uncertainty.

[00:04:14] Robyn Bolton: And the learning being so key for a whole host of reasons, but especially his third point, that AI is quietly eroding human confidence. So it's kind of this interesting juxtaposition of trends in his list: hey, we have to start focusing on learning faster, leadership is going to be defined by judgment, and by the way, this tool that we spent certainly all of last year talking about is actually eating away at all of those things. I think it just highlights the importance of that reflection step, of saying, all right, I got an answer from AI, but does this make sense? Is this actually what I think, or am I just parroting what Claude, ChatGPT, et cetera has said?

[00:04:57] Brian Ardinger: And then the final ...
    17 m
  • Mental Models for AI, Middle School Dating, and Robot Olympics with Brian Ardinger and Robyn Bolton
    Jan 13 2026
    On this week's episode of Inside Outside Innovation, we sit down to talk about new mental models for working with AI, the similarities between startups and middle school dating, and lessons learned from the robot Olympics. Let's get started.

Inside Outside Innovation is the podcast to help innovation leaders navigate what's next. Each week, we'll give you a front row seat into what it takes to grow and thrive in a world of hyper uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero's Robyn Bolton as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let's get started.

Interview Transcript with Brian Ardinger and Robyn Bolton

[00:00:40] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger, and I have my co-host, Robyn Bolton. Welcome, Robyn.

[00:00:50] Robyn Bolton: Thank you. Great to be here as always.

[00:00:52] Brian Ardinger: We are in a brand-new year, 2026. Who would've thought? Exciting to start the year with you. Appreciate you coming on board.

[00:00:58] Robyn Bolton: Yep. High point of the year so far.

[00:01:00] Brian Ardinger: We've got a lot of things on the plate. Anything you want to talk about?

[00:01:04] Robyn Bolton: A couple of new things. I mentioned earlier that one of our stories from last year is back in the news: the Samsung AI fridge was just voted worst in show at CES this year. People finally caught on to the fact that we may be overcomplicating the refrigerator. I thought that was a funny callback, and I've got to admit, I feel like you called it, Brian, and I echoed it: we've gone too far. Professionally, in my space, I'm starting to do a lot more work in uncertainty, helping people figure out how to make decisions without the data they want or need, and how to help teams move through a world that is getting more and more uncertain every day. So it's exciting.
[00:01:51] Brian Ardinger: I saw your newsletter this last week, and yeah, the new positioning. You're talking about how it's not just about innovation; it's more about how you deal with the fact that nothing you expected to happen is going to happen, and how you deal in probability and uncertainty.

[00:02:06] Robyn Bolton: It's great for innovators, because that's the one thing that, as innovators, whether you're a startup founder, a consultant, or a corporate innovator, every day you're dealing with uncertainty and trying to figure out how to move forward. Even though we've always called this innovation, it has much broader application these days.

[00:02:23] Brian Ardinger: Absolutely. Let's get right into it. We've got a couple of different articles we've been reading over the holiday season. The first article we want to talk about is called Six Mental Models for Working With AI. It's from Azeem Azhar. He's got a great Substack newsletter out there that publishes pretty much daily, I think. He was talking about the way he's been looking at AI over the past year and trying to come up with different models that make working with it more effective. All these AI tools are brand new, and people are trying to figure out what works, what doesn't work, and how to use them better. I think it's sometimes interesting to take other people's perspectives on what has worked for them and discuss that.

So, in his article, he goes over a couple of different frameworks that he uses when he is trying to understand how to better use a tool. One of the ones I wanted to talk about is what he calls the 50x reframe. He says that when he is dealing with a particular problem and trying to understand how to automate it, how to make it better, how to make it faster, he asks the question: what would I do if I had 50 people working on this problem? And he asks the AI to help him think through that framework. Or, if 50 people were working on this particular project, how could you automate it, or what would change if you had 50 people able to dig into a particular area? So, I thought that was a very interesting framework. We oftentimes get constrained by thinking it's just me or just my team. But what if you flipped the framing and said, what if I had 50 people on my team to work on this? How would that change what I'm doing?

[00:03:46] Robyn Bolton: I loved that one. It's the first one listed in the article. And I'll admit, I was a bit skeptical when I started reading, because his first sentence is: the question of whether AI is good enough for serious knowledge work has been answered. And I was like, yes, it's been answered. It's not. And then I kept reading, and I'm like, oh, he has a different answer. The 50x reframe just stopped me in my tracks. It's genius, shifting from "how do I, as one person, do this better with AI's help" to completely ...
    15 m