• How to Think For Yourself When Everyone Disagrees With You
    Feb 24 2026
    When neuroscientists scanned the brains of people going along with a group, they expected to find lying. What they found instead was something far stranger. The group wasn't changing people's answers. It was changing what they actually saw.

    We'll get to that study in a minute. But first, I want you to remember the last time you were in a meeting, and you knew something was wrong. The numbers didn't add up. The risk was being underestimated. And someone needed to say it. Then the most senior person in the room spoke first: "I think this is exactly what we need." Heads nodded. Finance agreed. Marketing agreed. The consultant agreed. And by the time it was your turn, you heard yourself saying, "I have some minor concerns, but overall I think it's solid."

    You're not alone. Research shows that roughly half of employees stay silent at work rather than voice a concern. And among those who stay quiet, 40% estimated they wasted two weeks or more replaying what they didn't say. Two weeks. Mentally rehearsing the point they should have made in a meeting that's already over.

    That silence isn't a character flaw. It's your neurology working against you. And today I'm going to show you exactly why it happens and how to stop it. It starts with what was happening inside your head during that meeting you just remembered.

    Why Your Brain Surrenders to the Group

    Most people know about the Asch conformity experiments from the 1950s. People were asked to match line lengths, and seventy-five percent of participants went along with an obviously wrong answer at least once. That result gets cited everywhere. But the more important study came fifty years later, and it revealed something the Asch experiment never could.

    In 2005, neuroscientist Gregory Berns at Emory University put people inside an MRI machine and ran a similar conformity task, this time with three-dimensional shape rotation. Like Asch, he planted actors who gave wrong answers.
    But unlike Asch, he could watch what was happening inside people's brains while the conformity was occurring. Berns expected the MRI to show activity in the prefrontal cortex, the brain's decision-making center, when people went along with wrong answers. That would mean they were knowingly lying to fit in. Just a social calculation.

    That's not what the scans showed. People who conformed showed no increased activity in decision-making regions. Instead, the activity showed up in the parts of the brain that handle visual and spatial perception, the occipital and parietal areas. The group wasn't changing people's answers. It was changing what they actually saw. Their brains were rewriting their experience to match the room.

    And the people who resisted the group? Their scans told a different story. Heightened activity in the amygdala, the brain's threat detection center. The same circuitry that fires when you encounter physical danger lit up when someone disagreed with the group. Berns put it plainly. The fear of social isolation activates the same neural machinery as the fear of genuine threats to survival.

    When you caved in that meeting, your neurology wasn't malfunctioning. It was doing exactly what it was designed to do. Keep you safe inside the tribe. This is why what I call mindjacking works so well. Algorithms manufacture social proof by showing you what's trending, what your friends liked, and what similar people chose. Your wiring responds the same way it does at the conference table. You're fighting your own threat-detection system every time you try to hold an independent position within a group. You can't turn off the wiring. But you can learn to catch it in the act. And that starts with one critical distinction.

    The First Skill: Separating Updating from Caving

    Sometimes the people around you know something you don't. Changing your mind in a group isn't always a surrender. Sometimes it's the smartest move in the room.
    The real skill is knowing which one just happened. You can test this in real time. When you feel your position shifting in a group, ask yourself three questions.

    First: Did someone introduce information I didn't have before? If the CFO reveals a data point that genuinely changes the calculus, updating your view isn't a weakness. It's intelligence. That's new evidence.

    Second: Can I articulate why I changed my mind, in specific terms? If you can say, "I shifted because of the margin data in Q3 that I hadn't seen," that's a real update. If you can only say, "I don't know, everyone seemed to think it was fine," that's capitulation.

    Third: Would I have reached this same conclusion alone, with the same information? This is the killer question. If the answer is no, and you only arrived at this position because others were already there, you haven't updated. You've surrendered.

    Getting this wrong is costly. And not just the one time. When you capitulate and call it updating, you train yourself to stop trusting your own analysis. Do it enough times, and you won't even bother preparing, because you ...
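    For the code-minded among you, the three questions boil down to a tiny checklist. A minimal sketch; the function name, arguments, and labels are my own illustration, not anything from the episode:

```python
def classify_shift(new_information: bool,
                   can_articulate_reason: bool,
                   would_conclude_alone: bool) -> str:
    """Return 'update' only if all three of the episode's questions pass."""
    if new_information and can_articulate_reason and would_conclude_alone:
        return "update"        # new evidence, specific reasons, independent conclusion
    return "capitulation"      # any failed test means the group moved you, not the facts

# "I shifted because of the Q3 margin data I hadn't seen" -> a genuine update
print(classify_shift(True, True, True))      # update
# "Everyone seemed to think it was fine" -> surrender dressed up as agreement
print(classify_shift(False, False, False))   # capitulation
```

    The point of writing it this way: a real update requires all three answers to be yes, not just a vague sense that the room agrees.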
    20 mins
  • Better Decisions Under Pressure
    Feb 17 2026
    "We need an answer by the end of the day." Ten words. And the moment you hear them, something shifts inside your chest. Your pulse ticks up. Your focus narrows. Careful thinking stops. The clock starts. You probably haven't even asked the most important question yet. Is that deadline real? Most of the urgency you feel every day is fake. Manufactured by someone who benefits from you deciding fast instead of deciding well. Most people can't tell a real deadline from a manufactured one. By the end of this, you will. Let's get into it.

    What Time Pressure Actually Does to Your Brain

    Last episode, we talked about decision fatigue. How your brain degrades over a long day. Time pressure is different. Fatigue is a slow drain. Time pressure is a switch. When the clock is ticking, your brain stops analyzing and starts reacting. Normally, the front of your brain runs the show: careful analysis, weighing trade-offs, long-term thinking. Under time pressure, a faster, older, more emotional region takes over. You don't feel less accurate. You feel more confident. Decades of decision science research have found that under time pressure, people's confidence in their decisions goes up while their actual accuracy goes down. You're not just thinking worse. You're thinking worse while being more sure you're right. That false confidence makes you predictably worse at three specific things.

    • Evaluating trade-offs. You lock onto whichever side your gut grabs first.
    • Considering consequences beyond the immediate. Second-order thinking goes offline.
    • Recognizing what you don't know. Because you feel certain, you stop looking for what you're missing.

    And that's exactly what manufactured urgency is designed to exploit. This is mindjacking in its purest form. Someone engineers the pressure, your brain switches modes, and you make their decision instead of yours.

    The Urgency Trap: Real vs. Manufactured

    Not all time pressure is the same. Some deadlines are real. Your tax filing date is real.
    The board meeting on Thursday is real. The patient who needs a decision in the next ten minutes? That's real. These deadlines exist because of actual constraints in the world, not because someone manufactured them. A huge portion of the urgency you experience? It's engineered. "This offer expires at midnight." Really? Will the company stop wanting your money tomorrow? "We need your decision today." Why today? What actually changes between today and Wednesday? Manufactured urgency is one of the most effective persuasion tools ever invented. Countdown timers on websites that reset when you refresh the page. "Limited time" sales that somehow run every month. Negotiators who invent deadlines because pressure extracts concessions. Manufactured urgency is everywhere. And it works because of what we just covered. Time pressure flips you into fast-decision mode. When someone engineers urgency, they're not just rushing you. They're changing which part of your mind makes the call. The decisions that actually shape your career almost never show up with a countdown timer. The urgency trap pulls your attention to whatever is loudest, while the ones that matter sit quietly in the background. Until it's too late.

    Five Tests for Manufactured Urgency

    How do you tell the difference? I use five tests.

    Test One: The Source Test. Ask yourself: who benefits from me deciding quickly? If the answer is "the person creating the deadline," that's a red flag. Real deadlines serve the situation. Fake deadlines serve the person imposing them. The car salesperson who says "this price is only good today"? That deadline serves the dealership, not you. The surgeon who says "we need to operate within the hour"? That deadline serves the patient.

    Test Two: The Consequence Test. Ask: what actually happens if I wait? Not what I'm told will happen. What actually happens. "The offer expires." Does it? What would happen if you called back next week? In most cases, the offer magically reappears.
    Real deadlines have real, verifiable consequences. Manufactured ones have threats that evaporate on contact.

    Test Three: The History Test. Has this "urgent" situation happened before? If the company has run "ending soon" promotions every month for a year, that's not urgency. That's a business model. If a colleague marks everything "urgent" in their emails, that's not urgency. That's a habit.

    Test Four: The Reversibility Test. This one builds on our earlier work in the series. How reversible is this decision? If you can cancel, return, or renegotiate, urgency matters less. But if the decision is hard to reverse, like a long-term contract or a major hire, artificial urgency is especially dangerous. The less reversible the decision, the more suspicious you should be of anyone rushing you.

    Test Five: The Separation Test. Remove yourself from the pressure source and check if the urgency survives. Step out of the room. Sleep on it. Call back tomorrow. Real urgency persists when you leave. Manufactured ...
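    The five tests work like a suspicion checklist: each one that fires is another reason to doubt the deadline. A minimal sketch in Python; the function, field names, and example are my own illustration of the idea, not the author's code:

```python
def manufactured_urgency_flags(deadline: dict) -> list:
    """Return which of the five tests flag this deadline as suspect."""
    flags = []
    if deadline.get("benefits_imposer"):          # Test One: Source
        flags.append("source")
    if not deadline.get("real_consequences"):     # Test Two: Consequence
        flags.append("consequence")
    if deadline.get("recurs_regularly"):          # Test Three: History
        flags.append("history")
    if not deadline.get("reversible"):            # Test Four: Reversibility
        flags.append("reversibility")             # (irreversible -> rushing is riskier)
    if not deadline.get("survives_separation"):   # Test Five: Separation
        flags.append("separation")
    return flags

# A "midnight sale" countdown: benefits the seller, recurs monthly,
# consequences evaporate, and the urgency vanishes once you walk away.
sale = {"benefits_imposer": True, "real_consequences": False,
        "recurs_regularly": True, "reversible": True,
        "survives_separation": False}
print(manufactured_urgency_flags(sale))   # ['source', 'consequence', 'history', 'separation']
```

    The more flags a deadline collects, the more likely the urgency is engineered rather than real.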
    17 mins
  • How to Beat Decision Fatigue
    Feb 10 2026
    A nurse in Pennsylvania had been on her feet for twelve hours. She was supposed to go home, but the unit was short-staffed, so she stayed. During that overtime, a patient was diagnosed with cancer and needed two chemotherapy doses. She administered the first, placed the second in a drawer, and headed home. She forgot about the second dose. It wasn't discovered until the next day. The patient was fine; they got the treatment in time. But think about what happened. This wasn't a careless nurse. This was a dedicated professional who stayed late to help her team. Her skills didn't fail. Her knowledge didn't fail. Her energy failed, and her judgment went with it. That's the trap. We assume our thinking stays constant, that the brain in hour fourteen is the same brain that showed up in hour one. It's not. Last episode, we tackled deciding under uncertainty. But fatigue does something different. Uncertainty makes you hesitate. Fatigue makes you stop caring.

    Why Your Brain Makes Worse Decisions by Evening

    You've probably heard the popular saying: "Making too many decisions wears you out, so by evening your judgment is shot." That idea dominated psychology for twenty years. Researchers believed decision-making drained from a limited mental reserve, like a battery running down. Then, independent labs tried to reproduce those results at scale, and the effect vanished. One study, 23 labs, over 2,000 people, found nothing. A second, 36 labs, 3,500 people, same result. The experience is real, though. People do make worse decisions after a long day of mental effort. What was wrong was the explanation. Your brain doesn't drain like a battery. After sustained effort, it shifts priorities. It starts favoring speed and ease over accuracy. Not because it can't think carefully, but because it decides careful thinking isn't worth the effort. Decision fatigue isn't your brain shutting down. It's your brain quietly lowering its standards without telling you.
    Decision Fatigue in the Real World

    That science isn't abstract. It plays out every day. Researchers at Brigham and Women's Hospital tracked over 21,000 patient visits. Doctors prescribed unnecessary antibiotics more frequently as the day went on. Not because afternoon patients were sicker. Because saying "here's a prescription" is easier than explaining why you don't need one. Five percent more patients received antibiotics they didn't need, purely because of timing. The same pattern shows up everywhere. Surgeons make more conservative calls later in the day. Hand hygiene compliance drops across a twelve-hour shift. Financial analysts grow less accurate with each additional stock prediction they make in a single day. The drift always goes in the same direction: toward whatever requires the least effort. That drift explains something we've been exploring across this series. When you're exhausted, someone else's conclusion isn't just tempting, it's a relief. The algorithm's recommendation saves you from having to evaluate. The expert's opinion saves you from forming your own. That's mindjacking, finding the open door. Fatigue doesn't just degrade your thinking. It makes you grateful to hand it over.

    Your Four Warning Signals

    Knowing the science is useful. But what matters more is catching fatigue in yourself before it costs you. Here are four signals that your judgment is compromised.

    Signal 1: The Default Drift. Someone proposes a plan that sounds... fine. Not great, not terrible. Two hours ago, you'd have pushed back, asked harder questions. Now you just nod. You're not agreeing because you're convinced. You're agreeing because disagreeing takes energy you no longer have.

    Signal 2: The Irritability Spike. A colleague asks a reasonable question, and it feels like an interruption. When your emotional response is out of proportion to the situation, it's not the situation. Your reserves are low.

    Signal 3: The Shortcut Reflex.
    A decision that should take twenty minutes takes thirty seconds. You skip the analysis, go with your gut. There's a version of this that sounds like confidence. "I trust my instincts." But late in the day, that phrase is often code for "I'm too tired to think this through."

    Signal 4: The Surrender. You stop forming conclusions and start borrowing them. Someone says, "I think we should go with Option B" and you feel a wave of relief. Not because Option B is right, but because you no longer have to figure it out. That relief is the signal. When outsourcing your judgment feels like a gift instead of a loss, you're running on empty.

    If two or more of these show up at the same time, stop. Your judgment isn't reliable right now. Don't trust it with anything that matters.

    Four Moves to Protect Your Judgment

    Those signals tell you something's wrong. Here's what to do about it.

    Move 1: Postpone it. Move the decision to a high-energy window. For most people, that's morning. Think of those hours like premium real estate. Stop filling them with trivial meetings. ...
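    The "two or more signals" rule is a simple threshold, and it can be written down as one. A minimal sketch; the signal names are my own shorthand for the four signals described above:

```python
def judgment_compromised(active_signals: set) -> bool:
    """Episode's rule of thumb: two or more warning signals at once means stop."""
    known = {"default_drift", "irritability_spike",
             "shortcut_reflex", "surrender"}
    # Count only recognized signals, then apply the two-or-more threshold.
    return len(active_signals & known) >= 2

print(judgment_compromised({"default_drift", "surrender"}))   # True
print(judgment_compromised({"irritability_spike"}))           # False
```

    One signal is a caution; two firing together is the line at which the episode says your judgment is no longer reliable.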
    16 mins
  • How to Stop Overthinking Your Decisions
    Jan 28 2026
    You've got a decision you've been putting off. Maybe it's a career move. An investment. A difficult conversation you keep rehearsing in your head but never starting. You tell yourself you need more information. More data. More time to think. But you're not gathering information. You're hiding behind it. What looks like due diligence is actually overthinking in disguise. The certainty you're waiting for doesn't exist. It won't exist until after you decide and see what happens. I call this mindjacking: when something hijacks your ability to think for yourself. Sometimes it's external. Algorithms, experts, crowds thinking for you. But sometimes you're the one doing it. That endless research? It feels like diligence. It functions as delay. You're not being thorough. You're mindjacking yourself. Today, you'll learn a framework for knowing when you have enough information, even when it doesn't feel like enough. Because deciding before you're ready isn't recklessness. It's a skill. And for most people, that skill has completely atrophied.

    The Real Cost of Waiting

    At a California supermarket, researchers set up a tasting booth for gourmet jams. Some days, the display showed 24 varieties. Other days, just six. The bigger display attracted more attention. Sixty percent of people stopped to look. But only three percent actually bought jam. When shoppers saw just six options? Thirty percent purchased. Ten times the conversion rate. More options didn't help people choose. More options paralyzed them. The jam study has been replicated across dozens of categories since then. The pattern holds. More choices, more overthinking, fewer decisions. Think about your postponed decision. How many options are you juggling? How many articles have you read? Every expert you consult, every scenario you play out in your head... you're not getting closer to certainty. You're adding jams to the display. And while you're researching, the world keeps moving. Opportunities close.
    Competitors act. Your own situation shifts. The decision you're avoiding today won't even be the same decision six months from now. Waiting has a cost. Most people dramatically underestimate it.

    The Two-Door Framework

    So how do you know when you have enough information? Jeff Bezos uses a mental model that's useful here. Picture every decision as a door you're about to walk through. Some doors are one-way: once you're through, you can't come back. Selling your company. Getting married. Signing a ten-year lease. These deserve serious deliberation. Most decisions, though, are two-way doors. You walk through, look around, and if you don't like what you see, you walk back out. Starting a side project. Trying a new marketing strategy. Having that difficult conversation. The consequences are real, but they're not permanent. The mistake most people make is treating two-way doors like one-way doors. They apply the same level of analysis to choosing project management software as to acquiring a company. They're not being thorough. They're overthinking reversible choices. That's how organizations grind to a halt. That's how careers stall. That's how opportunities evaporate while you're still "thinking about it." Before you gather more information, ask yourself: Can I reverse this? If yes, even if reversing would be annoying, you're probably overthinking it.

    The 40-70 Rule

    General Colin Powell used a decision framework he called the 40-70 rule. Military leaders and executives have adopted it for decades.

    The Floor: 40%

    Never decide with less than forty percent of the information you'd want. Below that threshold, you're not being decisive. You're gambling.

    The Ceiling: 70%

    Don't wait for more than seventy percent. By the time you've gathered that much data, the window has usually closed. Someone else acted. The situation changed. The decision got made for you, by default.

    The Sweet Spot

    That range between forty and seventy percent is where good decisions actually happen.
    It feels uncomfortable because you're not certain. That discomfort isn't a warning sign, though. It's the signal that you're doing it right. Most overthinking happens above seventy percent. You already have what you need. You're just not ready to commit. If deciding feels completely comfortable, you've probably waited too long.

    The Productive Discomfort Test

    "I genuinely need more information" and "I'm using research as a hiding place" feel identical from the inside. Both feel responsible. Both feel like due diligence. I once watched a friend spend eleven months researching a career change. She read books. Took assessments. Talked to people in the field. Built spreadsheets comparing options. She knew more about the industry than people working in it. And at month eleven, she was no closer to a decision than at month one. The research had become the activity. The feeling of progress without the risk of commitment. She wasn't preparing. She was hiding. And she couldn't tell the difference. So how do you tell ...
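    Because the 40-70 rule is just a numeric band, it can be sketched in a few lines. A minimal illustration; the function name and return labels are mine, and the thresholds are the ones the episode attributes to Powell:

```python
def forty_seventy(info_pct: float) -> str:
    """Powell's 40-70 rule as described above: decide inside the band."""
    if info_pct < 40:
        return "gather more"       # below the floor: deciding now is gambling
    if info_pct <= 70:
        return "decide now"        # the sweet spot: uncomfortable, but right
    return "waited too long"       # above the ceiling: the window is closing

print(forty_seventy(30))   # gather more
print(forty_seventy(55))   # decide now
print(forty_seventy(85))   # waited too long
```

    The hard part, of course, is honestly estimating how much of the relevant information you actually have; the rule only disciplines what you do with that estimate.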
    14 mins
  • Mindjacking - When Your Opinions Are Not Yours
    Jan 20 2026
    You've built a toolkit over the last several episodes. Logical reasoning. Causal thinking. Mental models. Serious intellectual firepower. Now the uncomfortable question: When's the last time you actually used it to make a decision? Not a decision you think you made. One where you evaluated the options yourself. Weighed the evidence. Formed your own conclusion. Here's what most of us do instead: we Google it, ask ChatGPT, go with whatever has the most stars. We feel like we're deciding, but we're not. We're just choosing which borrowed answer to accept. That gap between thinking you're deciding and actually deciding is where everything falls apart. And there's a name for it.

    What Mindjacking Actually Is

    Mindjacking. Not the sci-fi version where hackers seize your brain through neural implants. The real version. Where you voluntarily hand over your thinking because someone else already did the work. It's not dramatic. It's convenient. The algorithm ranked the results. The expert weighed in. The crowd already decided. Why duplicate the effort? Mindjacking is different from ordinary influence. You choose it. Every single time. Nobody forces you to stop evaluating. You volunteer, because forming your own conclusion is harder than borrowing someone else's. What exactly are you losing when this happens?

    The Two Skills Under Attack

    Mindjacking destroys two distinct capabilities. They're different, and you need both. Evaluation independence is the ability to assess whether a claim is valid. Not whether the source has credentials. Not whether experts agree. Whether the evidence actually supports the conclusion. Decision independence is the ability to commit to a path based on your own judgment, without needing someone else to validate it first. Both skills need each other. Watch what happens when one erodes faster than the other. A woman researches her medical condition for hours. Journal articles. Treatment comparisons.
    She understands her options better than most medical students would. She walks into the doctor's office, lays out her analysis. It's thorough. Sophisticated, even. The doctor reviews it and says, "This is impressive. You've really done your homework." She nods. Then looks up and asks: "So what should I do?" She can evaluate. She can't decide. Now flip it. Think about someone who decides fast. Trusts their gut. Never waits for permission. How often does that person get burned by bad information they never verified? They can decide. They can't evaluate. Lose either ability and you're trapped. Lose both and you're not thinking at all.

    The Four Surrender Signals

    How do you know when mindjacking is happening? It has a signature. Four internal signals that reveal the handoff in progress, if you know how to read them.

    Signal one: Relief. The moment you find "the answer," you notice a weight lifting. Pay attention to that. Relief isn't insight. It's the burden of thinking being removed. When you actually work through a problem yourself, the result isn't relief. It's clarity. And clarity usually comes with new questions, not a sense of "done."

    Signal two: Speed. Uncertainty to certainty in seconds? That's not evaluation. You found someone else's answer and adopted it. There's a difference between "I figured it out" and "I found someone who figured it out." One took effort. The other took a search bar.

    Signal three: Echo. Listen to your own conclusions. Do they sound like something you read, heard, or scrolled past recently? If your "own opinion" matches a headline almost word-for-word, it probably isn't yours. You're not thinking. You're repeating.

    Signal four: Unearned confidence. You're certain about a conclusion, but ask yourself: could you explain the reasoning behind it? Not where you heard it. The actual reasoning. If you can't, that confidence isn't yours.
    It came attached to someone else's answer, and you absorbed both their conclusion and their certainty without doing any analysis yourself. Once you notice these signals firing, you need a way to stop the pattern before it completes.

    The Interrupt

    The interrupt is a single question: "Did I reach this conclusion, or just find it?" Nine words. That's the whole thing. It works because it forces a distinction your brain normally blurs. "I decided" and "I adopted someone's decision" are identical from the inside, until you ask the question. Test it now. Think about the last opinion you formed. The last purchase you made. The last recommendation you accepted. Did you reach that conclusion, or just find it? The interrupt doesn't tell you what to think. It tells you whether you're thinking at all. Finding an answer isn't the same as reaching one. This matters more than you might realize, because the pattern is bigger than any single decision you make.

    The Aha Moment: The Illusion of Expertise

    Researchers at Penn State looked at 35 million Facebook posts and found something remarkable: seventy-five percent of shared links were never clicked. ...
    14 mins
  • CES 2026 - Battle of the AI Robots
    Jan 13 2026
    Welcome to this week's show. I'm recording this episode from my hotel room here in Las Vegas, Nevada, at the annual Consumer Electronics Show 2026. If you've been around this channel for long, you know I do this every year. This is 20-plus years I've been coming to the Consumer Electronics Show. Normally, I don't cover tech and new products on this channel—except for once a year at CES. And it's less about specific companies and what they've announced. You can find that on thousands of channels on YouTube or podcasts. What I like to talk about are the trends—the trends that are emerging—and give you my view and opinion on what they really mean for the innovation space. Are we really innovating, or are we just regurgitating the same thing year after year? I do have some notes here that I'll be glancing at as we go through this today, and we'll be splicing in videos I took on the show floor, along with video supplied to us by CES, to give you a feel for what was here and what's going on.

    The Show's Legacy

    First, let's recognize that the Consumer Electronics Show is now in its 59th year. It's a spin-off from the old Chicago music show back in the late 1960s. Yes, the late '60s. It's gone through some gyrations over the decades and remains one of the few big shows that survived COVID.

    Traditional Consumer Electronics

    As usual, one of the big emphases is TVs, displays, home automation, new refrigerators, new washers and dryers—true consumer electronics, things you would find and put into your home. This year was no different. The big manufacturers were here, along with a number of new smaller manufacturers showcasing new TV technologies. Micro LED is the new buzzword bouncing around the show, and there were plenty of displays to see. I'm a big TV guy, so I definitely had to check that out and see what could be the next TV I put into my house.

    The AI and Robotics Takeover

    The one thing about this year's show that was just overwhelming was robots and AI.
    They were everywhere. I couldn't even tell you how many times we saw AI applied to things that make no sense—though some applications were actually pretty smart. But how many AI toilets do you really need at any given show? On the robotics side, we saw all the familiar ones—like lawn mowers that automatically find your boundaries. One was actually selling the feature that you could program in graphic designs, and it would cut your yard in such a way that the design would appear in your lawn. We also saw humanoid robots, robots doing backflips, robots dancing with people, dancing hands where the fingers are moving. You could buy just the hands or the arms or the elbows and assemble your own robots. It was pretty crazy. Then we started seeing the combination of AI and robots—interactive robots where you could stand there, talk with them, point, and they would follow your commands. Pick up this item. Move this item somewhere else. Not programming through some controller, but simply pointing and talking to direct the robot to do what you want.

    The Evolution of Electric Vehicles

    One thing we've seen in past shows was the big emphasis on electric vehicles. This year, the EV car market—which we've seen slow down generally—also slowed down here at the show. However, what we saw in its place focused on two areas:

    Commercial EVs and Hybrids: There was significant attention on commercial use of EVs, particularly hybrid electric vehicles with combustion engines.

    Emergency Response Innovation: One exhibit that really impressed me was a fire truck supplied by Dallas Fort Worth Airport. This massive Oshkosh fire truck is a hybrid that uses electric motors for high torque and high acceleration—literally shaving seconds off response time. Given the limited distance on airport property, if there's a disaster or fire requiring quick reaction, the electric motors can accelerate very quickly.
    There are only about 15 of these trucks in the world, and something like six or seven are just at Dallas Fort Worth Airport. I spent a fair amount of time with that team. This is a perfect example of smart innovation—innovation that isn't just because something is shiny and new. They thought carefully about how to use it, when to apply the right design, leveraging the benefits of electric while using the combustion engine to run the water pumps.

    Electric Motorcycles: The other area with significant EV presence was motorcycles, particularly dirt bikes. When you're going out for the day to have some fun, the low noise of an electric motor means you're not disturbing rural areas with a combustion engine. Another example of good, smart innovation.

    Autonomous Vehicles in Commercial Applications

    The other big area for the show was autonomous vehicles—not just EVs, but vehicles that can operate themselves, particularly in commercial use like farming. John Deere has a long history of autonomous farming with very accurate planting using GPS technologies. Caterpillar had a ...
    12 mins
  • Thinking 101: A Pause, A Reflection, And What Might Come Next
    Dec 23 2025

    Twenty-one years.

    That's how long I've been doing this. Producing content. Showing up. Week after week, with only a handful of exceptions—most of them involving hospitals and cardiac surgeons, but that's another story.

    After twenty-one years, you learn what lands and what doesn't. You learn not to get too attached because you never know what's going to connect.

    But this one surprised me.

    Thinking 101—the response has been different. More comments. More questions. More people saying, "This is exactly what I needed."

    It's made me reflect on why I started this series.

    Years ago, I was in a room with people from the Department of Education. I asked them a simple question: Why are we graduating people who can't think?

    Not "don't know things." Can't think. Can't reason through a problem. Can't evaluate an argument.

    Their answer was... let's just say it wasn't satisfying.

    That moment stuck with me. When AI exploded onto the scene—when everyone suddenly had a machine that could generate answers instantly—it became clear: thinking for yourself isn't just valuable anymore. It's survival.

    That's what Part One was about. The Foundations. Building your thinking toolkit.

    So what's next? For the next few weeks—nothing.

    We're taking a breather for the holidays. I'm going to spend time with my wife, my kids, my grandkids.

    We'll be back in early January. And if you're heading to CES in Las Vegas that first week—let me know. I'd love to meet up.

    But before I go, I have a question for you.

    Should there be a Part Two?

    I have ideas. If Part One was about building your toolkit, Part Two could be about what happens when you have to use it. Because knowing how to think and making good decisions aren't the same thing. Real decisions happen when you're tired. When you're stressed. When your own brain is working against you.

    Part Two could be about that gap—between knowing and doing.

    But I want to hear from you first. Should I do it? What topics would you want covered? What questions are you wrestling with?

    Post a comment. If you're a paid subscriber on Substack, send me a DM—I read those.

    And speaking of paid subscribers—that's the best way to support the team that makes this happen. Twenty-one years of showing up doesn't happen alone.

    You can also visit our store at innovation.tools for merch, my book, and more.

    Part One is done. The holidays are calling.

    Thank you for making this series land the way it did.

    See you in January.

    I'm Phil McKinney. Take care of yourselves—and each other.

    5 mins
  • Mental Models - Your Thinking Toolkit
    Dec 16 2025
    Before the Space Shuttle Challenger exploded in 1986, NASA management officially estimated the probability of catastrophic failure at one in one hundred thousand. That's about the same odds as getting struck by lightning while being attacked by a shark. The engineers working on the actual rockets? They estimated the risk at closer to one in one hundred. A thousand times more dangerous than management believed.¹

    Both groups had access to the same data. The same flight records. The same engineering reports. So how could their conclusions be off by a factor of a thousand?

    The answer isn't about intelligence or access to information. It's about the mental frameworks they used to interpret that information. Management was using models built for public relations and budget justification. Engineers were using models built for physics and failure analysis. Same inputs, radically different outputs. The invisible toolkit they used to think was completely different.

    Your brain doesn't process raw reality. It processes reality through models. Simplified representations of how things work. And the quality of your thinking depends entirely on the quality of the mental models you possess.

    By the end of this episode, you'll have three of the most powerful mental models ever developed. A starter kit. Three tools that work together, each one strengthening the others. The same tools the NASA engineers were using while management flew blind. Let's build your toolkit.

    What Are Mental Models?

    A mental model is a representation of how something works. It's a framework your brain uses to make sense of reality, predict outcomes, and make decisions. You already have hundreds of them. You just might not realize it. When you understand that actions have consequences, you're using a mental model. When you recognize that people respond to incentives, that's a model too.

    Think of mental models as tools. A hammer drives nails. A screwdriver turns screws. Each tool does a specific job. Mental models work the same way. Each one helps you do a specific kind of thinking. One model might help you spot hidden assumptions. Another might reveal risks you'd otherwise miss. A third might show you what success requires by first mapping what failure looks like.

    The collection of models you carry with you? That's your thinking toolkit. And like any toolkit, the more quality tools you have, and the better you know when to use each one, the more problems you can solve.

    Here's the problem. Research from Ohio State University found that people often know the optimal strategy for a given situation but only follow it about twenty percent of the time.² The models sit unused while we default to gut reactions and habits. The goal isn't just to collect mental models. It's to build a system where the right tool shows up at the right moment. And that starts with having a few powerful models you know deeply, not dozens you barely remember.

    Let's add three tools to your toolkit.

    Tool One: The Map Is Not the Territory

    This might be the most foundational mental model of all. Coined by philosopher Alfred Korzybski in the 1930s, it delivers a simple but profound insight: our models of reality are not reality itself.³

    A map of Denver isn't Denver. It's a simplified representation that leaves out countless details. The smell of pine trees, the feel of altitude, the conversation happening at that corner café. The map is useful. But it's not the territory. Every mental model, every framework, every belief you hold is a map. Useful? Absolutely. Complete? Never.

    This explains the NASA disaster. Management's map showed a reliable shuttle program with an impressive safety record. The engineers' map showed O-rings that became brittle in cold weather and a launch schedule that left no room for delay. Both maps contained some truth. But management's map left out critical territory: the physics of rubber at thirty-six degrees Fahrenheit. When your map doesn't match the territory, the territory wins. Every time.

    How to use this tool: Before any major decision, ask yourself: What is my current map leaving out? Who might have a different map of this same situation, and what does their map show that mine doesn't? The NASA engineers weren't smarter than management. They just had a map that included more of the relevant territory.

    Tool Two: Inversion

    Most of us approach problems head-on. We ask: How do I succeed? How do I win? How do I make this work? Inversion flips the question. Instead of asking how to succeed, ask: How would I guarantee failure? What would make this project collapse? What's the surest path to disaster? Then avoid those things.

    Inversion reveals dangers that forward thinking misses. When you're focused on success, you develop blind spots. You see the path you want to take and ignore the cliffs on either side.

    Here's a surprising example. When Nirvana set out to record Nevermind in 1991, they had a budget of just $65,000. Hair metal bands were spending millions ...
    17 mins