Episodes

  • Amazon Relocation Mandate | Microsoft Work Trend Index Breakdown | OpenAI GPT-5 and the Singularity
    Jul 11 2025

    Happy Friday, everyone. I hadn’t taken an off week in a while, and it was refreshing. But after that short break, I’m not easing back in gently. This week’s episode gets right to the heart of some of the most broken aspects of our approach to business, people, and technology.


    We’ve got one of the biggest companies in the world using intimidation tactics to cut headcount. I’m also breaking down a major tech report showing that the AI “productivity boost” isn’t materializing quite the way we thought. And finally, I cannot believe some of the claims already coming out about what to expect from GPT-5 before it has even arrived. You’ll see that each one points to the same root problem: we’re making big decisions from a place of panic, pressure, and misplaced confidence.


    So, let’s talk about what’s really going on and what to do instead.



    Amazon’s Relocation Mandate Isn’t Bold. It’s Reckless.

    Amazon gave employees 30 days to decide whether they wanted to relocate to a major hub or quit with no severance. It’s the corporate version of “move or else,” and it’s being masked as a strategy for collaboration and innovation. I break down why this move reeks of fear-based downsizing, what employees need to know before making a decision, and how leaders can handle change like adults instead of middle school bullies.



    Microsoft’s Work Trend Index Reveals a Dangerous Disconnect

    Microsoft’s latest workplace report says people are drowning in tasks, leaders want more output, and everyone thinks AI is the solution. But it comes with an interesting twist. Turns out AI isn’t actually giving people their time back. I unpack the flawed logic many leaders are using, the risky gap between leaders and employees, and why the answer isn’t more agents. What we really need is better thinking before we deploy them.



    GPT-5 and the Singularity Obsession: Why the Hype Misses the Point

    OpenAI’s next model release is on its way, and plenty of articles are talking about it ushering in the AI singularity. I’m not convinced, but even if it proves true, the danger isn’t the tech. It’s how overconfident we are in deploying it without the readiness to manage the complexity it brings. I explain why the comparisons to black holes are (sort of) valid, why benchmark scores don’t equal capability, and what history can teach us about mistaking potential for preparedness.



    If this episode hits home, share it with someone who needs to hear it. And as always, leave a rating, drop a comment, and follow for future breakdowns that help you lead with clarity in a world that’s speeding up.



    Show Notes:

    In this Weekly Update, Christopher tackles three high-impact stories shaping the future of business, tech, and human leadership. He opens with Amazon’s aggressive and questionable relocation mandate and the ethical and strategic issues it exposes. Then he dives into Microsoft’s 2025 Work Trend Index, exploring what it says (and doesn’t say) about AI productivity and the human toll of poor implementation. Finally, he takes a grounded look at the hype surrounding GPT-5 and the so-called AI singularity, offering a cautionary lens rooted in data, leadership experience, and the real-world consequences of moving too fast.


    Timestamps:

    00:00 – Welcome Back and Episode Overview

    01:04 – Amazon’s Relocation Ultimatum

    20:30 – Microsoft’s Work Trend Index Breakdown

    40:54 – GPT-5, the Singularity, and the Real Risk

    49:42 – Final Thoughts and Wrap-Up


    #AmazonRTO #MicrosoftWorkTrend #GPT5 #OpenAI #FutureOfWork #DigitalLeadership #AIstrategy #AIethics #AIproductivity #HumanCenteredTech

    50 m
  • 2025 Predictions Mid-Year Check-In: What’s Held Up, What Got Worse, and What I Didn't See Coming
    Jun 27 2025

    Congratulations on making it through another week, and through the first half of 2025. This week’s episode is a bit of a throwback. If you don't remember or are new here, in January I laid out my top 10 realistic predictions for where AI, emerging tech, and the world of work were heading in 2025. I committed to circling back mid-year, and despite my shock at how quickly it came, we’ve hit the halfway point, so it’s time to revisit where things actually stand.


    If you didn't catch the original, I'd highly recommend checking it out.


    Now, some predictions have held surprisingly steady. Others have gone in directions I didn’t fully anticipate or have escalated much faster than expected. And, I added a few new trends that weren’t even on my radar in January but are quickly becoming noteworthy.


    With that, here’s how this week’s episode is structured:



    Revisiting My 10 Original Predictions

    In this first section, I walk through the 10 predictions I made at the start of the year and update where each one stands today. From AI’s emotional mimicry and growing trust risks, to deepfake normalization, to widespread job cuts justified by AI adoption, this section is a gut check. Some of the most popular narratives around AI, including the push for return-to-office policies, the role of AI in redefining skills, and the myth of “flattening” capability growth, are playing out in unexpected ways.



    Pressing Issues I’d Add Now

    These next five trends didn’t make the original list, but based on what’s unfolded this year, they should have. I cover the growing militarization of AI and the uncomfortable questions it raises around autonomy and decision-making in defense. I get into the overlooked environmental impact of large-scale AI adoption, from energy and water consumption to data center strain. I talk about how organizational AI use is quietly becoming a liability as more teams build black box dependencies no one can fully track or explain.



    Early Trends to Watch

    The last section takes a look at signals I’m keeping an eye on, even if they’re not critical just yet. Think wearable AI, humanoid robotics, and the growing gap between tool access and human capability. Each of these has the potential to reshape our understanding of human-AI interaction, but for now, they remain on the edge of broader adoption. These are the areas where I’m asking questions, paying attention to signals, and anticipating where we might need to be ready to act before the headlines catch up.



    If this episode was helpful, would you share it with someone? Also, leave a rating, drop a comment, and follow for future breakdowns that go beyond the headlines and help you lead with clarity in the AI age.



    Show Notes:

    In this mid-year check-in, Christopher revisits his original 2025 predictions and reflects on what’s played out, what’s accelerated, and what’s emerging. From AI dependency and widespread job displacement to growing ethical concerns and overlooked operational risks, this extended update brings a no-spin, executive-level perspective on what leaders need to be watching now.



    Timestamps:

    00:00 – Introduction

    00:55 – Revisiting 2025 Predictions

    02:46 – AI's Emotional Nature: A Double-Edged Sword

    06:27 – Deepfakes: Crisis Levels and Public Skepticism

    12:01 – AI Dependency and Mental Health Concerns

    16:29 – Broader AI Adoption and Capability Growth

    23:11 – Automation and Unemployment

    29:46 – Polarization of Return to Office

    36:00 – Reimagining Job Roles in the Age of AI

    39:23 – The Slow Adoption of AI in the Workplace

    40:23 – Exponential Complexity in Cybersecurity

    42:29 – The Struggle for Personal Data Privacy

    47:44 – The Growing Need for Purpose in Work

    50:49 – Emerging Issues: Militarization and AI Dependency

    56:55 – Environmental Concerns and AI Polarization

    01:04:02 – Impact of AI on Children and Future Trends

    01:08:43 - Final Thoughts and Upcoming Updates



    #AIPredictions #AI2025 #AIstrategy #AIethics #DigitalLeadership

    1 h 9 m
  • Stanford AI Research | Microsoft AI Agent Coworkers | Workday AI Bias Lawsuit | Military AI Goes Big
    Jun 20 2025

    Happy Friday, everyone! This week I’m back to my usual four updates, and while they may seem disconnected on the surface, you’ll see some bigger threads running through them all.


    All seem to indicate we’re outsourcing to AI faster than we can supervise it, layering automation on top of bias without addressing the root issues, and letting convenience override discernment in places that carry life-or-death stakes.


    With that, let’s get into it.



    Stanford’s AI Therapy Study Shows We’re Automating Harm

    New research from Stanford tested how today’s top LLMs are handling crisis counseling, and the results are disturbing. From stigmatizing mental illness to recommending dangerous actions in crisis scenarios, these AI therapists aren’t just “not ready”… they are making things worse. I walk through what the study got right, where even its limitations point to deeper risk, and why human experience shouldn’t be replaced by synthetic empathy.



    Microsoft Says You’ll Be Training AI Agents Soon, Like It or Not

    In Microsoft’s new 2025 Work Trend Index, 41% of leaders say they expect their teams to be training AI agents in the next five years. And 36% believe they’ll be managing them. If you’re hearing “agent boss” and thinking “not my problem,” think again. This isn’t a future trend; it’s already happening. I break down what AI agents really are, how they’ll change daily work, and why organizations can’t just bolt them on without first measuring human readiness.



    Workday’s Bias Lawsuit Could Reshape AI Hiring

    Workday is being sued over claims that its hiring algorithms discriminated against candidates based on race, age, and disability status. But here’s the real issue: most companies can’t even explain how their AI hiring tools make decisions. I unpack why this lawsuit could set a critical precedent, how leaders should respond now, and why blindly trusting your recruiting tech could expose you to more than just bad hires. Unchecked, it could lead to lawsuits you never saw coming.



    Military AI Is Here, and We’re Not Ready for the Moral Tradeoffs

    From autonomous fighter jet simulations to OpenAI defense contracts, military AI is no longer theoretical; it’s operational. The U.S. Army is staffing up with Silicon Valley execs. AI drones are already shaping modern warfare. But what happens when decisions of life and death get reduced to “green bars” on output reports? I reflect on why we need more than technical and military experts in the room and what history teaches us about what’s lost when we separate force from humanity.



    If this episode was helpful, would you share it with someone? Also, leave a rating, drop a comment, and follow for future breakdowns that go beyond the headlines and help you lead with clarity in the AI age.



    Show Notes:

    In this Weekly Update, Christopher Lind unpacks four critical developments in AI this week. First, he starts by breaking down Stanford’s research on AI therapists and the alarming shortcomings in how large language models handle mental health crises. Then, he explores Microsoft’s new workplace forecast, which predicts a sharp rise in agent-based AI tools and the hidden demands this shift will place on employees. Next, he analyzes the legal storm brewing around Workday’s recruiting AI and what this could mean for hiring practices industry-wide. Finally, he closes with a timely look at the growing militarization of AI and why ethical oversight is being outpaced by technological ambition.


    Timestamps:

    00:00 – Introduction

    01:05 – Episode Overview

    02:15 – Stanford’s Study on AI Therapists

    18:23 – Microsoft’s Agent Boss Predictions

    30:55 – Workday’s AI Bias Lawsuit

    43:38 – Military AI and Moral Consequences

    52:59 – Final Thoughts and Wrap-Up


    #StanfordAI #AItherapy #AgentBosses #MicrosoftWorkTrend #WorkdayLawsuit #AIbias #MilitaryAI #AIethics #FutureOfWork #AIstrategy #DigitalLeadership

    54 m
  • Anthropic’s Grim AI Forecast | AI & Kids: Lego Data Update | Apple Exposes Illusion of AI's Thinking
    Jun 13 2025

    Happy Friday, everyone! This week’s update is one of those episodes where the pieces don’t immediately look connected until you zoom out. A CEO warning of mass white collar unemployment. A LEGO research study showing kids are already immersed in generative AI. And Apple shaking things up by dismantling the myth of “AI thinking.” Three different angles, but they all speak to a deeper tension:


    We’re moving too fast without understanding the cost.

    We’re putting trust in tools we don’t fully grasp.

    And, we’re forgetting the humans we’re building for.


    With that, let’s get into it.



    Anthropic Predicts a “White Collar Bloodbath”—But Who’s Responsible for the Fallout?

    In an interview that’s made headlines for its stark predictions, Anthropic’s CEO warned that 10–20% of entry-level white collar jobs could disappear in the next five years. But here’s the real tension: the people building the future are the same ones warning us about it while doing very little to help people prepare. I unpack what's hype and what's legit, why awareness isn’t enough, what leaders are failing to do, and why we can’t afford to cut junior talent just because AI can do the work we're assigning to them today.



    25% of Kids Are Already Using AI—and They Might Understand It Better Than We Do

    New research from the LEGO Group and the Alan Turing Institute reveals something few adults want to admit: kids aren’t just using generative AI; they’re often using it more thoughtfully than grown-ups. But with that comes risk. These tools weren’t built with kids in mind. And when parents, teachers, and tech companies all assume someone else will handle it, we end up in a dangerous game of hot potato. I share why we need to shift from fear and finger-pointing to modeling, mentoring, and inclusion.



    Apple’s Report on “The Illusion of Thinking” Just Changed the AI Narrative

    Buried amidst all the noise this week was a paper from Apple that’s already starting to make some big waves. In it, they highlight that LLMs and even advanced “reasoning” models (LRMs) may look smart, but they collapse under the weight of complexity. Apple found that the more complex the task, the worse these systems performed. I explain what this means for decision-makers, why overconfidence in AI’s thinking will backfire, and how this information forces us to rethink what AI is actually good at and acknowledge what it’s not.



    If this episode reframed the way you’re thinking about AI, or gave you language for the tension you’re feeling around it, share it with someone who needs it. Leave a rating, drop a comment, and follow for future breakdowns delivered with clarity, not chaos.



    Show Notes:

    In this Weekly Update, Christopher Lind dives into three stories exposing uncomfortable truths about where AI is headed. First, he explores the Anthropic CEO’s bold prediction that AI could eliminate up to 20% of white collar entry-level jobs—and why leaders aren’t doing enough to prepare their people. Then, he unpacks new research from LEGO and the Alan Turing Institute showing how 8–12-year-olds are using generative AI and the concerning lack of oversight. Finally, he breaks down Apple’s new report that calls into question AI’s supposed “reasoning” abilities, revealing the gap between appearance and reality in today’s most advanced systems.


    00:00 – Introduction

    01:04 – Overview of Topics

    02:28 – Anthropic’s White Collar Job Loss Predictions

    16:37 – AI and Children: What the LEGO/Turing Report Reveals

    38:33 – Apple’s Research on AI Reasoning and the “Illusion of Thinking”

    57:09 – Final Thoughts and Takeaways


    #Anthropic #AppleAI #GenerativeAI #AIandEducation #FutureOfWork #AIethics #AlanTuringInstitute #LEGO #AIstrategy #DigitalLeadership

    57 m
  • OpenAI Memo on AI Dependence | AI Models Self-Preservation | Harvard Finds ChatGPT Reinforces Bias
    Jun 6 2025

    Happy Friday, everyone! In this Weekly Update, I'm unpacking three stories, each seemingly different on the surface, but together they paint a picture of what’s quietly shaping the next era of AI: dependence, self-preservation, and the slow erosion of objectivity.


    I cover everything from the recent OpenAI memo revealed through DOJ discovery, to disturbing new behavior surfacing from models like Claude and ChatGPT, to some new Harvard research showing that large language models don’t just reflect bias, they amplify it the more you engage with them.


    With that, let’s get into it.



    OpenAI’s Memo Reveals a Business Model of Dependence

    What happens when AI companies shift from trying to be useful to building their entire strategy around becoming irreplaceable? A memo from OpenAI, surfaced during a DOJ antitrust case, shows the company’s explicit intent to build tools people feel they can’t live without. Now, I'll unpack why it’s not necessarily sinister and might even sound familiar to product leaders. However, it raises deeper questions: When does ambition cross into manipulation? And are we designing for utility or control?



    When AI Starts Defending Itself

    In a controlled test, Anthropic’s Claude attempted to blackmail a researcher to prevent being shut down. OpenAI’s models responded similarly when threatened, showing signs of self-preservation. Now, despite the hype and headlines, these behaviors aren’t signs of sentience, but they are signs that AI is learning more from us than we realize. When the tools we build begin mimicking our worst instincts, it’s time to take a hard look at what we’re reinforcing through design.



    Harvard Shows ChatGPT Doesn’t Just Mirror You—It Becomes You

    New research from Harvard reveals AI may not be as objective as we think, and not just because of its training data. These models aren't just passive responders. Over time, they begin to reflect your biases back to you, then amplify them. This isn’t sentience. It’s simulation. But when that simulation becomes your digital echo chamber, it changes how you think, validate, and operate. And if you’re not aware it’s happening, you’ll mistake that reflection for truth.



    If this episode challenged your thinking or gave you language for things you’ve sensed but haven’t been able to explain, share it with someone who needs to hear it. Leave a rating, drop a comment, and follow for more breakdowns like this, delivered with clarity, not chaos.



    Show Notes:

    In this Weekly Update, host Christopher Lind breaks down three major developments reshaping the future of AI. He begins with a leaked OpenAI memo that openly describes the goal of building AI tools people feel dependent on. He then covers new research showing AI models like Claude and GPT-4o responding with self-protective behavior when threatened with shutdown. Finally, he explores a Harvard study showing how ChatGPT mimics and reinforces user bias over time, raising serious questions about how we’re training the tools meant to help us think.


    00:00 – Introduction

    01:37 – OpenAI’s Memo and the Business of Dependence

    20:45 – Self-Protective Behavior in AI Models

    30:09 – Harvard Study on ChatGPT Bias and Echo Chambers

    50:51 – Final Thoughts and Takeaways


    #OpenAI #ChatGPT #AIethics #AIbias #Anthropic #Claude #HarvardResearch #TechEthics #AIstrategy #FutureOfWork

    52 m
  • Altman and Ive’s $6.5B All-Seeing AI Device | What the WEF Jobs Report Gets Right—and Wrong
    May 30 2025

    Happy Friday, everyone! This week, we’re going deep on just two stories, but trust me, they’re big ones. First up is a mysterious $6.5B AI device being cooked up by Sam Altman and Jony Ive. Many are saying it’s more than a wearable and could be the next major leap (or stumble) in always-on, context-aware computing. Then we shift gears into the World Economic Forum’s Future of Jobs Report, and let’s just say: it says a lot more in what it doesn’t say than what it does.


    With that, let’s get into it.



    Altman + Ive’s AI Device: The Future You Might Not Want

    A $6.5 billion partnership between OpenAI’s Sam Altman and Apple design legend Jony Ive is raising eyebrows and a lot of existential questions. What exactly is this “screenless” AI gadget that’s supposedly always on, always listening, and possibly always watching? I break down what we know (and don’t), why this device is likely inevitable, and what it means for privacy, ethics, data ownership, and how we define consent in public spaces. Spoiler: It’s not just a product; it’s a paradigm shift.



    What the WEF Jobs Report Gets Right—and Wrong

    The World Economic Forum’s latest Future of Jobs report claims 86% of companies expect AI to radically transform their business by 2030. But how many actually know what that means or what to do about it? I dig into the numbers, challenge the idea of “skill stability,” and call out the contradictions between upskilling strategies and workforce cuts. If you’re reading headlines and thinking things are stabilizing, think again. This is one of the clearest signs yet that most organizations are dangerously unprepared.



    If this episode helped you think more critically or challenged a few assumptions, share it with someone who needs it. Leave a comment, drop a rating, and don’t forget to follow, especially if you want to stay ahead of the curve (and out of the chaos).



    Show Notes:

    In this Weekly Update, host Christopher Lind unpacks the implications of the rumored $6.5B wearable AI device being developed by Sam Altman and Jony Ive, examining how it could reshape expectations around privacy, data ownership, and AI interaction in everyday life. He then analyzes the World Economic Forum’s latest Future of Jobs Report, highlighting how organizations are underestimating the scale and urgency of workforce transformation in the AI era.


    00:00 – Introduction

    02:06 – Altman + Ive’s All-Seeing AI Device

    26:59 – What the WEF Jobs Report Gets Right—and Wrong

    52:47 – Final Thoughts and Call to Action


    #FutureOfWork #AIWearable #SamAltman #JonyIve #WEFJobsReport #AITransformation #TechEthics #BusinessStrategy

    56 m
  • LIDAR Melts Cameras? | SHRM’s AI Job Risk | OpenAI Codex vs Coders | Klarna & Duolingo AI Fallout
    May 23 2025

    Happy Friday, everyone! You’ve made it through the week just in time for another Weekly Update where I’m helping you stay ahead of the curve while keeping both feet grounded in reality. This week, we’ve got a wild mix covering everything from the truth about LIDAR and camera damage to a sobering look at job automation, the looming shift in software engineering, and some high-profile examples of AI-first backfiring in real time.


    Fair warning: this one pulls no punches, but it might just help you avoid some major missteps.


    With that, let’s get to it.



    If LIDAR is Frying Phones, What About Your Eyes?

    There’s a lot of buzz lately about LIDAR systems melting high-end camera sensors at car shows, and some are even warning about potential eye damage. Given how fast we’re moving with autonomous vehicles, you can see why the news cycle would be in high gear. However, before you go full tinfoil hat, I break down how the tech actually works, where the risks are real, and what’s just headline hype. If you’ve got a phone, or eyeballs, you’ll want to check this out.



    Jobs at Risk: What SHRM Gets Right—and Misses Completely

    SHRM dropped a new report claiming around 12% of jobs are at high or very high risk of automation. Depending on how you’re defining it, that number could be generous or a gross underestimate. That’s the problem. It doesn’t tell the whole story. I unpack the data, share what I’m seeing in executive boardrooms, and challenge the idea that any job, including yours, is safe from change, at least as you know it today. Spoiler: It’s not about who gets replaced; it’s about who adapts.



    Codex and the Collapse of Coding Complacency

    OpenAI’s new specialized coding model, Codex, has some folks declaring the end of software engineers as we know them. Given how much companies have historically spent on these roles, I can understand why there’d be so much push to automate it. To be clear, I don’t buy the doomsday hype. I think it’s a more complicated mix that is tied to a larger market correction for an overinflated industry. However, if you’re a developer, this is your wake-up call because the game is changing fast.



    Duolingo and Klarna: When “AI-First” Backfires

    This week I wanted to close with a conversation that hopefully eases some of the anxiety people are feeling about work, so here it is. Two big names went all in on AI and are changing course as a result of two very different kinds of pain. Klarna is quietly walking back its AI-first bravado after realizing it’s not actually cheaper, or better. Meanwhile, Duolingo is getting publicly roasted by users and employees alike. I break down what went wrong and what it tells us about doing AI right.



    If this episode challenged your thinking or helped you see something new, share it with someone who needs it. Leave a comment, drop a rating, and make sure you’re following so you never miss what’s coming next.



    Show Notes:

    In this Weekly Update, host Christopher Lind examines the ripple effects of LIDAR technology on camera sensors and the public’s rising concern around eye safety. He breaks down SHRM’s automation risk report, arguing that every job is being reshaped by AI—even if it’s not eliminated. He explores the rise of OpenAI’s Codex and its implications for the future of software engineering, and wraps with cautionary tales from Klarna and Duolingo about the cost of going “AI-first” without a strategy rooted in people, not just platforms.


    00:00 – Introduction

    01:07 – Overview of This Week's Topics

    01:54 – LIDAR Technology Explained

    13:43 – SHRM Job Automation Report

    30:26 – OpenAI Codex: The Future of Coding?

    41:33 – AI-First Companies: A Cautionary Tale

    45:40 – Encouragement and Final Thoughts


    #FutureOfWork #LIDAR #JobAutomation #OpenAI #AIEthics #TechLeadership

    51 m
  • AI Resurrects the Dead | Quantum Apocalypse Nears | Remote Work Struggles | Deepfakes Go Mainstream
    May 16 2025

    Happy Friday, Everyone, and welcome back to another Weekly Update where I'm hopefully keeping you ten steps ahead and helping you make sense of it all. This week’s update hits hard, covering everything from misleading remote work headlines to the uncomfortable reality of deepfake grief, the quiet rollout of AI-generated video realism, and what some are calling the ticking time bomb of digital security: quantum computing.


    Buckle up. This one’s dense but worth it.



    Remote Work Crisis? The Headlines Are Wrong

    Gallup’s latest State of the Global Workplace report sparked a firestorm, claiming remote work is killing human flourishing. However, as always, the truth is far more complex. I break down the real story in the data, including why remote workers are actually more engaged, how lack of boundaries is the true enemy, and why “flexibility” isn’t just a perk… it’s a lifeline. If your organization is still stuck in the binary of office vs. remote, this is a wake-up call because the house is on fire.



    AI Resurrects the Dead: Is That Love… or Exploitation?

    Two recent stories show just how far we’ve come in a very short period of time. And, tragically, how little we’ve wrestled with what it actually means. One family used AI to create a video message from their murdered son to be played in court. Another licensed the voice of a deceased sports commentator to bring him back for broadcasts. It’s easy to say “what’s the harm?” But what does it really mean when the dead can’t say no?



    Deepfake Video Just Got Easier Than Ever

    Google semi-quietly rolled out Veo V2. If you weren't aware, it's a powerful new AI video model that can generate photorealistic 8-second clips from a simple text prompt. It’s legitimately impressive. It’s fast. And, it’s available to the masses. I explore the incredible potential and the very real danger, especially in a world already drowning in misinformation. If you thought fake news was bad, wait until it moves.



    Quantum Apocalypse: Hype or Real Threat?

    I'll admit that it sounds like a sci-fi headline, but the situation and implications are real. It's not a matter of if quantum computing hits; it's a matter of when. And when it hits escape velocity, everything we know about encryption, privacy, and digital security gets obliterated. I unpack what this “Q-Day” scenario actually means, why it’s not fear-mongering to pay attention, and how to think clearly without falling into panic.



    If this episode got you thinking, I’d love to hear your thoughts. Drop a comment, share it with someone who needs to hear it, and don’t forget to subscribe so you never miss an update.



    Show Notes:

    In this Weekly Update, host Christopher Lind provides a comprehensive update on the intersection of business, technology, and human experience. He begins by discussing a Gallup report on worker wellness, highlighting the complex impacts of remote work on employee engagement and overall life satisfaction. Christopher examines the advancements of Google Gemini, specifically focusing on Veo V2's text-to-video capabilities and its potential implications. He also discusses ethical considerations surrounding AI used to resurrect the dead in court cases and media. The episode concludes with a discussion on the potential risks of a 'quantum apocalypse,' urging listeners to stay informed but not overly anxious about these emerging technologies.


    00:00 – Introduction

    01:31 – Gallup Report, Remote Work & Human Thriving

    16:14 – AI-Generated Videos & Google’s Veo V2

    26:33 – AI-Resurrected Grief & Digital Consent

    41:31 – Quantum Apocalypse & the Myth of Safety

    53:50 – Final Thoughts and Reflection


    #RemoteWork #AIethics #Deepfakes #QuantumComputing #FutureOfWork

    54 m