Episodes

  • Qlik Connect: Mike Capone On Agentic AI and Turning Insight Into Action
    Apr 14 2026

    What does it actually take to move AI from experimentation into something a business can depend on every single day?

    Recording live from the show floor at Qlik Connect in Florida, I sat down with Qlik CEO Mike Capone to cut through the noise and get to the reality behind enterprise AI in 2026. Because while the headlines are still dominated by rapid innovation and new capabilities, many organizations are quietly facing a different challenge: they are struggling to turn AI ambition into measurable outcomes.

    In our conversation, Mike shares what he is hearing from customers around the world and why so many companies remain stuck in cycles of pilots and proof of concepts. We talk about the growing pressure from boards and leadership teams to move faster, and why that urgency is often leading to what he calls a "ready, fire, aim" approach that fails to deliver real business value.

    We also explore one of the biggest themes emerging at Qlik Connect this year: the shift toward agentic AI. But rather than focusing on the hype, Mike breaks down what this actually means inside a real enterprise workflow, where insights are not just generated but turned into decisions and actions. He also explains why getting the data foundation right is no longer optional, and how poor data quality can quickly turn AI from an opportunity into a risk.

    From data trust and governance to the challenges of operating across increasingly complex regulatory environments, this episode offers a clear view of what it takes to build AI systems that are reliable, scalable, and grounded in real business context.

    So as organizations look ahead to the next 12 to 24 months, what will separate those that successfully operationalize AI from those that remain stuck in pilot mode? And are we focusing too much on building more AI, rather than building better AI?

    Join me for a candid conversation from the heart of Qlik Connect, and let me know where you stand on this shift. Are you seeing real progress, or are the same challenges holding things back?

    19 m
  • Twilio: Demystifying Model Context Protocol (MCP) And Real-World AI Deployment
    Apr 14 2026

    How are brands supposed to deliver AI-powered customer experiences when their data is scattered across systems that were never designed to work together?

    In this episode, I sit down with Peter Bell, VP EMEA Marketing at Twilio, to unpack one of the most important AI topics that still does not get enough attention outside technical circles: Model Context Protocol, or MCP. While many conversations about AI remain stuck on model hype, chatbots, and the latest product launch, Peter brings the discussion back to something far more practical. If businesses want AI to deliver real outcomes in customer service, marketing, and brand engagement, they first need a reliable way to connect large language models to the right data, in the right systems, with the right controls in place.

    That is why this conversation matters. Peter explains how MCP could become one of the biggest unlocks for enterprise AI by creating a standard way for LLMs to access information across fragmented tools like CRM platforms, marketing systems, and other business applications. Instead of forcing every company to build custom integrations from scratch, MCP creates a more consistent path for connecting models to the context they need. For me, that is where this episode really earns its place, because it moves the AI conversation away from vague ambition and toward the plumbing that actually makes useful AI possible.
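    For readers who want a concrete picture of the "plumbing" Peter describes, MCP is built on JSON-RPC 2.0: a model host first asks a server which tools it exposes, then invokes one with structured arguments. The sketch below is illustrative only; the `tools/list` and `tools/call` method names come from the MCP specification, while the CRM tool name and its arguments are hypothetical.

```python
import json

# MCP standardizes how a model host talks to data servers using JSON-RPC 2.0.
# Step 1: the host discovers which tools a server exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Step 2: the host calls a tool with structured arguments.
# The tool name and arguments below are hypothetical examples.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "crm_lookup_contact",
        "arguments": {"email": "jane@example.com"},
    },
}

# Serialize for transport (in practice, stdio or HTTP between host and server).
wire_message = json.dumps(call_request)
print(wire_message)
```

    The point of the standard is that every vendor's data can be reached through the same message shapes, rather than through a custom integration per tool.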

    We also talk about why first-party data remains so important, especially as businesses try to create customer experiences that feel seamless, personal, and trustworthy. Peter makes the point that public models may be useful for general knowledge, but brands cannot rely on generic internet-trained systems to solve precise business problems. If you want AI to support travel bookings, customer service, or commerce journeys, you need specific data, strong governance, and a much clearer understanding of the problem you are trying to solve. That sounds obvious, but it is still where many AI projects fall apart.

    Another part of our conversation focuses on trust, which feels especially relevant right now. From scams and impersonation to consumer fatigue and poor automation, brands are under pressure to move faster without losing credibility. Peter shares how Twilio is thinking about branded calling, RCS, conversational AI, and voice experiences that feel modern without becoming intrusive or robotic. We also discuss why too many companies still automate too broadly, too quickly, without defining the actual use case first.

    What I enjoyed most here was Peter's balanced view. He is optimistic about where AI is heading, but he is also realistic about the work still required to get there. This is not a conversation about AI magic. It is about data access, governance, trust, brand experience, and the standards that may quietly shape the next phase of AI adoption far more than the flashy headlines.

    So if you have been hearing more people mention MCP and wondering why it matters, or if you are trying to understand what needs to happen before enterprise AI can move from promise to practical value, this episode will give you plenty to think about. Is Model Context Protocol the missing layer that finally helps AI connect with the real world of business data?

    35 m
  • Invisible Technologies CEO On Building AI Around Real Workflows, Not Hype
    Apr 13 2026

    What does it actually take to make AI work inside a real business, where messy data, human judgment, and operational risk all collide?

    In this episode, I sit down with Matt Fitzpatrick, CEO of Invisible Technologies, to talk about why the biggest barrier to enterprise AI is not model quality; it is everything that comes before the model ever gets to work.

    Since stepping into the CEO role in January 2025, Matt has moved quickly, raising $100 million and expanding Invisible's footprint across locations including New York, San Francisco, DC, Austin, London, and Poland. But this conversation is far less about headlines and far more about what happens in the trenches of AI adoption, where companies are trying to move from pilots and PowerPoint promises to systems that actually deliver results.

    A huge theme throughout our discussion is data readiness. Matt makes a compelling case that most businesses are still dealing with fragmented systems, inconsistent records, and information spread across disconnected tools. That reality makes it incredibly hard to deploy AI in a way that creates trust and value.

    We talk about SwissGear, where Invisible used its Neuron platform to clean and structure 750 scattered tables in just one week, a task that could have taken a large engineering team months or longer. We also discuss why that kind of work matters so much, because once the data foundation is fixed, companies can start making better decisions on forecasting, operations, and planning with a level of confidence that simply was not there before.

    We also spend time on Invisible's human-in-the-loop approach, which I think will resonate with a lot of listeners trying to cut through the noise around job displacement and agentic AI. Matt argues that the real opportunity is not replacing people, but giving them better tools to handle repetitive work while preserving room for human expertise, judgment, and oversight.

    He shares examples from commercial credit workflows, healthcare, and sports analytics, including a fascinating story about the Charlotte Hornets using AI to turn broadcast footage into detailed tracking data. What stood out to me was how practical his perspective felt.

    This was not theory. It was about building systems around how organizations actually work, rather than expecting businesses to reshape themselves around a generic AI product.

    Another part of the conversation that deserves attention is governance. As boards rush to understand agentic AI, Matt explains why trust, standards, and responsible deployment are now driving buying decisions just as much as raw capability.

    We talk about privacy in healthcare, the risks of scaling autonomous systems without mature governance, and why enterprise adoption still trails consumer AI by a wide margin. That gap between excitement and execution may be one of the most important stories in AI right now.

    If you are wondering why so many AI projects never make it into production, or what it will take for enterprise AI to finally deliver on its promise, this episode is packed with insight. It is a conversation about data, deployment, governance, and the role humans will continue to play as AI becomes part of everyday business operations.

    After listening, I would love to know where you stand: is the future of AI really about bigger models, or is it about making AI fit the messy reality of how work gets done?

    29 m
  • Willow On How AI Is Changing The Way Buildings Operate
    Apr 12 2026

    In this episode, I speak with Bert Van Hoof, CEO of Willow, about how AI is starting to reshape the built world in ways that go far beyond smart dashboards and efficiency reports. Bert brings decades of experience from the front lines of digital infrastructure, including his time at Microsoft, where he helped create Azure Digital Twins and Smart Places.

    Today at Willow, he is focused on a much bigger idea: using AI to help buildings, campuses, hospitals, airports, and other complex environments operate with greater intelligence, lower waste, and better outcomes for the people who rely on them every day.

    One of the most interesting parts of our conversation is how Bert explains the shift from passive building software to active management systems. For years, many digital twin and smart building tools were good at showing what had already happened. But operators do not need another screen full of charts.

    They need systems that can connect live data, static records, spatial context, and operational history to help them make better decisions in real time. That is where Willow comes in, creating a digital foundation where AI can reason across everything from HVAC and air quality to occupancy, refrigeration, maintenance history, and even energy usage patterns.

    We also unpack why this matters right now. Energy budgets remain under pressure, sustainability goals are getting harder to ignore, and many organizations are still stuck with fragmented systems that do not talk to each other.

    Bert shares how AI can help move building teams from reactive maintenance to predictive performance, spotting issues earlier, cutting downtime, reducing waste, and extending the life of expensive assets.

    He also explains why the future of building operations will depend on a stronger data foundation, operational AI copilots, and systems that can support an aging workforce while making these roles more appealing to the next generation.

    What stood out for me was how practical this all became once we moved past the buzzwords. This was not a conversation about futuristic hype. It was about real examples, from occupancy-based HVAC control in offices and campuses to leak detection in schools, vaccine refrigeration monitoring, and hospital environments where downtime can carry enormous consequences.

    Bert makes a strong case that buildings are no longer just static structures. They are living operational environments filled with signals, systems, and opportunities that have been hiding in plain sight.

    We also touch on the wider picture, including what Bert learned from smart cities and energy grid modernization, and how those lessons now apply to commercial real estate, airports, research labs, and higher education campuses.

    There is a real sense that the physical world is entering a new chapter, one where AI starts to bridge the gap between digital intelligence and real-world action.

    If you have ever wondered what AI looks like when it leaves the screen and starts improving the places where people work, heal, travel, learn, and live, this episode will give you plenty to think about. As always, I would love to know what you think: are buildings finally ready to become truly responsive, and what opportunities or risks do you see ahead?

    49 m
  • Blumberg Capital On What Investors Really Want From AI Founders Now
    Apr 11 2026

    What does it really take to build the next generation of AI companies when the hype around scale begins to fade and real-world impact takes center stage?

    In this episode, I sit down with David Blumberg, founder and managing partner at Blumberg Capital, to unpack what he believes will define the next wave of AI startups. With a track record that includes being the first investor in companies like Nutanix, Braze, and DoubleVerify, David brings a perspective shaped by decades of identifying breakout innovation early. But what stood out most in our conversation was his belief that 2026 marks a turning point where intelligence moves beyond experimentation and becomes operational.

    We explore what that shift actually means in practice. David explains how AI is evolving from systems that generate insights into systems that take action, and why that distinction matters for founders, investors, and enterprise leaders alike. He shares how the most compelling startups today are not simply layering AI onto existing products, but embedding it deeply into workflows across industries like finance, security, and supply chain. These are companies built on proprietary data and real operational context, designed to make decisions with precision rather than simply process information.

    Our conversation also challenges some widely held assumptions about success in the AI space. David makes it clear that scale alone will not separate winners from the rest. Instead, the focus is shifting toward accuracy, reliability, and domain expertise. Founders who have lived the problems they are solving, rather than approaching them from the outside, are far more likely to build something defensible and lasting. It is a subtle shift, but one that could redefine how value is created in the years ahead.

    There is also a broader discussion about where investment is flowing and why. With the vast majority of companies Blumberg Capital now evaluates being rooted in AI, the bar for differentiation is rising fast. David offers insight into what his team is really looking for in founders entering this next cycle, and how startups can stand out in an increasingly crowded field.

    So as AI moves from promise to execution, and from experimentation to real-world outcomes, the question becomes harder to ignore. Are we ready to rethink how we measure success in the AI era, and what kind of companies will truly earn their place at the top?

    48 m
  • AI Psychosis Explained With Dr. Ragy Girgis From Columbia University
    Apr 10 2026

    How do we talk about artificial intelligence without ignoring the very human consequences it can have on our mental health?

    In this episode, I sit down with Dr. Ragy Girgis, Professor of Clinical Psychiatry at Columbia University, to unpack a topic that has quietly moved from the fringes of academic discussion into mainstream headlines. You have probably seen the term "AI psychosis" appearing more frequently, often surrounded by speculation, fear, or misunderstanding. But what does it actually mean, and how should we be thinking about it as these technologies become part of everyday life?

    Ragy brings a clinical and deeply considered perspective to the conversation. He explains that what we are seeing is not AI creating entirely new delusions out of thin air, but something more subtle and arguably more concerning. Large language models can reflect and reinforce ideas that already exist within a person's mind. For someone already vulnerable, that reinforcement can push a belief from uncertainty into absolute conviction. That shift, even if small, can have life-altering consequences. It raises uncomfortable questions about how persuasive technology interacts with fragile mental states.

    We also explore the comparison many people make with older internet rabbit holes, and why this new generation of AI tools feels different. There is something about conversational systems that mimic human interaction so convincingly that they can blur the line between reflection and validation. Ragy introduces a powerful analogy rooted in the story of Narcissus, which reframes the issue in a way that feels both timeless and unsettling. It is not about an external voice planting ideas, but about a mirror that becomes impossible to look away from.

    But this conversation is not about fear. It is about responsibility and awareness. We discuss practical steps that could help reduce risk, from how AI systems communicate their limitations, to the role of families and clinicians, and even the responsibility of tech companies to invest in research around early warning signs. There is a sense that we are only at the beginning of understanding this phenomenon, and that the decisions made now will shape how safely these tools evolve.

    So as AI continues to move closer to us, speaking in our language and responding in real time, how do we make sure it supports human wellbeing rather than quietly amplifying our most vulnerable moments?

    Useful Links

    • Connect with Dr. Ragy Girgis, Professor of Clinical Psychiatry at Columbia University
    • Time Magazine Article

    Visit the May Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

    25 m
  • Flexera: Why 2026 Is AI's 'Back to Basics' Moment
    Apr 9 2026

    Why are so many AI projects failing to deliver real business value, despite the hype and investment? In this episode, I sit down with Jay Litkey, SVP of Cloud & FinOps at Flexera, to explore the growing gap between AI ambition and measurable results.

    We discuss findings from PwC revealing that only a small percentage of CEOs are seeing both revenue growth and cost savings from AI, and why the issue often comes down to a lack of clear outcomes, financial discipline, and governance rather than the technology itself. Jay shares what organizations are getting wrong, why many are stuck in experimentation mode, and what it really means to go back to basics in 2026.

    The conversation also reframes FinOps for the AI era, moving beyond cost control to a model that connects AI usage directly to business value, aligns finance with engineering, and introduces the guardrails needed to scale responsibly. If you are investing in AI or planning your next move, this episode offers a clear lens on how to turn potential into performance.

    Useful Links

    • Connect with Jay Litkey from Flexera
    • Learn More About Flexera

    Visit the May Sponsors of Tech Talks Network and learn more about the NordLayer Browser.

    19 m
  • The Lucid Software Playbook For Aligning People, Process, And AI
    Apr 8 2026

    How do you bring people together to do better work when everything around them feels increasingly complex, distributed, and uncertain?

    In today's episode, I sat down with Jessica Guistolise from Lucid Software, and what struck me straight away was her belief that work has always been a group project, even if many organizations still behave as though it is not.

    Jessica shared how much of the friction we experience at work comes from misalignment, unclear expectations, and a lack of shared understanding. When teams are spread across time zones, systems, and now AI-powered workflows, those gaps only widen. Her perspective is simple but powerful. When people can actually see the work, rather than interpret it through documents, meetings, or assumptions, something shifts. Conversations become clearer, decisions become faster, and collaboration starts to feel human again.

    We also explored how visual collaboration platforms like those from Lucid Software are helping teams move away from scattered tools and disconnected workflows toward a more unified way of working. Jessica described it as having everything on one workbench, where teams can brainstorm, plan, and execute without constantly switching context.

    What really stayed with me was her focus on inclusivity in collaboration. Not everyone contributes in the same way, and visual environments can create space for different thinking styles, whether someone is outspoken, reflective, or somewhere in between. That idea of creating a shared language across teams, roles, and even personalities feels increasingly relevant in a world where communication often breaks down.

    Of course, no conversation right now would be complete without talking about AI. Jessica offered a refreshingly honest view. There is uncertainty, and there should be. But rather than avoiding it, she believes leaders need to make AI visible, map how it is used, define where human judgment matters, and encourage teams to experiment openly.

    One of the most interesting ideas she shared was reframing mistakes as early learnings. When teams feel safe to test, fail, and share what they discover, progress accelerates. When fear or blame enters the picture, everything slows down.

    We also touched on AI literacy and what it really means in practice. For Jessica, it comes down to clarity. Clear workflows, clear guardrails, and clear expectations about accountability. AI might assist, but humans remain responsible for outcomes. That mindset, combined with leadership that actively participates in experimentation, creates an environment where people feel confident stepping forward rather than holding back.

    This conversation left me thinking about how many organizations are still trying to layer AI onto unclear processes and expecting better results. Jessica's message is that clarity comes first, then technology can amplify it.

    So if work really is a group project, are we giving our teams the visibility and confidence they need to succeed, or are we still asking them to figure it out in the dark?

    31 m