Episodes

  • From Data Overload To Decision Advantage: Inside Anticipatory Intelligence with Ansel Stein
    Feb 28 2026

    In this episode, I'm joined by Ansel Stein, Vice President of Operations at Crisis24, and the leader behind AiiA powered by Palantir, an intelligence platform built to help executives cut through noise and make better calls in uncertain conditions.

    Ansel's background spans more than two decades across analysis, diplomacy, and high-stakes advisory work, including supporting U.S. national security priorities. Today, he's applying that same discipline to the private sector, helping organizations turn overwhelming streams of information into judgment leaders can actually use.

    We talk about what "intelligence" really means in this context, and why it's different from collecting more data or running another monitoring program. Ansel breaks down the thinking behind the AiiA President's Brief, inspired by the kind of concise, high-rigor briefings senior government leaders rely on, and explains how that model translates into business decision-making without losing context or nuance. If you have ever felt buried by alerts, headlines, and competing narratives, this conversation puts language around that problem and offers a practical alternative.

    We also address the concerns many leaders have about AI, privacy, and the fear of being tracked. Ansel is clear about the boundaries: what data AiiA uses, why open-source intelligence matters, and how governance needs to be designed upfront if trust is going to hold. From structured analytic techniques and scenario planning to the idea that risk and opportunity often sit side by side, this episode is a look at how organizations can move from reacting to anticipating, without handing accountability over to a machine.

    If your team is trying to shorten the time from signal to decision while still protecting trust, what would it look like to treat intelligence as a leadership habit rather than a crisis tool, and are you ready to build that muscle before the next disruption hits?

    24 m
  • From FBI Gag Order To Privacy-First Telco: The Nicholas Merrill Story
    Feb 28 2026

    How did a routine request from the FBI turn into a decade-long legal battle that helped reshape modern privacy law and ultimately inspire a new kind of mobile network?

    In this episode, I sit down with Nicholas Merrill, founder of Phreeli and one of the most influential yet often under-recognized figures in the fight for digital rights. Long before privacy became a mainstream talking point, Nick was running an internet service provider that powered major global brands. That journey took a dramatic turn in 2004 when he became the first person to challenge the constitutionality of a National Security Letter under the Patriot Act, living under a gag order for years while the case unfolded. What followed was a deeply personal and professional transformation that led him to question whether litigation and legislation alone could ever keep pace with the scale of modern surveillance.

    We explore how that experience pushed him toward a third path, building privacy directly into technology itself. From launching the Calyx Institute and developing privacy-focused Android software to raising a multi-million-dollar endowment for digital rights, Nick has spent decades turning principles into practical tools. Now, with Phreeli, he is taking that philosophy into one of the most data-hungry industries of all, mobile telecoms, reimagining what a carrier looks like when it is designed to know as little about its customers as possible.

    Our conversation also tackles the shifting balance of power between governments and corporations in the data economy, and why the distinction between the two is becoming increasingly blurred. Nick explains the trade-offs involved in building a privacy-first operator in a heavily regulated market, the cryptographic thinking behind Phreeli's double-blind architecture, and why he believes consent and personal agency should sit at the center of the digital experience.

    This is a story about resistance, resilience, and the belief that technology can be used to restore choice rather than quietly remove it. It is also a timely reminder that privacy is not an abstract concept for activists and engineers, but something as familiar as closing the curtains in your own home.

    So after three decades on the front lines of this debate, what does Nick think most of us still misunderstand about our digital rights, and what single shift in mindset could change how we all approach privacy in the connected world?

    29 m
  • AI Fraud vs AI Scams: Alloy CEO Tommy Nicholas Explains The Difference
    Feb 27 2026

    Have you noticed how every week brings a new headline about AI-driven fraud, yet it still feels hard to tell what is real risk and what is noise?

    In this Tech Talks Daily episode, I'm joined by Tommy Nicholas, CEO of Alloy, for a candid conversation that cuts through the fear-driven commentary and gets into what fraud teams are actually dealing with right now.

    We start with a simple but important distinction that gets blurred all the time. Tommy separates classic "fraud," where institutions take the hit, from "scams," where individuals are manipulated into handing over money or access. That framing changes how you think about solutions, accountability, and where AI is making things worse.

    Tommy also shares why he believes fraud losses are often massively underreported. It is not because people are trying to hide the truth, but because organizations rarely have a single, clean view of losses across every product line and channel.

    Add messy labeling and split ownership across teams, and reporting becomes a best-effort estimate rather than an objective number. That reality matters if you're building board-level narratives, budgets, or risk models on top of survey data.

    From there, we talk about what organizations are getting right. Tommy argues there is no magical "undetectable" attack that forces teams to give up, but there is a very real breakdown happening in old fallbacks, especially human review of images and video.

    The bigger shift he sees is banks and fintechs finally pushing for consistent tooling across every channel (web, mobile, branch, call center, support tickets), because fraud does not respect internal org charts.

    We then get into why Alloy's AI Assistant is an interesting signal for where agentic AI is heading in regulated work. Tommy explains that agents are only useful when they have rigorous context, strong sources of truth, and clear workflows.

    Otherwise they guess, and "looks good" is not the same as "safe to run in production." He also lays out where agents can genuinely outperform humans, like scaling investigations during sudden surges, while keeping processes auditable and repeatable.

    We close by looking ahead at agentic commerce, and why Tommy thinks the breakthrough will arrive through weird, emergent behavior rather than a neat protocol rollout.

    When you listen back, do you think the next big leap in fraud prevention will come from better models, better data, or better operational discipline, and what would you bet on if your own customers were the ones on the line?

    54 m
  • How Lenovo Is Preparing Classrooms For The AI Era
    Feb 26 2026

    How do you prepare an entire generation for a world where AI is already shaping how we work, create, and solve problems?

    In this episode of Tech Talks Daily, I'm joined by Dr. Tara Nattrass, Chief Innovation Strategist for Education at Lenovo, for a grounded and thoughtful conversation about what responsible AI integration really looks like in K–12 classrooms.

    Tara brings more than 25 years of experience inside school districts, including serving as Assistant Superintendent for Teaching and Learning in Arlington Public Schools, so this isn't a theory-led discussion. It's informed by lived experience.

    We explore how the conversation has shifted over the past 18 months. AI has been present in schools for years through adaptive software and analytics, but the arrival of generative and now agentic AI tools has accelerated everything. As Tara explains, the debate is no longer about whether AI should be in schools. It's about how to approach it responsibly, strategically, and in ways that genuinely improve learning outcomes.

    A big theme in our conversation is AI literacy. Tara breaks this down in practical terms, moving beyond technical understanding to include critical thinking, creativity, collaboration, and the ability to evaluate risk and bias. She shares real examples of students designing AI tools to solve problems in their communities, shifting the focus from passive consumption to active creation.

    We also talk about infrastructure readiness. Many school systems have bold ambitions around AI, but there is often a gap between vision and technical capability. AI-ready devices, intelligent infrastructure, cybersecurity, and data governance all play a role in making innovation sustainable rather than experimental.

    Lenovo's approach, as Tara describes it, centers on building education ecosystems rather than simply refreshing hardware.

    There is also a careful balance to strike between innovation, privacy, and inclusion. From hybrid AI models to questions around where data is stored and who can access it, schools are navigating complex decisions. Tara shares how Lenovo partners with districts, policymakers, and organizations such as ISTE and ASCD to align infrastructure, professional learning, and governance frameworks.

    Looking ahead, we discuss what will separate school systems that truly benefit from AI from those that simply layer new tools onto old teaching models. Vision, educator upskilling, cybersecurity, and rethinking assessment all feature prominently in her answer.

    If you are working in education, technology leadership, or policy, this conversation offers a practical view of how AI-ready classrooms are being built today and what still needs to happen next.

    As always, I'd love to hear your thoughts. How is AI reshaping learning in your organization, and are you ready for what comes next?

    31 m
  • ServiceNow, Dynatrace And The Future Of End-To-End IT Autonomy
    Feb 25 2026

    What does autonomous IT really look like when you move beyond the slideware and start wiring systems together in the real world?

    At Dynatrace Perform in Las Vegas, I sat down with Pablo Stern, EVP and GM of Technology Workflow Products at ServiceNow, to unpack exactly that. Pablo leads the teams focused on CIOs and CISOs, building the workflows and security products that sit at the heart of modern IT organizations. From service desks and command centers to risk and asset management, his remit is clear: enable AI to work for people, not the other way around.

    We began with ServiceNow's deepening multi-year partnership with Dynatrace. While the announcement made headlines, Pablo was quick to point out that the real story starts with customers. This collaboration is rooted in a shared goal of helping joint customers reduce outages, improve SLA adherence, and shrink mean time to resolution. The vision of autonomous IT operations is not about hype. It is about connecting observability data with deterministic workflows so that insight can evolve into coordinated, system-level action.

    Pablo walked me through the maturity curve he sees emerging. First came AI-powered insight, summarizing data and surfacing signals from noise. Then came task automation: drafting knowledge articles, paging teams, and triggering predefined playbooks. The next step, and the one that excites him most, is orchestrated autonomy. That means stitching together skills, agents, and workflows into systems that can drive end-to-end outcomes. It is a journey measured in years, not months, and it depends as much on digitizing process and building trust as it does on technology.

    We also explored root cause analysis, still one of the biggest time drains in IT. By combining Dynatrace's AI-driven observability with ServiceNow's workflow engine, enterprises can automate forensic steps, correlate events faster, and shorten the time spent on major incident bridges where teams debate ownership. Even incremental improvements in accuracy can save hours when incidents strike.

    Trust, of course, remains central. Pablo was candid that full self-healing systems are still some distance away. What we will see first is relief automation: controlled failovers and scripted actions suggested by machines but approved by humans. Over time, as confidence grows and processes become fully digitized, the balance will shift.

    Beyond the technology, a consistent theme ran through our conversation. Outcomes have not changed. Enterprises still want higher availability, faster resolution, better employee experiences. What is changing is the how. ServiceNow is reimagining its platform to deliver those outcomes at a much higher standard, not through incremental tweaks, but through rethinking workflows for an AI-first world.

    From design partnerships with banks building pre-flight change checks, to internal teams acting as the toughest customers, this was a grounded, practical conversation about where autonomous operations are headed and what it will take to get there.

    If you are a CIO, CISO, or IT leader wondering how to move from theory to execution, this episode offers a clear-eyed look behind the curtain.

    30 m
  • Scrut Automation And The Security Blind Spot Facing The 99%
    Feb 24 2026

    What happens when nearly half of organizations admit they have no AI-specific security controls, yet AI-driven data leaks are accelerating at the same time?

    In this episode of Tech Talks Daily, I spoke with Aayush Choudhry, CEO and co-founder of Scrut Automation, about what he sees as a blind spot in the cybersecurity industry. While much of the market continues to design tools for Fortune 500 enterprises with deep pockets and large security teams, Aayush argues that the real existential risk sits with the 99 percent of businesses that cannot survive a serious breach.

    Aayush brings a founder's perspective shaped by firsthand pain. Before launching Scrut, he and his co-founder experienced the grind of managing compliance and security as a cloud-native startup trying to sell into enterprises. They were outsiders to GRC and security at the time, forced to learn from first principles. That experience became the foundation for Scrut Automation, a modern GRC platform built specifically for small and mid-sized companies that cannot afford six-month implementations, armies of consultants, or half-million-dollar tooling budgets.

    We explore why treating compliance and security as separate functions increases risk for smaller organizations. In the mid-market, the same small team is often responsible for both. When compliance is handled as a box-ticking exercise and security as a separate technical discipline, gaps emerge. Scrut's approach converges governance, risk, and security signals into a unified layer that translates hundreds of technical alerts into context-aware risks that actually matter to the business.

    Our conversation also tackles AI complacency. Using the classic confidentiality, integrity, and availability framework, Aayush outlines what minimum viable AI security hygiene looks like in practice. That includes ensuring AI agents are not over-privileged compared to the humans they represent, placing guardrails around sensitive data fed into models, and extending supply chain security thinking to agentic integrations. For resource-constrained teams, these are not theoretical concerns. They are daily realities.

    Perhaps most compelling is his view that AI can act as a force multiplier for small teams. By embedding accumulated expertise into agents trained on anonymized patterns and edge cases, Scrut aims to democratize security know-how that would otherwise require multiple full-time analysts. The goal is simple but ambitious: make enterprise-grade security outcomes accessible without enterprise-grade headcount.

    If you are leading a small or mid-sized business and wondering how to balance growth, compliance, and AI risk without breaking the bank, this conversation offers a candid look from the trenches.

    25 m
  • Inside Epicor's Approach To Inclusive, High-Performing Tech Teams
    Feb 24 2026

    How do you build enterprise software for the companies that keep the world turning, while also building a leadership culture where people can actually thrive?

    In this episode of Tech Talks Daily, I spoke with Kerrie Jordan, Chief Marketing Officer and SVP at Epicor, about her journey from studying literature to helping shape cloud ERP strategy at a global software company serving more than 20,000 customers worldwide. Kerrie's story is a reminder that there is no single path into technology leadership. Sometimes the foundations are laid in unexpected places, through storytelling, creativity, and a deep curiosity about people.

    Kerrie shares how her early career in product lifecycle management opened her eyes to the human side of software. Interviewing customers and writing case studies showed her that behind every system implementation is a personal story, a career milestone, or a business trying to survive and grow. That perspective still shapes how she approaches product and marketing today at Epicor, a company recently recognized as a Leader in the Gartner Magic Quadrant for Cloud ERP for Product-Centric Enterprises for the third consecutive year.

    But this conversation goes far beyond market recognition. We talk openly about burnout, resilience, and the reality of leading through pressure. Kerrie reflects on the importance of protecting time, creating space to reconnect, and building a culture where empathy is practiced, not just discussed. Her view of leadership is grounded in communication, psychological safety, and being tough on problems rather than people.

    Mentorship is another thread running throughout our discussion. Kerrie explains why powerful mentorship is not passive. It requires vulnerability, preparation, and a willingness to hear difficult advice. A single phrase from a mentor early in her career, "stick-to-itiveness," continues to shape how she approaches hard problems today.

    We also explore the future of women in manufacturing and technology. Kerrie highlights the need for intentional change across education, early career development, and leadership visibility. She believes technology, particularly AI, can expand access, enable upskilling, and introduce flexibility that supports long-term career growth. At the same time, she makes a simple but powerful point. Women in tech want the same thing as anyone else: the space and autonomy to do their jobs well.

    From customer co-innovation and community-driven product roadmaps to inclusive leadership under commercial pressure, this episode offers a candid look at what it really takes to lead in enterprise technology today.

    If you are building products, leading teams, or questioning your own next career step, I think you will find something in Kerrie's story that resonates.

    33 m
  • Miro CIO Tomás Dostal Freire On Reclaiming Creative Time With AI
    Feb 23 2026

    Why do so many of us feel busy all day, yet struggle to point to the meaningful work we actually completed?

    In this episode of Tech Talks Daily, I sit down with Tomás Dostal Freire, CIO of Miro, to unpack a challenge that quietly drains modern organizations. Tomás brings experience from companies like Google, Netflix, and Booking.com, and now leads both IT and business acceleration at Miro. His focus is simple but ambitious. Move beyond AI experimentation and rethink how work itself gets done.

    We explore new research revealing that for every hour of creative work, employees lose up to three hours to meetings, admin, emails, and maintenance tasks. That ratio is more than an inconvenience. It affects decision-making speed, employee satisfaction, and ultimately a company's ability to compete. Tomás argues that future candidates will choose employers based on how much unnecessary internal work they are expected to tolerate. In other words, reducing busy work is quickly becoming a talent strategy.

    One of the biggest culprits? Context switching. With dozens of browser tabs open and information scattered across tools, teams spend more time stitching together fragments than making decisions. Tomás describes how duplication of work, outdated systems, and a lack of shared context quietly erode momentum. AI, he believes, should not create more noise or another standalone tool. It needs to be embedded where collaboration already happens.

    We discuss the difference between single-player AI moments, where individuals use tools in isolation, and multiplayer AI collaboration, where shared context allows teams to move faster together. At Miro, this philosophy has shaped what they call an AI Innovation Workspace, a shared canvas where human insight and AI assistance coexist in real time.

    Tomás also shares practical advice for leaders who want to reclaim creative time. Start by identifying tasks you dislike doing that could easily be handled by someone junior. That list often reveals what AI can already automate. Then focus on building transferable skills like cognitive agility and first-principles thinking, rather than chasing every new tool.

    If you are wrestling with burnout, fragmented workflows, or wondering how AI can genuinely improve collaboration without overwhelming teams, this conversation offers a grounded, optimistic perspective. And yes, we even add a Beatles classic to the Spotify playlist along the way.

    27 m