Episodes

  • 3560: How People.ai is Turning Sales Activity Into Answers Leaders Can Act On
    Jan 20 2026

    What does sales leadership actually look like once the AI experimentation phase is over and real results are the only thing that matters?

    In this episode of Tech Talks Daily, I sit down with Jason Ambrose, CEO of the Iconiq-backed AI data platform People.ai, to unpack why the era of pilots, proofs of concept, and AI theater is fading fast. Jason brings a grounded view from the front lines of enterprise sales, where leaders are no longer impressed by clever demos. They want measurable outcomes, better forecasts, and fewer hours lost to CRM busywork. This conversation goes straight to the tension many organizations are feeling right now: the gap between AI potential and AI performance.

    We talk openly about why sales teams are drowning in activity data yet still starved of answers. Emails, meetings, call transcripts, dashboards, and dashboards about dashboards have created fatigue rather than clarity.

    Jason explains how turning raw activity into crisp, trusted answers changes how sellers operate day to day, pulling them back into customer conversations instead of internal reporting loops. The discussion challenges the long held assumption that better selling comes from more fields, more workflows, and more dashboards, arguing instead that AI should absorb the complexity so humans can focus on judgment, timing, and relationships.

    The conversation also explores how tools like ChatGPT and Claude are quietly dismantling the walls enterprise software spent years building. Sales leaders increasingly want answers delivered in natural language rather than another system to log into, and Jason shares why this shift is creating tension for legacy platforms built around walled gardens and locked down APIs.

    We look at what this means for architecture decisions, why openness is becoming a strategic advantage, and how customers are rethinking who they trust to sit at the center of their agentic strategies.

    Drawing on work with companies such as AMD, Verizon, NVIDIA, and Okta, Jason shares what top performing revenue organizations have in common.

    Rather than chasing sameness, scripts, and averages, they lean into curiosity, variation, and context. They look for where growth behaves differently by market, segment, or product, and they use AI to surface those differences instead of flattening them away. It is a subtle shift, but one with big implications for how sales teams compete.

    We also look ahead to 2026 and beyond, including how pricing models may evolve as token consumption becomes a unit of value rather than seats or licenses.

    Jason explains why this shift could catch enterprises off guard, what governance will matter, and why AI costs may soon feel as visible as cloud spend did a decade ago. The episode closes with a thoughtful challenge to one of the biggest myths in the industry, the belief that selling itself can be fully automated, and why the last mile of persuasion, trust, and judgment remains deeply human.

    If you are responsible for revenue, sales operations, or AI strategy, this episode offers a clear-eyed look at what changes when AI stops being an experiment and starts being held accountable. So what assumptions about sales and AI are you still holding onto, and are they helping or quietly holding you back?

    Useful Links

    • Follow Jason Ambrose on LinkedIn
    • Learn more about people.ai
    • Follow on LinkedIn

    Thanks to our sponsors, Alcor, for supporting the show.

    34 m
  • 3559: Conviva CEO on Turning Experimental AI Agents Into Reliable Systems
    Jan 19 2026

    In this episode of Tech Talks Daily, I sat down with Keith Zubchevich, CEO of Conviva, to unpack one of the most honest analogies I have heard about today's AI rollout.

    Keith compares modern AI agents to toddlers being sent out to get a job, full of promise, curious, and energetic, yet still lacking the judgment and context required to operate safely in the real world. It is a simple metaphor, but it captures a tension many leaders are feeling as generative AI matures in theory while so many deployments stumble in practice.

    As ChatGPT approaches its third birthday, the narrative suggests that GenAI has grown up. Yet Keith argues that this sense of maturity is misleading, especially inside enterprises chasing measurable returns. He explains why so many pilots stall or quietly disappoint, not because the models lack intelligence, but because organizations often release agents without clear outcomes, real-time oversight, or an understanding of how customers actually experience those interactions.

    The result is AI that appears to function well internally while quietly frustrating users or failing to complete the job it was meant to do.

    We also dig into the now infamous Chevrolet chatbot incident that sold a $76,000 vehicle for one dollar, using it as a lens to examine what happens when agents are left without boundaries or supervision.

    Keith makes a strong case that the next chapter of enterprise AI will not be defined by ever-larger models, but by visibility. He shares why observing behavior, patterns, sentiment, and efficiency in real time matters more than chasing raw accuracy, especially once AI moves from internal workflows into customer-facing roles.

    This conversation will resonate with anyone under pressure to scale AI quickly while worrying about brand risk, accountability, and trust. Keith offers a grounded view of what effective AI "parenting" looks like inside modern organizations, and why measuring the customer experience remains the most reliable signal of whether an AI system is actually growing up or simply creating new problems at speed.

    As leaders rush to put agents into production, are we truly ready to guide them, or are we sending toddlers into the workforce and hoping for the best?

    Useful Links

    • Connect with Keith Zubchevich
    • Learn more about Conviva
    • Chevrolet Dealer Chatbot Agrees to Sell Tahoe for $1

    Thanks to our sponsors, Alcor, for supporting the show.

    30 m
  • 3558: Do You Really Have an Offline Backup, or Just the Illusion of One?
    Jan 18 2026

    In this episode of Tech Talks Daily, I sit down with Imran Nino Eškić and Boštjan Kirm from HyperBUNKER to unpack a problem many organisations only discover in their darkest hour. Backups are supposed to be the safety net, yet in real ransomware incidents, they are often the first thing attackers dismantle. Speaking with two people who cut their teeth in data recovery labs across 50,000 real cases gave me a very different perspective on what resilience actually looks like.

    They explain why so many so-called "air-gapped" or "immutable" backups still depend on identities, APIs, and network pathways that can be abused. We talk through how modern attackers patiently map environments for weeks before neutralising recovery systems, and why that shift makes true physical isolation more relevant than ever. What struck me most was how calmly they described failure scenarios that would keep most leaders awake at night.

    The heart of the conversation centres on HyperBUNKER's offline vault and its spaceship-style double airlock design. Data enters through a one-way hardware channel, the network door closes, and only then is information moved into a completely cold vault with no address, no credentials, and no remote access. I also reflect on seeing the black box in person at the IT Press Tour in Athens and why it feels less like a gadget and more like a last-resort lifeline.

    We finish by talking about how businesses should decide what truly belongs in that protected 10 percent of data, and why this is as much a leadership decision as an IT one. If everything vanished tomorrow, what would your company need to breathe again, and would it actually survive?

    Useful Links

    • Connect with Imran Nino Eškić
    • Connect with Boštjan Kirm
    • Learn More about HyperBUNKER
    • Learn more about the IT Press Tour

    Thanks to our sponsors, Alcor, for supporting the show.

    25 m
  • 3557: MythWorx Explains Why Reasoning Matters More Than AI Scale
    Jan 17 2026

    What happens when the AI race stops being about size and starts being about sense?

    In this episode of Tech Talks Daily, I sit down with Wade Myers from MythWorx, a company operating quietly while questioning some of the loudest assumptions in artificial intelligence right now. We recorded this conversation during the noise of CES week, when headlines were full of bigger models, more parameters, and ever-growing GPU demand. But instead of chasing scale, this discussion goes in the opposite direction and asks whether brute force intelligence is already running out of road.

    Wade brings a perspective shaped by years as both a founder and investor, and he explains why today's large language models are starting to collide with real-world limits around power, cost, latency, and sustainability. We talk openly about the hidden tax of GPUs, how adding more compute often feels like piling complexity onto already fragile systems, and why that approach looks increasingly shaky for enterprises dealing with technical debt, energy constraints, and long deployment cycles.

    What makes this conversation especially interesting is MythWorx's belief that the next phase of AI will look less like prediction engines and more like reasoning systems. Wade walks through how their architecture is modeled closer to human learning, where intelligence is learned once and applied many times, rather than dragging around the full weight of the internet to answer every question. We explore why deterministic answers, audit trails, and explainability matter far more in areas like finance, law, medicine, and defense than clever-sounding responses.

    There is also a grounded enterprise angle here. We talk about why so many organizations feel uneasy about sending proprietary data into public AI clouds, how private AI deployments are becoming a board-level concern, and why most companies cannot justify building GPU-heavy data centers just to experiment. Wade draws parallels to the early internet and smartphone app eras, reminding us that the playful phase often comes before the practical one, and that disappointment is often a signal of maturation, not failure.

    We finish by looking ahead. Edge AI, small-footprint models, and architectures that reward efficiency over excess are all on the horizon, and Wade shares what MythWorx is building next, from faster model training to offline AI that can run on devices without constant connectivity. It is a conversation about restraint, reasoning, and realism at a time when hype often crowds out reflection.

    So if bigger models are no longer the finish line, what should business and technology leaders actually be paying attention to next, and are we ready to rethink what intelligence really means?

    Useful Links

    • Connect with Wade Myers
    • Learn More About MythWorx

    Thanks to our sponsors, Alcor, for supporting the show.

    27 m
  • 3556: How Illumio Is Helping Leaders Rethink Cybersecurity for a World Where Attacks Keep Happening
    Jan 16 2026

    What happens when we finally admit that stopping every cyberattack was never realistic in the first place?

    That is the thread running through this conversation, recorded at the start of the year when reflection tends to be more honest and the noise dial is turned down a little. I was joined by returning guest Raghu Nandakumara from Illumio, nearly three years after our last discussion, to pick up a question that has aged far too well. How do organizations talk about cybersecurity value when breaches keep happening anyway?

    This episode is less about shiny tools and more about uncomfortable truths. We spend time unpacking why security teams still struggle to show value, why prevention-only thinking keeps setting leaders up for disappointment, and why the conversation is slowly shifting toward resilience and containment. Raghu is refreshingly direct on why reducing cyber risk, rather than chasing impossible guarantees, is the only metric that really holds up under boardroom scrutiny.

    We also talk about the strange contradiction playing out across industries. Attackers are often using familiar paths like misconfigurations, excessive permissions, and missing patches, yet many organizations still fail to close those gaps. The issue, as Raghu explains, is rarely a lack of tools. It is usually fragmented coverage, outdated processes, and a talent pipeline that blocks capable people from entering the field while claiming there is a skills shortage.

    One of the most practical parts of this conversation centers on mindset. Instead of asking whether an attacker got in, Raghu argues that leaders should be asking how far they were able to go once inside. That shift alone changes how success is measured, how teams prepare for incidents, and how pressure-filled P1 moments are handled when boards want answers every fifteen minutes.

    We also touch on how legal action, public claims campaigns, and customer lawsuits are changing the stakes after a breach, forcing executives to rethink how they frame cyber investment. From there, Raghu shares how Illumio has been working with Microsoft to strengthen internal resilience at massive scale, and why visibility and segmentation are becoming harder to ignore.

    This is a conversation about realism, responsibility, and growing up as an industry. If cybersecurity is really about safety and not slogans, what would you want your organization to stop saying, and what would you rather hear instead?


    Useful Links

    • Connect with Raghu Nandakumara on LinkedIn and Twitter
    • Learn more about Illumio
    • Lateral Movement in Cyberattacks
    • Illumio Podcast
    • Follow on Facebook, Twitter, LinkedIn, and YouTube

    Thanks to our sponsors, Alcor, for supporting the show.

    41 m
  • 3555: Immersive on Why Incident Response Plans Break Down in Reality
    Jan 15 2026

    What really happens inside an organization when a cyber incident hits and the neat incident response plan starts to fall apart?

    That question sat at the heart of my return conversation with Max Vetter, VP of Cyber at Immersive. It has been a big year for breaches, public fallout, and eye-watering financial losses, and this episode goes beyond headlines to examine what cyber crisis management actually looks like when pressure, uncertainty, and human behavior collide. Max brings a rare perspective shaped by years in law enforcement, intelligence work, and hands-on cyber defense, and he is refreshingly honest about where most organizations are still unprepared.

    We talked about why written incident response plans tend to fail at the exact moment they are needed most. Cyber incidents are chaotic, emotional, and non-linear, yet many plans assume calm decision-making and perfect coordination. Max explains why success or failure is often defined by the response rather than the initial breach itself, and why leadership, communication, and judgment matter just as much as technical skill. Real-world examples from major incidents highlight how competing pressures quickly emerge, whether to contain or keep systems running, whether to pay a ransom or risk prolonged downtime, and how every option comes with consequences.

    One idea that really stood out is Max's belief that resilience is revealed, not documented. Compliance and audits may tick boxes, but they rarely expose how teams behave under stress. We explored why organizations that rely on annual tabletop exercises often develop a false sense of confidence, and how that confidence can become dangerous when decisions are made quickly and publicly. Max shared why the best-performing teams are often the ones that feel less certain in the moment, because they question assumptions and adapt faster.

    We also dug into the growing role of crisis simulations and micro-drills. Rather than rehearsing a single scenario once a year, Immersive focuses on repeated, realistic practice that builds muscle memory across technical teams, executives, legal, and communications. The goal is not to predict the exact attack, but to train people to think clearly, collaborate across functions, and make defensible decisions when there are no good options. That preparation becomes even more important as cyber incidents increasingly spill into supply chains, manufacturing, and the physical world.

    As public scrutiny rises and consumer-led legal action becomes more common after breaches, reputation and response speed now sit alongside forensics and recovery as business-critical concerns. This episode is a candid look at why cyber crisis readiness is a discipline, not a document, and why assuming you will cope when the moment arrives is a risky bet.

    So if resilience only truly shows itself when everything is on the line, how confident are you that your organization would perform when the pressure is real and the clock is ticking?

    Useful Links

    • Connect with Max Vetter on LinkedIn
    • Learn more about Immersive Labs
    • Follow on LinkedIn, Instagram, Twitter and Facebook

    Thanks to our sponsors, Alcor, for supporting the show.

    29 m
  • 3554: The Mammoth Enterprise AI Browser and the Future of Secure Agentic Workflows
    Jan 14 2026

    What happens when the web browser stops being a passive window to information and starts acting like an intelligent coworker, and why does that suddenly make security everyone's problem?

    At the start of 2026, I sat down with Michael Shieh from Mammoth Cyber to unpack a shift that is quietly redefining how work gets done.

    AI browsers are moving fast from consumer curiosity to enterprise reality, embedding agentic AI directly into the place where most work already happens, the browser. Search, research, comparison, analysis, and decision support are no longer separate steps. They are becoming one continuous workflow.

    In this conversation, we talk openly about why consumer adoption has surged while enterprise teams remain hesitant. Many employees already rely on AI-powered browsing at home because it removes ads, personalizes results, and saves time.

    Inside organizations, however, the same tools raise difficult questions around data exposure, credential safety, and indirect prompt injection. Once an AI agent starts reading untrusted external content, the browser itself becomes a new attack surface.

    Michael explains why this risk is often misunderstood and why the real danger is not internal documents, but external websites designed to manipulate AI behavior.

    We dig into how Mammoth Cyber approaches this challenge differently, starting with a secure-first architecture that isolates trusted internal data from untrusted external sources. Every AI action, from memory to model connections to data access, is monitored and governed by policy. It is a practical response to a problem many security teams know is coming but feel unprepared to manage.

    We also explore how AI browsers change day-to-day work. A task like competitive analysis, which once took days of manual research and document comparison, can now be completed in minutes when an AI browser securely connects internal knowledge with external intelligence. That productivity gain is real, but only if enterprises trust the environment it runs in.

    We touch on Zero Trust principles, including work influenced by Chase Cunningham, and why 2026 looks like a tipping point for enterprise AI browsing. The technology is maturing, security controls are catching up, and businesses are starting to accept that blocking AI outright is no longer realistic.

    If you are curious to see how this works in practice, Mammoth Cyber offers a free Enterprise AI Browser that lets you experience what secure AI-powered browsing actually looks like, without putting your organization at risk. I have included the link so you can explore it yourself and decide whether this is where work is heading next.

    So, as AI browsers become the new workflow hub for knowledge workers everywhere, is your organization ready to secure the browser before it becomes your most exposed endpoint, and what would adopting one safely change about how your teams work?


    Useful Links

    • Learn more about the Mammoth Enterprise Browser and try it for free
    • Connect with Michael Shieh on LinkedIn

    Thanks to our sponsors, Alcor, for supporting the show.

    19 m
  • 3553: How Coralogix is Turning Observability Data Into Real Business Impact
    Jan 14 2026

    What happens when engineering teams can finally see the business impact of every technical decision they make?

    In this episode of Tech Talks Daily, I sat down with Chris Cooney, Director of Advocacy at Coralogix, to unpack why observability is no longer just an engineering concern, but a strategic lever for the entire business. Chris joined me fresh from AWS re:Invent, where he had been challenging a long-standing assumption that technical signals like CPU usage, error rates, and logs belong only in engineering silos. Instead, he argues that these signals, when enriched and interpreted correctly, can tell a much more powerful story about revenue loss, customer experience, and competitive advantage.

    We explored Coralogix's Observability Maturity Model, a four-stage framework that takes organizations from basic telemetry collection through to business-level decision making. Chris shared how many teams stall at measuring engineering health, without ever connecting that data to customer impact or financial outcomes. The conversation became especially tangible when he explained how a single failed checkout log can be enriched with product and pricing data to reveal a bug costing thousands of dollars per day. That shift, from "fix this tech debt" to "fix this issue draining revenue," fundamentally changes how priorities are set across teams.

    Chris also introduced Oli, Coralogix's AI observability agent, and explained why it is designed as an agent rather than a simple assistant. We talked about how Oli can autonomously investigate issues across logs, metrics, traces, alerts, and dashboards, allowing anyone in the organization to ask questions in plain English and receive actionable insights. From diagnosing a complex SQL injection attempt to surfacing downstream customer impact, Oli represents a move toward democratizing observability data far beyond engineering teams.

    Throughout our discussion, a clear theme emerged. When technical health is directly tied to business health, observability stops being seen as a cost center and starts becoming a competitive advantage. By giving autonomous engineering teams visibility into real-world impact, organizations can make faster, better decisions, foster innovation, and avoid the blind spots that have cost even well-known brands millions.

    So if observability still feels like a necessary expense rather than a growth driver in your organization, what would change if every technical signal could be translated into clear business impact, and who would make better decisions if they could finally see that connection?

    Useful Links

    • Connect with Chris Cooney
    • Learn more about Coralogix
    • Follow on LinkedIn

    Thanks to our sponsors, Alcor, for supporting the show.

    33 m