
The Cyber Business Podcast


By: Matthew Connor

Welcome to The Cyber Business Podcast, where we feature top founders and entrepreneurs and share their inspiring stories. The Cyber Business Podcast (c) 2022
Categories: Economics, Management & Leadership, Leadership
Episodes
  • The Arms Race, the Energy Gap, and the Ethics of Teaching AI to Be Good with Alex Dalay - Ep 205
    Apr 14 2026
    Guest Introduction

    Alex Dalay is the CISO at IDB Bank, a New York-headquartered commercial, private banking, and broker-dealer institution with more than 70 years of history. As the security leader of a financial institution that sits squarely in the crosshairs of modern threat actors, Alex brings a perspective grounded in operational reality rather than theoretical frameworks. His approach to security leadership strips away the noise and returns consistently to the fundamentals: know what you have, know who has access to it, and build everything else from there.

    Here's a Glimpse of What You'll Learn

    • Why asset inventory and identity management are the two foundational elements every security program must get right before any advanced tool can be effective
    • How AI has changed offensive security by enabling attackers to evaluate and pivot off responses in real time, a capability that previously required human judgment and gave defenders a meaningful edge
    • Why the window between vulnerability disclosure and active exploitation has compressed to near real time, and what that demands from security teams right now
    • How contextual vulnerability scoring differs from out-of-the-box ratings, and why a critical vulnerability in one environment may not be critical in yours
    • Why social engineering and credential theft remain the most reliable attack paths, and how AI-powered behavioral detection is changing the defender's ability to respond
    • Why the race to AGI carries geopolitical stakes comparable to the nuclear arms race, and what energy infrastructure has to do with who gets there first
    • How Alex thinks about the ethical challenge of training AI to be good, not just intelligent, and why guardrails alone are not sufficient
    • What Alex told his 10-year-old son when asked what jobs will look like by the time he graduates from college

    In This Episode

    Alex opens with a perspective that cuts through the noise immediately: security does not need to be complicated, and the organizations that struggle most are usually the ones that skipped the basics in pursuit of advanced capabilities. Asset inventory and identity management are unglamorous, but they are the foundation everything else is built on. If you do not know what is in your environment and who has access to it, no tool, AI-powered or otherwise, will save you. That fundamentals-first philosophy shapes how he approaches the CISO role at a financial institution that faces a significantly higher volume of attacks than most industries, simply because money is involved.

    The AI conversation then turns to the offensive side of the ledger. Alex identifies the most consequential change AI has made to the threat landscape: the ability to evaluate responses in real time during an attack. Historically, automated tools ran scripts and moved on when something failed, while human attackers could pivot off unexpected responses. Now AI can do both, at machine speed. That shift has compressed the window between vulnerability disclosure and active exploitation to near real time in many cases, fundamentally changing how urgently defenders must act. He also draws a distinction that often gets lost: a critical vulnerability rating from a vendor like Microsoft assumes the worst-case configuration. Whether it is actually critical in your specific environment requires human, and increasingly AI-assisted, contextual analysis before you drop everything to patch it (see the sketch after this summary).

    Alex closes with a wide-angle view of where AI is taking both the profession and society. He draws a comparison to the nuclear arms race, arguing that whichever nation cracks AGI first will hold a form of leverage that reshapes global power. He connects that to an underappreciated dependency: energy. Without the infrastructure to power the data centers that run AI at scale, the United States risks falling behind adversaries who face fewer environmental or political constraints on energy expansion. On the ethical side, he raises a point that goes beyond guardrails: we are racing to make AI intelligent without taking the time to teach it to be good, and the consequences of that gap may be the most important and least discussed challenge in the entire AI conversation.
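    To make the contextual-scoring point concrete, here is a minimal sketch of how a team might re-rank a vendor's worst-case severity against local environment facts. The factors and weightings are illustrative assumptions for this summary, not Alex's or IDB Bank's methodology:

        # Minimal sketch of contextual vulnerability scoring.
        # The factors and weightings below are illustrative assumptions,
        # not a vendor formula or IDB Bank's actual methodology.
        from dataclasses import dataclass

        @dataclass
        class AssetContext:
            internet_exposed: bool            # reachable from outside the perimeter?
            vulnerable_feature_enabled: bool  # does our config exercise the flawed component?
            compensating_controls: bool       # e.g., segmentation or a WAF rule in front
            business_critical: bool           # does the asset hold sensitive data?

        def contextual_priority(vendor_base_score: float, ctx: AssetContext) -> float:
            """Adjust a vendor's worst-case 0-10 score to this environment."""
            score = vendor_base_score
            if not ctx.vulnerable_feature_enabled:
                score *= 0.3   # the flaw exists, but our configuration never reaches it
            if not ctx.internet_exposed:
                score *= 0.7   # an attacker needs a foothold before exploiting it
            if ctx.compensating_controls:
                score *= 0.8   # mitigations reduce, but do not remove, the risk
            if ctx.business_critical:
                score = min(10.0, score * 1.3)  # impact is larger on crown-jewel assets
            return round(score, 1)

        # A "critical" 9.8 can rank as routine on a segmented internal host:
        print(contextual_priority(9.8, AssetContext(False, True, True, False)))  # 5.5

    The point of the exercise is the ordering, not the exact numbers: patching urgency should follow the contextual score, not the out-of-the-box rating.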
    33 min
  • Role-Based AI, Culture-First Hiring, and the Future of Human-Centered Tech with Laurel Cipriani - Ep 204
    Apr 8 2026
    Guest Introduction

    Laurel Cipriani returns to The Cyber Business Podcast for a second conversation that goes deeper and broader than the first. As CIO at AffirmedRX, a transparent pharmacy benefits management company and public benefit corporation legally obligated to put patients ahead of profits, Laurel brings a background unlike almost any other CIO in the industry. She trained in psychology, became a registered nurse, spent years in health administration and clinical quality, and arrived in IT through a path that has given her a perspective on people, culture, and human-centered technology that is genuinely rare at the executive level. She is also an active member of the Digital Economist think tank in Washington DC and, as of this recording, is joining the World Technology Congress, a Switzerland-based international think tank.

    Here's a Glimpse of What You'll Learn

    • How Laurel is rolling out a role-based AI strategy at AffirmedRX, where tool access, permissions, and accountability are all determined by what each person actually does
    • Why she is considering hiring dedicated AI fact-checkers, and what that says about the current state of AI output reliability in high-stakes environments
    • What the representation gap for women in IT leadership actually looks like from the inside, and why culture fit may be more important than credentials in closing it
    • How AI is currently reinforcing gender bias through scraped training data, and what that means for the next generation of models
    • Why Laurel believes AI could eventually help solve the root causes of gender inequality if developed and governed thoughtfully
    • How the anonymity of the internet has amplified harmful behavior, and why removing it may be more beneficial than most people are willing to admit
    • What it means to lead a technology team with compassion as a core value, and why that quality is becoming more important as AI takes over more execution work
    • Why Laurel believes the most important question for this generation is not whether to use AI but how to use it without losing what makes us human

    In This Episode

    Laurel opens this return visit with an origin story that sets the tone for everything that follows. From aspiring grief therapist to floor nurse to health informaticist to CIO of a public benefit corporation, her path into technology was never linear and never conventional. What runs through all of it is a single thread: a desire to help people, and a belief that technology is most powerful when it is built around human needs rather than the other way around. That philosophy is now embedded in how she is building the AI strategy at AffirmedRX, where every steward in the company will have a clearly defined set of tools, permissions, and accountability structures tied directly to their role (see the sketch after this summary). No one gets unfettered access. No output goes unreviewed. And no AI system will ever make a decision without a human signing off.

    The conversation on women in IT leadership is honest and specific in ways that broader industry discussions rarely are. Laurel notes that virtually every person on her own team is male, not by design but because the candidate pipeline still skews heavily toward men. Her response is not to lower the bar but to raise the profile of culture as the primary filter in hiring, something AffirmedRX does formally through a culture screening call before any other evaluation takes place. She makes the case that as AI raises the floor on individual capability, the differentiator between good teams and great ones will increasingly be how people work together, not what any individual can produce alone. That shift, she argues, naturally favors the holistic, relationship-oriented thinking that women have historically been undervalued for bringing to technical roles.

    The deepest thread in this episode connects AI governance to human development in ways that go well beyond the enterprise. Laurel is conducting original research through the Digital Economist on how AI and internet anonymity are amplifying harmful behavior toward women, how gender bias baked into training data is being reinforced at scale in AI models, and what it would take to actually interrupt those cycles rather than just acknowledge them. Her conclusion is not pessimistic. She believes AI, if governed with the same intentionality she is applying at AffirmedRX, could become the most powerful tool ever built for identifying and dismantling the cultural patterns that have kept inequality in place for generations. Getting there requires the same thing everything else in this conversation requires: humans staying in charge, staying accountable, and refusing to let speed become an excuse for carelessness.
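    As a thumbnail of the role-based pattern Laurel describes, here is a minimal sketch in which each role maps to an allow-list of tools and nothing is released without a named human reviewer. The roles, tool names, and policy shape are illustrative assumptions, not AffirmedRX's actual implementation:

        # Sketch of role-based AI access with mandatory human sign-off.
        # Roles, tools, and policy shape are illustrative assumptions,
        # not AffirmedRX's actual system.
        ROLE_POLICIES = {
            "pharmacist":     {"tools": {"clinical_summarizer"},              "review_required": True},
            "claims_analyst": {"tools": {"claims_copilot", "report_drafter"}, "review_required": True},
            "engineer":       {"tools": {"code_assistant"},                   "review_required": True},
        }

        def can_use(role: str, tool: str) -> bool:
            """Deny by default: unknown roles and unlisted tools get nothing."""
            policy = ROLE_POLICIES.get(role)
            return policy is not None and tool in policy["tools"]

        def release_output(role: str, tool: str, output: str, reviewer: str | None = None) -> dict:
            """No AI output ships without a human accountable for it."""
            if not can_use(role, tool):
                raise PermissionError(f"role {role!r} may not use {tool!r}")
            if ROLE_POLICIES[role]["review_required"] and reviewer is None:
                raise ValueError("a named human reviewer must sign off before release")
            return {"output": output, "approved_by": reviewer}

        # A pharmacist's draft goes out only with a reviewer attached:
        print(release_output("pharmacist", "clinical_summarizer", "draft...", reviewer="j.doe"))

    The design choice worth noticing is the default: access is denied unless a role explicitly grants it, which is the opposite of handing everyone a general-purpose chatbot and auditing later.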
    59 min
  • Why Every CISO Must Use AI Now and How to Do It Without Losing Control with Greg McCord - Ep 203
    Apr 6 2026
    Guest Introduction

    Greg McCord is a career security leader operating across two roles simultaneously. As CISO at Lightcast.io, a leading labor market analytics firm, he protects one of the most data-intensive organizations in the workforce intelligence space. As founder and CISO of McCord Keystone Advisory, launched in late 2025, he extends fractional CISO services to small and mid-sized businesses that need executive-level security leadership but cannot sustain a full-time hire. His background spans government, public sector, and private enterprise, and includes time as an Army interrogator at the SERE school for special forces, an experience that informs how he thinks about intelligence, data relevance, and the psychology of adversarial pressure.

    Here's a Glimpse of What You'll Learn

    • Why Greg argues every CISO must incorporate AI into their daily security lifecycle or risk being left behind by adversaries who already have
    • Why adopting AI in a non-attributable way is the most important and underemphasized discipline in enterprise security right now
    • How quantum computing threatens to make every encrypted breach dataset collected today readable in the future, and what that means for your data strategy
    • Why AI frameworks like AIUC-1 and CSA Maestro are becoming critical infrastructure for organizations trying to govern agents, prompts, and LLMs at scale
    • How running LLMs locally on your own hardware rather than in the cloud changes the security calculus for SMBs and enterprises alike
    • Why the cloud adoption analogy is the most useful mental model for thinking about where AI governance is headed
    • How AI-powered penetration testing and continuous red teaming are changing how organizations find and prioritize vulnerabilities
    • Why the right question is not whether to use AI but how to use it without losing positive control of your most sensitive data

    In This Episode

    Greg opens with a position that is both practical and urgent: security leaders who choose not to adopt AI are not playing it safe; they are falling behind adversaries who are already deploying it against them. His counsel is specific: adopt AI, but do it in a non-attributable way. The moment confidential data is connected to an uncontrolled AI system, positive control of that data is gone, and there is no reliable way to get it back. One practical expression of that discipline is running models locally, on hardware you own, so prompts and data never leave your environment (see the sketch after this summary). The traditional tools still matter, and the telemetry and signal they provide remain valuable, but they need to be augmented with AI that can act faster, identify patterns earlier, and close the gap between detection and response before attackers achieve their objective inside your environment.

    The quantum computing thread is where Greg raises one of the most forward-looking and underappreciated risks in the conversation. Governments and sophisticated threat actors are collecting encrypted breach data today with no current ability to decrypt it. Once quantum computing matures, that changes: everything collected now becomes readable later. Greg draws on his Army interrogator background to frame it clearly: the goal is for your data to be irrelevant by the time anyone can crack it, but not all of it will be, and the organizations that are not thinking about this now will have no recourse when it arrives. That reality, combined with the convergence of quantum processing and AI training models, is what makes the current moment unlike anything the industry has faced before.

    Greg closes with a perspective on frameworks and governance that is honest about the pace problem and constructive about the path forward. By the time a framework is written and discussed, the technology it describes has already evolved. That is not an argument against frameworks; it is an argument for building continuous feedback loops between practitioners in the field and the people writing the standards. AIUC-1 and CSA Maestro represent serious efforts to govern AI agent behavior, prompt handling, and LLM risk in a structured way. The organizations that engage with those frameworks now, rather than waiting for mandates, will be the ones with the governance foundation in place when the next wave of threats arrives.
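    To illustrate the local-LLM option in minimal terms, the sketch below queries a model served entirely on your own hardware, so prompts and the data inside them never touch third-party infrastructure. It assumes a locally running Ollama server and its documented /api/generate endpoint; the model name and triage prompt are placeholders, not tools from the episode:

        # Sketch: querying a locally hosted LLM so sensitive data stays
        # on hardware you control. Assumes an Ollama server on localhost;
        # the model name and prompt are illustrative placeholders.
        import json
        import urllib.request

        def ask_local_llm(prompt: str, model: str = "llama3") -> str:
            """Send a prompt to a local Ollama instance; nothing leaves the box."""
            payload = json.dumps(
                {"model": model, "prompt": prompt, "stream": False}
            ).encode()
            req = urllib.request.Request(
                "http://localhost:11434/api/generate",
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())["response"]

        # Alert triage stays on-premises and non-attributable to the org:
        print(ask_local_llm("Summarize this auth log excerpt and flag anomalies: ..."))

    The trade-off is the usual one: you give up frontier-model quality and managed scaling in exchange for positive control of the data, which is exactly the control Greg argues you cannot recover once it is lost.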
    38 min
No reviews yet