Episodes

  • The Arms Race, the Energy Gap, and the Ethics of Teaching AI to Be Good with Alex Dalay - Ep 205
    Apr 14 2026
    Guest Introduction

    Alex Dalay is the CISO at IDB Bank, a New York-headquartered commercial, private banking, and broker-dealer institution with more than 70 years of history. As the security leader of a financial institution that sits squarely in the crosshairs of modern threat actors, Alex brings a perspective grounded in operational reality rather than theoretical frameworks. His approach to security leadership strips away the noise and returns consistently to the fundamentals: know what you have, know who has access to it, and build everything else from there.

    Here's a Glimpse of What You'll Learn

    • Why asset inventory and identity management are the two foundational elements every security program must get right before any advanced tool can be effective

    • How AI has changed offensive security by enabling attackers to evaluate and pivot off responses in real time, a capability that previously required human judgment and gave defenders a meaningful edge

    • Why the window between vulnerability disclosure and active exploitation has compressed to near real time and what that demands from security teams right now

    • How contextual vulnerability scoring differs from out-of-the-box ratings and why a critical vulnerability in one environment may not be critical in yours

    • Why social engineering and credential theft remain the most reliable attack paths and how AI-powered behavioral detection is changing the defender's ability to respond

    • Why the race to AGI carries geopolitical stakes comparable to the nuclear arms race and what energy infrastructure has to do with who gets there first

    • How Alex thinks about the ethical challenge of training AI to be good, not just intelligent, and why guardrails alone are not sufficient

    • What Alex told his 10-year-old son when asked what jobs will look like by the time he graduates from college

    In This Episode

    Alex opens with a perspective that cuts through the noise immediately: security does not need to be complicated, and the organizations that struggle most are usually the ones that skipped the basics in pursuit of advanced capabilities. Asset inventory and identity management are unglamorous, but they are the foundation everything else is built on. If you do not know what is in your environment and who has access to it, no tool, AI-powered or otherwise, will save you. That fundamentals-first philosophy shapes how he approaches the role of CISO at a financial institution that faces a significantly higher volume of attacks than most industries simply because money is involved.

    The AI conversation takes a sharp turn toward the offensive side of the ledger. Alex identifies the most consequential change AI has made to the threat landscape as the ability to evaluate responses in real time during an attack. Historically, automated tools ran scripts and moved on when something failed, while human attackers could pivot off unexpected responses. Now AI can do both, at machine speed. That shift has compressed the window between vulnerability disclosure and active exploitation to near real time in many cases, fundamentally changing how urgently defenders must act. He also draws an important distinction that often gets lost in the noise: a critical vulnerability rating from a vendor like Microsoft assumes the worst-case configuration. Whether it is actually critical in your specific environment requires human, and increasingly AI-assisted, contextual analysis before you drop everything to patch it.

    Alex closes with a wide-angle view of where AI is taking both the profession and society. He draws a comparison to the nuclear arms race, arguing that whichever nation cracks AGI first will hold a form of leverage that reshapes global power. He connects that to an underappreciated dependency: energy. Without the infrastructure to power the data centers that run AI at scale, the United States risks falling behind adversaries who face fewer environmental or political constraints on energy expansion. On the ethical side, he raises a point that goes beyond guardrails. We are racing to make AI intelligent without taking the time to teach it to be good, and the consequences of that gap may be the most important and least discussed challenge in the entire AI conversation.
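    The contextual-scoring idea Alex describes can be sketched in a few lines. This is a hypothetical illustration only: the helper names, multipliers, and severity thresholds are invented for the sketch, not anything prescribed in the episode. It shows how a vendor's worst-case rating gets adjusted by facts about your own environment before it triggers a drop-everything response.

```python
# Hypothetical sketch of contextual vulnerability scoring: a vendor's
# worst-case rating is adjusted by environment-specific facts.

def contextual_score(base_score: float, *, internet_exposed: bool,
                     vulnerable_feature_enabled: bool,
                     compensating_control: bool) -> float:
    """Return an environment-adjusted score on the same 0-10 scale."""
    score = base_score
    if not vulnerable_feature_enabled:
        score *= 0.3   # the vulnerable code path is never reached
    if not internet_exposed:
        score *= 0.7   # the attacker needs an internal foothold first
    if compensating_control:
        score *= 0.8   # e.g. a WAF rule or EDR detection already in place
    return round(min(score, 10.0), 1)

def severity(score: float) -> str:
    if score >= 9.0: return "critical"
    if score >= 7.0: return "high"
    if score >= 4.0: return "medium"
    return "low"

# A vendor-rated 9.8 "critical" on an internal-only host with the
# vulnerable feature disabled is no longer a drop-everything event:
adjusted = contextual_score(9.8, internet_exposed=False,
                            vulnerable_feature_enabled=False,
                            compensating_control=True)
print(adjusted, severity(adjusted))
```

    The multipliers here stand in for the human (or AI-assisted) judgment Alex describes; real programs would tie them to asset inventory data rather than hard-coded flags.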
    33 m
  • Role-Based AI, Culture-First Hiring, and the Future of Human-Centered Tech with Laurel Cipriani - Ep 204
    Apr 8 2026
    Guest Introduction

    Laurel Cipriani returns to the Cyber Business Podcast for a second conversation that goes deeper and broader than the first. As CIO at AffirmedRX, a transparent pharmacy benefits management company and public benefit corporation legally obligated to put patients ahead of profits, Laurel brings a background unlike almost any other CIO in the industry. She trained in psychology, became a registered nurse, spent years in health administration and clinical quality, and arrived in IT through a path that has given her a perspective on people, culture, and human-centered technology that is genuinely rare at the executive level. She is also an active member of the Digital Economist think tank in Washington DC and, as this episode records, is joining the World Technology Congress, a Switzerland-based international think tank.

    Here's a Glimpse of What You'll Learn

    • How Laurel is rolling out a role-based AI strategy at AffirmedRX where tool access, permissions, and accountability are all determined by what each person actually does

    • Why she is considering hiring dedicated AI fact-checkers and what that says about the current state of AI output reliability in high-stakes environments

    • What the representation gap for women in IT leadership actually looks like from the inside and why culture fit may be more important than credentials in closing it

    • How AI is currently reinforcing gender bias through scraped training data and what that means for the next generation of models

    • Why Laurel believes AI could eventually help solve the root causes of gender inequality if developed and governed thoughtfully

    • How the anonymity of the internet has amplified harmful behavior and why removing it may be more beneficial than most people are willing to admit

    • What it means to lead a technology team with compassion as a core value and why that quality is becoming more important as AI takes over more execution work

    • Why Laurel believes the most important question for this generation is not whether to use AI but how to use it without losing what makes us human

    In This Episode

    Laurel opens this return visit with an origin story that sets the tone for everything that follows. From aspiring grief therapist to floor nurse to health informaticist to CIO of a public benefit corporation, her path into technology was never linear and never conventional. What runs through all of it is a single thread: a desire to help people and a belief that technology is most powerful when it is built around human needs rather than the other way around. That philosophy is now embedded in how she is building the AI strategy at AffirmedRX, where every steward in the company will have a clearly defined set of tools, permissions, and accountability structures tied directly to their role. No one gets unfettered access. No output goes unreviewed. And no AI system will ever make a decision without a human signing off.

    The conversation on women in IT leadership is honest and specific in ways that broader industry discussions rarely are. Laurel notes that virtually every person on her own team is male, not by design but because the candidate pipeline still skews heavily toward men. Her response is not to lower the bar but to raise the profile of culture as the primary filter in hiring, something AffirmedRX does formally through a culture screening call before any other evaluation takes place. She makes the case that as AI raises the floor on individual capability, the differentiator between good teams and great ones will increasingly be how people work together, not what any individual can produce alone. That shift, she argues, naturally favors the holistic, relationship-oriented thinking that women have historically been undervalued for bringing to technical roles.

    The deepest thread in this episode is the one that connects AI governance to human development in ways that go well beyond the enterprise. Laurel is conducting original research through the Digital Economist on how AI and internet anonymity are amplifying harmful behavior toward women, how gender bias baked into training data is being reinforced at scale in AI models, and what it would take to actually interrupt those cycles rather than just acknowledge them. Her conclusion is not pessimistic. She believes AI, if governed with the same intentionality she is applying at AffirmedRX, could become the most powerful tool ever built for identifying and dismantling the cultural patterns that have kept inequality in place for generations. Getting there requires the same thing everything else in this conversation requires: humans staying in charge, staying accountable, and refusing to let speed become an excuse for carelessness.
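    The role-based model Laurel describes, where access is scoped to the role and no output takes effect without a human signing off, can be sketched minimally. The role names, tool names, and helper functions below are invented for illustration; they are not AffirmedRX's actual systems.

```python
# Hypothetical sketch of role-based AI access with mandatory human sign-off.
from typing import Optional

# Each role maps to an explicit, finite tool list (names are invented):
ROLE_TOOLS = {
    "pharmacist":     {"formulary-summarizer", "interaction-checker"},
    "claims-analyst": {"claims-triage-assistant"},
    "it-engineer":    {"code-assistant", "log-summarizer"},
}

def can_use(role: str, tool: str) -> bool:
    """No one gets unfettered access: unknown roles or tools are denied."""
    return tool in ROLE_TOOLS.get(role, set())

def apply_output(ai_output: str, approved_by: Optional[str]) -> str:
    """No output goes unreviewed: a named human must sign off first."""
    if not approved_by:
        raise PermissionError("AI output requires human sign-off")
    return ai_output

print(can_use("pharmacist", "interaction-checker"))  # True
print(can_use("pharmacist", "code-assistant"))       # False
```

    The point of the sketch is the default-deny shape: access comes from what the role actually does, and the sign-off gate makes accountability a named person rather than a policy document.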
    59 m
  • Why Every CISO Must Use AI Now and How to Do It Without Losing Control with Greg McCord - Ep 203
    Apr 6 2026
    Guest Introduction

    Greg McCord is a career security leader operating across two roles simultaneously. As CISO at Lightcast.io, a leading labor market analytics firm, he protects one of the most data-intensive organizations in the workforce intelligence space. As founder and CISO of McCord Keystone Advisory, launched in late 2025, he extends fractional CISO services to small and mid-sized businesses that need executive-level security leadership but cannot sustain a full-time hire. His background spans government, public sector, and private enterprise, and includes time as an Army interrogator at the SERE school for special forces, an experience that informs how he thinks about intelligence, data relevance, and the psychology of adversarial pressure.

    Here's a Glimpse of What You'll Learn

    • Why Greg argues every CISO must incorporate AI into their daily security lifecycle or risk being left behind by adversaries who already have

    • Why adopting AI in a non-attributable way is the most important and underemphasized discipline in enterprise security right now

    • How quantum computing threatens to make every encrypted breach dataset collected today readable in the future and what that means for your data strategy

    • Why AI frameworks like AIUC-1 and CSA Maestro are becoming critical infrastructure for organizations trying to govern agents, prompts, and LLMs at scale

    • How running LLMs locally on hardware rather than in the cloud changes the security calculus for SMBs and enterprises alike

    • Why the cloud adoption analogy is the most useful mental model for thinking about where AI governance is headed

    • How AI-powered penetration testing and continuous red teaming are changing how organizations find and prioritize vulnerabilities

    • Why the right question is not whether to use AI but how to use it without losing positive control of your most sensitive data

    In This Episode

    Greg opens with a position that is both practical and urgent. Security leaders who choose not to adopt AI are not playing it safe; they are falling behind adversaries who are already deploying it against them. His counsel is specific: adopt AI, but do it in a non-attributable way. The moment confidential data is connected to an uncontrolled AI system, positive control of that data is gone and there is no reliable way to get it back. The traditional tools still matter, and the telemetry and signal they provide remain valuable, but they need to be augmented with AI that can act faster, identify patterns earlier, and close the gap between detection and response before attackers achieve their objective inside your environment.

    The quantum computing thread is where Greg raises one of the most forward-looking and underappreciated risks in the conversation. Governments and sophisticated threat actors are collecting encrypted breach data today with no current ability to decrypt it. Once quantum computing matures, that changes: everything collected now becomes readable later. Greg draws on his Army interrogator background to frame it clearly. The goal is for your data to be irrelevant by the time anyone can crack it, but not all of it will be, and the organizations that are not thinking about this now will have no recourse when it arrives. That reality, combined with the convergence of quantum processing and AI training models, makes the current moment unlike anything the industry has faced before.

    Greg closes with a perspective on frameworks and governance that is both honest about the pace problem and constructive about the path forward. By the time a framework is written and discussed, the technology it describes has already evolved. That is not an argument against frameworks; it is an argument for building continuous feedback loops between practitioners in the field and the people writing the standards. AIUC-1 and CSA Maestro represent serious efforts to govern AI agent behavior, prompt handling, and LLM risk in a structured way. The organizations that engage with those frameworks now, rather than waiting for mandates, will be the ones with the governance foundation in place when the next wave of threats arrives.
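    One concrete discipline behind "non-attributable" AI use is redacting identifying and confidential tokens locally before any text leaves your control. A minimal sketch, with the caveat that these regex patterns and the account-ID format are invented for illustration and nowhere near exhaustive enough for production use:

```python
import re

# Hypothetical sketch: redact identifying tokens locally before a prompt
# is sent to any external AI service. Patterns are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IP":    re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "ACCT":  re.compile(r"\bACCT-\d{6,}\b"),  # invented internal ID format
}

def redact(text: str) -> str:
    """Replace each match with a placeholder naming what was removed."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "User jane.doe@idb.example logged in from 10.2.3.4 to ACCT-991234"
print(redact(prompt))  # User [EMAIL] logged in from [IP] to [ACCT]
```

    Real deployments layer this with allow-listed models, local inference, and data-loss-prevention tooling; the sketch only shows the shape of the control.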
    38 m
  • Identity Is the New Perimeter: A Cybersecurity Director's Playbook with Jason Lawrence - Ep 202
    Apr 1 2026
    Guest Introduction

    Jason Lawrence is the Cybersecurity Director at Yancey Brothers, the oldest Caterpillar dealer in the United States and a company that has been in business since 1914. As the first person to hold this role at the organization, Jason is building the cybersecurity program from the ground up, reporting directly to the CIO. Before joining Yancey Brothers, Jason built a career spanning security operations, identity management, and strategic risk, and he also co-founded Security Reimagined, a firm focused on securing small businesses and communities across Georgia. His approach to cybersecurity is rooted in business-outcome thinking, treating cyber defense not as a technology problem but as a revenue protection function.

    Here's a Glimpse of What You'll Learn

    • Why Jason separates AI into generative AI and machine learning and why that distinction matters more in cybersecurity than anywhere else

    • How the OODA Loop framework from military strategy applies directly to cyber defense and why disrupting the attacker's decision cycle is the real objective

    • Why non-human identities now outnumber human identities in enterprise environments and what that means for your security posture

    • How agentic AI and RAG systems are introducing a new insider threat vector that most organizations are not yet accounting for

    • Why AI-powered penetration testing and continuous threat exposure management are changing how organizations prioritize and remediate vulnerabilities

    • Why Jason believes cybersecurity is a business problem first and a technology problem second

    • How hardening the tools you use to manage your own infrastructure is the most overlooked security priority right now

    • Why human imagination remains the one capability AI cannot replicate and why that matters for both attackers and defenders

    In This Episode

    Jason opens with a framework that reframes how most people think about AI in security. Rather than treating AI as a single category, he separates generative AI from machine learning and assigns each a distinct role. Generative AI helps analysts make sense of massive data volumes quickly, turning raw signals into actionable observations. Machine learning, the kind Darktrace has been applying for well over a decade, automates detection and response in ways that rule-based systems simply cannot match. The real objective, he argues, is not just prevention but disrupting the attacker's OODA loop before they achieve their goal inside your environment. Getting in is not the win for threat actors. What they do after getting in is what matters, and that is where speed of detection and response becomes everything.

    The identity conversation is where Jason brings the most urgent and underappreciated insight of the episode. The perimeter is gone; identities are the new perimeter. And for every human identity in an enterprise, there are now estimated to be up to 144 non-human identities, including devices, data systems, and increasingly, agentic AI and RAG systems that have been granted privileged access to an organization's most sensitive assets. The Stryker breach is the defining example: a compromised Intune instance handed the attacker complete control of the environment. Jason's prescription is direct. Harden the tools you use to manage your infrastructure, roll out MFA everywhere, adopt passkeys, and build a complete identity inventory that accounts for everything in your environment, not just the humans.

    Jason closes with a perspective on cybersecurity's role in the business that every security leader should hear. If a user has to stop and think about whether an email is safe, that is a cybersecurity failure, because it is pulling that person away from the work that generates revenue. His job, as he frames it, is to make sure the business can do business with as little friction as possible. The department of no has to become the department of know, finding the secure path forward rather than simply blocking the unsafe one. That philosophy, grounded in humble inquiry and genuine understanding of business processes, is what separates security functions that protect the organization from those that simply slow it down.
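    The identity inventory Jason prescribes can be sketched as a single record type covering every identity, human or not, so that service accounts, devices, and AI agents get counted and reviewed the same way people are. All names and categories below are invented for illustration:

```python
# Hypothetical sketch of an identity inventory that includes non-human
# identities, so privileged service accounts and AI agents are visible.
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class Identity:
    name: str
    kind: str        # "human", "service-account", "device", "ai-agent", ...
    privileged: bool

inventory = [
    Identity("j.lawrence", "human", True),
    Identity("backup-svc", "service-account", True),
    Identity("mdm-connector", "service-account", True),   # manages endpoints
    Identity("rag-ingest-agent", "ai-agent", True),       # reads sensitive docs
    Identity("shop-floor-tablet-17", "device", False),
]

by_kind = Counter(i.kind for i in inventory)
privileged_non_human = [i.name for i in inventory
                        if i.privileged and i.kind != "human"]

print(by_kind["human"], "human /", len(inventory) - by_kind["human"], "non-human")
print("privileged non-human identities to review:", privileged_non_human)
```

    Even this toy inventory surfaces the point of the episode: the review queue is dominated by privileged identities that are not people.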
    38 m
  • How AffirmedRX Is Using Technology to Fix a Broken Healthcare System with Laurel Cipriani - Ep 201
    Mar 30 2026
    Guest Introduction

    Laurel Cipriani is the Chief Information Officer at AffirmedRX, a transparent pharmacy benefits management company built on a mission to make medications accessible and affordable for everyone. A clinician by training and a registered nurse originally, Laurel brings a rare combination of frontline healthcare experience, executive technology leadership, and global policy engagement to her role. She joined AffirmedRX in December 2025 and is currently building the company's IT department, data and analytics function, and AI strategy from the ground up at a company that has been operating for approximately four years. Beyond her work at AffirmedRX, Laurel is an active AI ethicist and member of the Digital Economist, a Washington DC-based think tank focused on the intersection of technology, ethics, and global policy. She has represented that organization at the World Economic Forum in Davos and participated on panels at New York Fashion Week through her involvement with the Fashion Fusion Technology Group, an organization working to apply technology to sustainable and circular fashion. Her perspective spans healthcare transparency, responsible AI adoption, data security, and the broader social and economic forces that technology either reinforces or disrupts.

    Here's a Glimpse of What You'll Learn

    • How AffirmedRX is differentiating itself from the big three pharmacy benefit managers through transparency, patient-centered care, and a model built around proactive patient advocacy

    • Why Laurel and the AffirmedRX leadership team are taking a deliberately cautious, non-PHI approach to AI adoption while building toward broader patient care applications

    • What it means to treat AI as an employee rather than a tool, and why that mindset shift determines whether AI actually delivers value inside an organization

    • How quantum computing is changing the threat landscape for healthcare data and why quantum-proof security is already on the AffirmedRX roadmap

    • What Laurel experienced at the World Economic Forum in Davos and why she believes you cannot make global change if you are not willing to push through the discomfort of being in the room

    • How blockchain technology is being explored to bring ethical accountability and supply chain transparency to the fashion industry

    • Why Klarna's aggressive AI agent rollout serves as a cautionary tale for any organization tempted to replace human judgment with automation before the technology is ready

    • The connection between fast fashion, economic inequality, and the misaligned incentives that Laurel argues are at the root of many of today's most urgent systemic problems

    In This Episode

    Laurel opens with a clear-eyed description of what AffirmedRX is attempting to do in one of the most entrenched and resistant markets in American healthcare. The big three pharmacy benefit managers have decades of history, established relationships, and enormous switching costs working in their favor. AffirmedRX is betting that transparency, outcomes, and a genuinely patient-first model through its Patient Care Advocates will eventually make the choice obvious for employers. Laurel is direct about the challenge: even people who love the mission in writing hesitate to put their employees through the disruption of changing plans. The company's answer is to let results do the talking, including a white paper in progress at the time of recording detailing the outcomes they have already achieved.

    The conversation around AI is where Laurel's dual identity as practitioner and ethicist comes through most clearly. AffirmedRX is using AI, but strictly for internal business process optimization and not yet for anything that touches protected health information. Every recommendation made by AI requires a human to sign off. Pharmacists are designing the models and reviewing the outputs. That discipline is not timidity; it is the product of a CIO who understands that in healthcare, the cost of getting AI wrong is not just financial. It is human. Laurel also introduces a goal she has set for the entire organization: every steward at AffirmedRX should be able to speak confidently about the responsible use of AI in their own role by the end of the year.

    The Davos segment brings an unexpected and unusually candid thread to the conversation. Laurel describes arriving at the World Economic Forum with what she calls a naive impression that this was where the world's problems get solved, and encountering something far more complicated: billboards targeting attendees, luxury fashion as social currency, and a pervasive sense of conflict between the forum's stated ideals and its visible reality. She dealt with it by asking every stranger she met whether they felt the same discomfort. The answer was universally yes. Her conclusion: you cannot make global change if you are not willing to be in the room, even when the room makes you uncomfortable. That philosophy connects directly to the work she is doing at ...
    48 m
  • The Two AI Attack Paths Every Security Leader Needs to Understand Now with Sinan Al Taie - Ep 200
    Mar 25 2026
    Guest Introduction

    Sinan Al Taie is the Cybersecurity Manager at Master Electronics, a leading global authorized distributor of electronic components with more than half a century of history as a family-owned business headquartered in Phoenix, Arizona. His path into cybersecurity was built from firsthand experience: he transitioned into the field after being hacked himself while working as a database engineer with the United Nations and USAID missions. That personal encounter with a breach sparked a pursuit of professional development through Northeastern Illinois University and hands-on penetration testing work before he joined Master Electronics as a cybersecurity analyst. He grew with the company into his current leadership role, gaining end-to-end exposure to building and evolving a full security posture from the ground up. Today Sinan operates at the intersection of threat intelligence, agentic AI defense strategy, and organizational security architecture, bringing both the practitioner's instinct and the strategist's perspective to one of the most rapidly shifting threat landscapes in recent memory.

    Here's a Glimpse of What You'll Learn

    • Why AI introduces two distinct and dangerous attack paths that security teams must plan for separately

    • How agentic AI defense differs from simply adding another tool to your security stack

    • Why attack timelines have compressed from nearly 200 minutes to as few as 77 seconds and what that means for human defenders

    • The difference between machine learning applied correctly in security products and LLMs bolted onto legacy tools

    • Why social engineering remains the most persistent and difficult threat to eliminate regardless of how advanced your tools become

    • How the concept of detection in depth complements the traditional defense in depth model

    • Why subject matter experts will not be replaced by AI but will need to develop managerial and orchestration skills to stay competitive

    • What responsible AI inclusion looks like for small and medium businesses that cannot deploy enterprise-level security budgets

    In This Episode

    Sinan brings a framework to the conversation that cuts through the noise surrounding AI in cybersecurity. He identifies two distinct attack paths organizations are now facing simultaneously: attacks on AI agents, where the autonomous nature of those agents amplifies the speed and scale of damage when something goes wrong, and attacks by agents, where threat actors use AI to generate polymorphic malware, automate entire ransomware kill chains, and launch phishing campaigns sophisticated enough that grammar errors are no longer a reliable tell. The compression of attack timelines from 197 minutes in earlier incidents down to 77 seconds in late 2025 makes clear that human defenders operating alone cannot keep pace.

    His response to that reality is not to simply add more tools. Sinan introduces the concept of agentic cyber defense: deploying autonomous agents that can reason, investigate, and act alongside security teams in parallel with traditional infrastructure. These agents are not a replacement for the existing security posture but an additional intelligence layer capable of detecting the micro-processes and behavioral anomalies that traditional tools are not designed to catch. He pairs this with his own framework of detection in depth, a complement to the established defense in depth model, in which each layer of the security stack carries its own detection and response capability rather than relying on perimeter defense to carry the full load.

    Sinan is direct that there is no silver bullet and no environment where the human element can be fully removed. Social engineering remains the most reliable entry point for threat actors precisely because it bypasses technology entirely. His answer is clear-eyed inclusion: deploying AI with minimum permissions, rigorous review processes, and a clear understanding of what each tool can and cannot do. Even smaller organizations can harden their posture meaningfully by choosing endpoint and security tools that incorporate AI features, without needing enterprise-scale budgets to do it.

    He closes with a forward-looking take on the profession itself. AI will not take jobs, but people who know how to use AI will replace those who do not. The skill set across security and IT is shifting from hands-on execution toward orchestration: directing AI agents the way a manager directs a team, reviewing outputs, catching errors, and making judgment calls that autonomous systems are not yet equipped to handle. The human firewall still matters. What changes is where human attention is most valuable and how professionals need to position themselves to lead alongside the tools rather than behind them.
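    The detection-in-depth idea can be sketched as every layer running its own independent check instead of trusting the perimeter alone. The layer names and the specific checks below are invented for illustration, not a description of any real product:

```python
# Hypothetical sketch of "detection in depth": each layer of the stack
# carries its own detection check, so an attacker past the perimeter
# still trips the inner layers. All checks are toy examples.

def perimeter(evt):  return evt.get("src") == "untrusted-net"
def endpoint(evt):   return evt.get("new_process_unsigned", False)
def identity(evt):   return evt.get("impossible_travel", False)
def data_layer(evt): return evt.get("bulk_export_mb", 0) > 500

LAYERS = [("perimeter", perimeter), ("endpoint", endpoint),
          ("identity", identity), ("data", data_layer)]

def detect(event: dict) -> list:
    """Return every layer that independently flags the event."""
    return [name for name, check in LAYERS if check(event)]

# An attacker already inside the LAN never triggers the perimeter check,
# but the identity and data layers each flag the behavior on their own:
event = {"src": "corp-lan", "impossible_travel": True, "bulk_export_mb": 900}
print(detect(event))  # ['identity', 'data']
```

    The design point is redundancy of detection, not just of prevention: each layer reports independently, so a single blind spot does not silence the whole stack.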
    53 m
  • IT Leadership in Regulated Industries: Service Management, AI Risk, and the CIO Mindset with Bryan Younger - Ep 199
    Mar 23 2026
    Guest Introduction

    Bryan Younger is the Chief Information Officer at Liberty Dental Plan of Oklahoma, the largest privately held dental benefits administrator in the United States. With nearly 30 years of experience in IT, Younger has built a career that spans desktop support, network infrastructure, information security, ITSM operational excellence, and executive leadership. Before joining Liberty, he spent a decade working in Medicaid IT for the state of Oklahoma, giving him a deep understanding of regulated healthcare environments from both the public and private sector sides. At Liberty, which serves approximately 8 million members nationwide across Medicare, Medicaid, commercial, and exchange markets, Younger oversees a technology organization that must balance strict compliance requirements, including HITRUST, SOC 2 Type 2, SOC 1 Type 2, and HIPAA, with the need to adopt modern tools and AI-driven capabilities responsibly. His background spans enterprise service management, change management, information security, and IT governance, making him a practitioner who understands both the tactical and strategic dimensions of running IT in a high-stakes, member-focused organization.

    Here's a Glimpse of What You'll Learn

    • Why IT service management, rooted in the ITIL framework, is essential for reducing downtime and driving accountability across the organization

    • How change management through a Change Advisory Board directly reduces outages and improves mean time to resolution

    • What the CrowdStrike and SolarWinds incidents reveal about the real cost of poor QA and supply chain risk

    • Why governing AI from the start is non-negotiable, especially in healthcare and regulated industries handling protected health information

    • How machine learning-based tools like Darktrace differ from LLM-based security products and why that distinction matters

    • Why social engineering remains the most reliable attack vector and how AI can serve as an additional detection layer

    • How IT leaders can shift from being a department that says no to a function that co-creates value with the business

    • Career advice for those entering IT, including why understanding your destination early shapes the certifications and path you should pursue

    In This Episode

    Bryan Younger brings a grounded perspective on IT service management, opening with a clear case for why change management is not bureaucratic friction but a proven mechanism for limiting downtime. He points to real-world data showing that 80 percent of outages trace back to a bad change, and draws a direct line between disciplined change processes and financial protection, illustrating how stopping even a handful of avoidable outages each year can translate into millions of dollars saved for an organization. The CrowdStrike incident serves as a vivid reference point for what happens when QA and change control break down at scale.

    The conversation moves into AI governance with notable specificity. Younger explains how Liberty approaches AI adoption through a formal AI governing board that evaluates every new tool for compliance risk, data handling, and architectural integrity. He draws a sharp distinction between products that bolt an LLM onto existing services for market appeal and those that apply machine learning in a contained, purposeful way, citing Darktrace as an example of AI done right in the security context. He is direct about the risk of employees using tools like ChatGPT with sensitive data, noting that once information enters those platforms, ownership and use become unclear, a serious concern in a HITRUST, HIPAA-governed environment.

    Younger and host Matthew Connor explore the tension between convenience and security, arriving at a framing that will resonate with anyone managing enterprise IT. Security will always prioritize protection while the rest of the business defaults to ease of use. The job of IT leadership is to find the balance that enables the business rather than obstructs it, offering governance as a feature rather than a gate. That philosophy runs through Younger's broader view of IT: a non-revenue-producing department that no one in the organization can operate without, and one that earns its seat by co-creating value rather than holding the line on hardware.

    For those considering a career in IT, Younger offers advice that is both practical and forward-looking. He encourages early-career professionals to look past the help desk and identify their target specialty before choosing certifications, comparing the IT landscape to medicine, where a general practitioner and a specialist require fundamentally different training paths. He acknowledges the anxiety around AI displacing IT jobs but reframes it as an argument for staying curious, specializing deliberately, and understanding that the people who will thrive are the ones who know how to direct and govern the tools, not just use them.
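    The outage math behind the change-management argument can be made concrete with a back-of-the-envelope calculation. The cost per outage-hour, outage counts, and prevention rate below are invented assumptions for illustration; only the 80 percent figure comes from the episode summary:

```python
# Back-of-the-envelope sketch of change-management savings.
# All figures except the 80 percent share are assumptions.

cost_per_outage_hour = 150_000   # assumed fully loaded cost of downtime
avg_outage_hours = 3             # assumed mean outage duration
outages_per_year = 10            # assumed baseline outage count
change_related_share = 0.80      # "80 percent of outages trace back to a bad change"
prevented_share = 0.50           # assume CAB review stops half of those

prevented = outages_per_year * change_related_share * prevented_share
savings = prevented * avg_outage_hours * cost_per_outage_hour
print(f"{prevented:.0f} outages prevented -> ${savings:,.0f} saved per year")
```

    Even with conservative assumptions, preventing a handful of change-related outages reaches the "millions of dollars" scale the episode describes.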
    35 m
  • Leadership Awareness and Technology Strategy in Higher Education with Mark Bojeun
    Mar 12 2026
    Guest Introduction

    Mark Bojeun serves as Chief Information Officer at Seward County Community College in southwest Kansas. In addition to leading the institution's technology strategy, he is the author of Awakening Leadership: The Journey to Conscious Influence, a book focused on leadership awareness, personal growth, and the development of stronger organizational cultures. His career blends higher education technology leadership with a deep interest in leadership psychology and human development.

    In this episode of The Cyber Business Podcast, Mark discusses how leadership awareness shapes technology teams, how community colleges are evolving through digital transformation, and why modern CIOs must balance technical strategy with personal influence. The conversation explores how leadership mindset, culture, and communication determine whether technology initiatives succeed or stall.

    Here's a Glimpse of What You'll Learn
    • How community colleges are evolving their technology infrastructure to support modern learning environments

    • Why leadership awareness is a critical skill for CIOs and IT executives

    • How personal development impacts technology leadership and decision making

    • Why communication and influence are often more important than technical authority

    • How higher education institutions balance innovation with limited resources

    • Why strong leadership culture improves the success of IT initiatives

    • The connection between conscious leadership and long term organizational impact

    In This Episode

    Mark Bojeun explains how community colleges are experiencing rapid technological change as digital learning environments expand and student expectations continue to evolve. As CIO of Seward County Community College, he describes how smaller institutions must often innovate creatively while operating with limited resources. Technology leaders in higher education must balance modernization with financial realities while still delivering reliable systems for students, faculty, and staff.

    Mark also highlights how leadership perspective directly shapes the success of technology initiatives. Many IT projects fail not because of technical issues but because of communication gaps, lack of alignment, or leadership blind spots. His work and writing focus on helping leaders develop stronger awareness of how their actions influence teams and organizational outcomes.

    The conversation then turns to Mark's book Awakening Leadership: The Journey to Conscious Influence. He explains that leadership development begins with understanding personal behavior patterns, communication styles, and how leaders affect the people around them. Technology leaders who develop this awareness often build stronger teams, encourage collaboration, and achieve more consistent results.

    Mark's perspective highlights a growing shift in the CIO role. Modern technology leaders are no longer defined solely by infrastructure knowledge or system architecture. Instead, the most effective CIOs combine technical expertise with emotional intelligence, communication skills, and a clear leadership philosophy.

    Sponsor for this episode...

    This episode is brought to you by CyberLynx. CyberLynx is a complete technology solution provider that ensures your business has the most reliable and professional IT service. The bottom line is we help protect you from cyber attacks, malware attacks, and the dreaded Dark Web. Our professional support includes managed IT services, IT help desk services, cybersecurity services, data backup and recovery, and VoIP services. Our reputable and experienced team, quick response time, and hassle-free process ensure that clients are 100% satisfied. To learn more, visit https://cyberlynx.com, email us at help@cyberlynx.com, or give us a call at 202-996-6600.

    49 m