Episodes

  • What the ISM AI Update Actually Means for Cyber Teams
    Apr 1 2026
    Episode Summary

    The ISM has been updated again, and this time AI is front and centre. In this episode of Secured, Cole Cornford is joined by returning guest Toby Amodio, Practice Lead at Fujitsu Cybersecurity Services, for another instalment of Policy Wonks and Gronks, cutting through the vendor noise to talk about what the March 2026 update actually means in practice.

    They explore where AI is genuinely delivering value for cyber professionals, from automating compliance mapping and vendor assessments to streamlining pen test reporting and SOC triage. But they are equally candid about the risks: the erosion of foundational skills as junior roles get outsourced to AI, the creeping fatigue of reviewing outputs at scale, and the danger of skipping straight to full automation without the expertise to validate what the machine is doing.

    The conversation also tackles bigger-picture concerns unique to Australia: sovereign AI capability, the risk of a brain drain to the US, and whether a small country can afford to decentralise its AI infrastructure. Toby closes with a sharp reminder for government CISOs: AI is just another system, and how people use it matters far more than the certifications attached to it.

    Timestamps

    00:00 Episode Trailer

    01:01 Chainguard ad

    01:28 Intro and the March 2026 ISM update

    03:00 AI hype vs real world utility

    05:00 Governance and compliance use cases

    08:00 Vendor assessments and knowledge base automation

    11:00 Skill erosion and the junior roles question

    14:00 AI in pen testing: reporting, scoping and customer experience

    17:30 The maturity model for AI adoption

    21:00 Vibe coding, slop assurance and fatigue at scale

    25:00 Agents watching agents and the bot vs bot future

    28:30 Australian AI sovereignty and the brain drain risk

    32:00 Top tip for government CISOs on AI risk

    35:00 Shadow AI and DNS log visibility

    37:00 Closing remarks

    🐙 Secured is grateful to be sponsored and supported by Chainguard.

    Chainguard is the trusted source for open source. Get hardened, secure, production-ready builds so your team can ship faster, stay compliant, and reduce risk. Download your free CVE Reduction Assessment at https://dayone.fm/chainguard

    Secured is part of Day One. Day One helps founders and startup operators make better business decisions more often.

    To learn more, join our newsletter to be notified of new First Cheque episodes and upcoming shows.



    This podcast uses the following third-party services for analysis:

    Podtrac - https://analytics.podtrac.com/privacy-policy-gdrp
    Spotify Ad Analytics - https://www.spotify.com/us/legal/ad-analytics-privacy-policy/
    34 m
  • (Replay Ep) Leading Change in Cybersecurity: Tara Whitehead’s Approach to Security Engagement
    Mar 25 2026
    Episode Summary

    Tara Whitehead is Security Engagement Manager at MYOB. Prior to becoming a cybersecurity specialist, Tara had an eclectic career, including working in advertising and international relations. In this episode Tara chats with Cole about how her non-technical background has in many ways been an asset working in security, leading change management in large enterprises, the importance of great communication skills, and plenty more.

    Timestamps

    7:15 - Tara's first days in AppSec

    10:00 - How to influence people

    12:30 - Why we should dial back on the doomsday conversation

    14:10 - Find your change champions

    21:30 - Is a non-technical background a help or hindrance?

    23:30 - Communication and influencing key skills

    26:00 - Communicating with execs

    28:20 - Rapid fire questions

    36 m
  • AI in AppSec: Hype, Layoffs and What's Actually Real
    Mar 4 2026
    Episode Summary

    Artificial intelligence is dominating headlines in cybersecurity, but how much of it holds up under scrutiny? In this solo episode of Secured, Cole Cornford, founder and CEO of Galah Cyber, shares his unfiltered take on three of the biggest AI narratives making waves in the AppSec space right now.

    Cole breaks down the Claude Code security announcement and why the market reaction dramatically overstated its real-world impact, arguing that the most meaningful security vulnerabilities have never been the ones static analysis tools can easily catch. He then examines Aikido's continuous penetration testing proposition, raising serious questions around noise, cost, resilience, and whether most organisations are even architected to support it.

    Finally, Cole tackles the AI job displacement narrative head-on, making the case that most high-profile tech layoffs are less about AI capability and more about mismanaged businesses using automation as convenient cover for decisions driven by poor performance and investor pressure.

    Timestamps

    00:00 – Intro & Cole's hot take on AI hype

    01:30 – Claude Code Security: what it is and why markets overreacted

    03:30 – Why meaningful vulnerabilities need context, not static analysis

    05:30 – Autofix, token waste, and who's actually using Claude Code

    08:00 – Aikido Infinite: the continuous pen testing promise

    10:00 – Cost, resilience, and noise concerns with Aikido

    12:49 – The AI jobs narrative: Cole's verdict

    14:30 – WiseTech, Block, and the smokescreen theory

    16:00 – Jobs shift, not job loss

    17:03 – Closing thoughts and solo format feedback

    19 m
  • How AI Pen Testing Actually Works (and Where It Breaks)
    Feb 18 2026
    Episode Summary

    AI is starting to change penetration testing, but most people are asking the wrong question. In this episode of Secured, Cole Cornford sits down with Brendan Dolan-Gavitt, AI researcher at XBOW and former NYU professor, to unpack what autonomous pen testing really is, what it can reliably do today, and what still needs humans.

    They explore why AI agents are great at scaling the boring parts of testing, like authenticated workflows and broad vulnerability coverage across huge attack surfaces, and why that does not automatically translate to deep, context-aware exploitation. The conversation also gets into the messy parts: AI systems overclaiming “serious” findings, business logic flaws that are hard to verify, audit expectations, and why scope control needs real guardrails, not vibes. From agent traces and validation models to cost curves and creative exfiltration tricks, this episode is a grounded look at where AI helps AppSec and where it can still cause damage if you trust it too much.

    Timestamps

    00:00 – Intro

    03:10 – From academia to building autonomous security tools

    05:00 – Human pen testers vs AI agents: what is actually different

    06:40 – Where AI helps most: boring tasks and low hanging fruit

    08:30 – Scale: a thousand targets vs hiring a thousand testers

    10:20 – Accessibility, economics, and Jevons paradox

    12:30 – Accountability: audit evidence, traces, and “who signs off”

    14:40 – Scope control: avoiding prod and preventing out-of-scope actions

    16:20 – Safety checkers, overseer agents, and persuasion resistance

    18:40 – The cost question: VC money, inference pricing, and efficiency

    21:20 – When AI wastes money and why prioritisation matters

    23:50 – Failure mode: overclaiming business “vulnerabilities”

    26:10 – Validation agents and adversarial peer review

    28:40 – The scary clever stuff: exfiltrating files as images

    31:00 – What AI finds well: XSS, SQLi, file traversal, hard proof bugs

    33:10 – What AI struggles with: business logic and contextual judgement

    35:20 – Hype vs skepticism and why nobody has a crystal ball

    42 m
  • AI, Hiring, and Trust: Why Shortcuts Break Interviews
    Feb 4 2026
    Episode Summary

    Hiring is still a human process, no matter how much AI gets injected into it. In this episode of Secured, Cole Cornford sits down with Kim Acosta, Managing Director at UCentric and former Amazon talent acquisition leader, to unpack how AI is actually changing recruitment and where it is quietly breaking trust.

    They explore how candidates are using AI in applications and technical assessments, why misuse often damages long-term employability more than failing an interview, and why recruiters and hiring managers are responding with stricter controls, in-person assessments, and AI detection. Kim shares what she is seeing across data, analytics, and AI roles, where demand is growing, and why human judgment, rapport, and credibility still matter far more than perfect answers.

    The conversation also covers embedded recruitment and RPO models, why soft skills matter more as teams get smaller, and what the next hiring cycle is likely to look like as big tech contracts while smaller companies continue to grow. For candidates, hiring managers, and founders alike, this episode is a grounded look at why shortcuts rarely pay off and why trust is still the real signal.

    Timestamps

    00:00 – Intro

    01:24 – Meet Kim Acosta and UCentric

    02:06 – From Amazon to starting a recruitment consultancy

    04:19 – Data engineering demand vs AI hype

    05:31 – What data engineering roles actually look like

    07:27 – Adapting business models to real market needs

    10:04 – Where AI genuinely helps recruiters

    11:09 – Custom GPTs and interview preparation

    13:43 – One way interviews and candidate slop

    15:09 – Technical assessments and AI misuse

    17:19 – Trust, failure, and reapplying the right way

    18:29 – Spotting AI generated answers in interviews

    20:19 – Rapport, eye contact, and human signals

    22:19 – Hiring for values and team fit

    23:52 – Agency vs internal vs embedded recruiters

    27:59 – RPO models and cost tradeoffs

    28:47 – Layoffs, market shifts, and salary reality

    30:57 – Where hiring is still strong

    33:10 – Why hiring and podcasts still need humans

    34 m
  • PSPF Changes Explained for Security Leaders
    Jan 21 2026
    Episode Summary

    The Protective Security Policy Framework is meant to guide how government manages security risk, but constant updates make it harder to implement than to understand. In this episode of Secured, Cole Cornford is joined by Toby Amodio, Practice Lead at Fujitsu Cybersecurity Services and former senior cybersecurity leader across Australian government, to break down what actually changed in the latest PSPF update and why it matters in practice.

    They examine the growing focus on personnel security and foreign interference risk, the inclusion of AI guidance that adds little beyond basic risk assessment, and the long overdue recognition of Secure Service Edge and SASE as compliant gateways. The conversation also explores why deny lists and centralised risk sharing sound sensible on paper but are far harder to enforce in reality, and why most security failures still come down to behaviour, accountability, and how technology is actually used rather than what policy says.

    Timestamps

    00:00 – Intro

    01:18 – What the PSPF is and why it exists

    02:49 – Annual updates, directives, and policy advisories

    04:19 – What actually changed in the 2025 PSPF update

    05:36 – AI in the PSPF and why it adds little value

    08:14 – Tool hype vs implementation risk

    10:32 – The AI policy advisory and trusted vendors

    14:25 – Directive 3 and clearance disclosure risks

    17:21 – Personnel security and enforcement reality

    19:41 – Secure Service Edge and SASE recognition

    23:39 – Commonwealth Technology Management directive

    25:28 – Deny lists, transparency, and security through obscurity

    28:05 – Centralised risk sharing and assessment overload

    29:52 – Policy wonk or policy gronk

    31:12 – Final takeaways and closing

    33 m
  • The Architect’s Dilemma: Why Security Design Keeps Failing (and How to Fix It)
    Jan 7 2026
    Episode Summary

    Most security architects are not actually doing architecture. They are doing assurance work, following checklists, and hoping standards will save them. But as systems get more complex and attackers get faster, that approach is no longer good enough.

    In this episode of Secured, Cole sits down with Ken Fitzpatrick, founder of Patterned Security and creator of securitypatterns.io, a resource built during the lockdown years that has since grown into one of the clearest frameworks for designing meaningful, context-aware security architecture.

    Ken shares why so many architects fall into the trap of compliance thinking, how security design becomes a tick box exercise, and why threat modeling without understanding context is pointless. They unpack the four foundational steps every architect should follow, why traceability matters more than ever, and how modern teams can stop copying best practice and start solving the real problems in front of them.

    The conversation also digs into secure by design in different industries, why the term has lost its meaning, and how modern defensible architecture is resetting expectations for what good looks like. Cole and Ken also dive into AI and its impact on the architecture function, separating hype from reality and exploring which roles are at risk as AI improves.

    If you work in engineering, architecture, AppSec, risk, or are building a product and want a practical way to think about secure design, this is an episode you should not miss.

    Timestamps

    00:00 – Intro

    00:48 – Chainguard Ad

    01:20 – Meet Ken Fitzpatrick and Patterned Security

    02:19 – How a cancelled Canada trip sparked securitypatterns.io

    04:08 – Why architecture needs practical guidance, not more frameworks

    05:18 – The four step method for real security architecture

    07:23 – Moving beyond box ticking and why engineering experience matters

    09:39 – Teaching architecture fundamentals and selecting the right controls

    11:37 – Traceability and making defensible design decisions

    13:14 – Architecture vs assurance and who securitypatterns.io is for

    16:31 – Embedding secure by design into PMO processes and scale up use cases

    19:58 – What secure by design means across different industries

    23:05 – Inconsistent definitions in security and the need for clarity

    23:50 – Modern defensible architecture and Zero Trust guidance

    24:44 – AI’s role in architecture and which tasks get replaced

    28:25 – AI in AppSec and reducing false positives with context

    30:24 – AI sales bots, hype cycles, and the loss of human reciprocity

    33:28 – Ken’s call for collaboration on repeatable architecture patterns

    34:28 – Closing and how to connect with Galah Cyber

    35 m
  • Fix the Flag: Rethinking Secure Code Training with Pedram Hayati
    Sep 11 2025
    Episode Summary

    CTFs are fun, but do they actually make developers write more secure code? In this episode of Secured, Cole Cornford is joined by Pedram Hayati (Founder of SecDim & SecTalks) to explore why most developer security training fails, and how SecDim’s “Fix the Flag” approach is changing the game.

    From contrived WebGoat-style examples to frameworks that quietly eradicate entire bug classes, Cole and Pedram dive deep into the intersection of AppSec and software engineering. They unpack why developer experience is non-negotiable, why security needs to borrow design patterns from engineering, and how real-world incidents (like GitHub’s mass assignment bug or the Optus breach) make concepts stick far better than acronyms like “XSS” or “SSTI.”

    This is a technical, opinionated episode for anyone who’s ever struggled to get developers engaged with security.

    Timestamps

    01:10 – Why Pedram built SecDim, the problem with pen test reports, and why CTFs don’t train developers

    04:42 – From “Capture the Flag” to “Fix the Flag”: making training realistic and Git-first

    06:30 – Training inside developer workflows and why contrived examples fail

    10:28 – Using modern stacks, AI-tailored labs, and real-world incidents to make concepts stick

    12:35 – Why security names suck (XSS vs. “content injection”) and the Optus hack as a teaching moment

    17:37 – Secure design patterns vs. vague slogans, and why secure defaults beat secure by design

    21:15 – Frameworks like React, Rails, and Angular that kill entire bug classes

    23:23 – Engineering by-products: reproducibility, immutability, and orthogonality in secure coding

    30:36 – PHP’s bad reputation, language quirks, and what’s actually most popular in security training today

    33:41 – Why AppSec pros need to build and deploy apps (not just know vulnerability classes)

    37:44 – Getting started with SecDim and hands-on secure coding

    39 m