Cables2Clouds Podcast

By: Cables2Clouds
Listen for free

Join Chris and Tim as they delve into the Cloud Networking world! The goal of this podcast is to help Network Engineers with their Cloud journey. Follow us on Twitter @Cables2Clouds | Co-Hosts Twitter Handles: Chris - @bgp_mane | Tim - @juangolbez

© 2026 Cables2Clouds
Economics, Career Success, Politics & Government
Episodes
  • Please Don’t Dump Data Center Soup - Monthly News Update
    Mar 25 2026

    Send us Fan Mail

    AI is everywhere right now, but the numbers and the real-world trade-offs don’t always match the hype. We dig into a headline that AI added basically nothing to US GDP growth last year, even after billions in spending from the biggest names in tech. That launches a bigger question we can’t ignore: is the AI boom creating durable productivity, or mostly moving money around the same handful of companies that sell GPUs, cloud capacity, and data center hardware?

    From there, we get into the messy incentive layer of AI safety and AI regulation. We talk about Anthropic’s shifting safety stance and why “we meant well but competition changed” is becoming a familiar pattern across the AI industry. If guardrails depend on goodwill, what happens when the market punishes anyone who slows down? And if we keep pushing responsibility onto “developers,” are vendors dodging accountability for the defaults they ship?

    We also zoom out to the physical footprint of AI infrastructure: energy demand, strained grids, and the environmental impact questions that show up when states consider options like data center wastewater discharge. Then we hit the human side of “AI efficiency,” including layoffs framed as automation wins, and we end with privacy concerns around Meta Ray-Ban smart glasses and footage that may capture far more than people expect.

    What headline worries you most right now: jobs, safety, the environment, or privacy?

    Purchase Chris and Tim's book on AWS Cloud Networking: https://www.amazon.com/Certified-Advanced-Networking-Certification-certification/dp/1835080839/

    Check out the Monthly Cloud Networking News
    https://docs.google.com/document/d/1fkBWCGwXDUX9OfZ9_MvSVup8tJJzJeqrauaE6VPT2b0/

    Visit our website and subscribe: https://www.cables2clouds.com/
    Follow us on BlueSky: https://bsky.app/profile/cables2clouds.com
    Follow us on YouTube: https://www.youtube.com/@cables2clouds/
    Follow us on TikTok: https://www.tiktok.com/@cables2clouds
    Merch Store: https://store.cables2clouds.com/
    Join the Discord Study group: https://artofneteng.com/iaatj

    32 m
  • An Honest Conversation About AI Security
    Mar 11 2026

Send us Fan Mail

    Ready for a reality check on AI security? We invited Cisco cybersecurity expert Katherine McNamara to dig into where large language models actually break: from prompt injection and over-permissioned plugins to reckless “vibe-coded” apps that leak IDs, photos, and entire backends. The stories are real, the stakes are high, and the fixes are concrete. We trace how AI sprawl mirrors the worst of early IoT—weak defaults, poor isolation, and a stampede to integrate models into billing, HR, and support without guardrails—only this time the blast radius includes your customer data and your legal exposure.

    We talk through the human factor first. Written policies won’t stop someone from pasting a pen test report into a public chatbot. DLP helps, but hybrid work and BYOD stretch defenses thin. Then we move to the core threat model: public and private models are targets; datasets can be poisoned; plugins often ship with admin-level scopes; and a clever prompt can trick an LLM into disclosing chat histories, creating new accounts, or modifying orders. Courts have already treated chatbots as company representatives, binding businesses to their outputs—another reason to treat every integration like an untrusted user with strict least privilege.

    It’s not all doom. Used well, AI gives security operations superpowers: correlating signals across dozens of tools, reducing alert fatigue, and surfacing lateral movement. The path forward is discipline, not denial. Fence models on the network. Prefer read-only to write. Gate plugins behind narrowly scoped APIs. Vet datasets for backdoors. Red-team prompts as seriously as you pen test code. And educate stakeholders with live demos so they see why these controls matter. We also unpack the shaky economics—GPU costs, rising consumer fatigue, hype-fueled projects with little ROI—and why that pressure can erode privacy if teams aren’t vigilant.
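    The "prefer read-only, gate plugins behind narrowly scoped APIs" advice above can be sketched in code. This is a minimal illustrative example, not any real framework's API; the names (`Tool`, `ToolGate`, the `orders:read` scope string) are hypothetical:

    ```python
    # Hypothetical sketch: deny-by-default scoping for LLM tool/plugin calls.
    # All names here are illustrative, not taken from a real agent framework.
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class Tool:
        name: str
        scope: str  # e.g. "orders:read" vs "orders:write"

    @dataclass
    class ToolGate:
        granted: set = field(default_factory=set)  # scopes this integration may use

        def allow(self, scope: str) -> None:
            self.granted.add(scope)

        def call(self, tool: Tool, fn, *args, **kwargs):
            # Deny by default: the model only reaches tools whose exact scope was granted.
            if tool.scope not in self.granted:
                raise PermissionError(f"{tool.name} needs scope {tool.scope!r}, not granted")
            return fn(*args, **kwargs)

    # Usage: grant read-only access; a write attempt is refused.
    gate = ToolGate()
    gate.allow("orders:read")

    read_tool = Tool("lookup_order", "orders:read")
    write_tool = Tool("modify_order", "orders:write")

    print(gate.call(read_tool, lambda: {"order": 42, "status": "shipped"}))
    try:
        gate.call(write_tool, lambda: "changed")
    except PermissionError as e:
        print("blocked:", e)
    ```

    The point of the exact-scope check is that a prompt-injected model cannot escalate from lookups to order modification: the write path simply is not wired up unless someone deliberately grants it.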

    If you’re building with LLMs or trying to rein them in, this conversation gives you a practical map: what to allow, what to block, and how to make AI useful without turning your stack into an attack surface. Subscribe, share with a teammate who ships integrations, and drop a review with the one guardrail you’ll implement this quarter.


    Connect with our Guest:
    https://x.com/kmcnam1
    https://www.linkedin.com/in/katherinermcnamara/


    52 m
  • When AI Deletes Production: Guardrails, MCP Risks, And The Surveillance Creep
    Feb 25 2026

    Send us Fan Mail

    What happens when an AI agent decides the “best” fix is to delete production? We unpack the AWS outage tied to an over‑permitted agent and zoom out to a bigger pattern: systems built for maximum utility and minimum restraint. From MCP’s connective promise to its post‑auth sprawl, we break down how agent toolchains turn small mistakes into big blast radii—and how to fix that with real guardrails, least privilege, and human‑in‑the‑loop at destructive boundaries.
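    The "human-in-the-loop at destructive boundaries" pattern described above can be sketched as a thin wrapper that auto-approves reads but pauses for a person on destructive verbs. This is a hypothetical illustration, assuming nothing about any real agent framework; the action names are made up:

    ```python
    # Hypothetical sketch: human-in-the-loop approval at destructive boundaries.
    # Verbs and action names are illustrative; no real agent SDK is assumed.
    DESTRUCTIVE = {"delete", "drop", "terminate", "modify"}

    def run_action(action: str, target: str, execute, approver=input):
        """Run an agent action, pausing for human approval on destructive verbs."""
        verb = action.split("_")[0].lower()
        if verb in DESTRUCTIVE:
            answer = approver(f"Agent wants to {action} {target!r}. Approve? [y/N] ")
            if answer.strip().lower() != "y":
                return f"refused: {action} on {target}"
        return execute()

    # Usage: reads pass through untouched; a delete waits for approval
    # (here denied by a stubbed-in approver).
    print(run_action("describe_instances", "prod", lambda: "3 instances"))
    print(run_action("delete_stack", "prod", lambda: "deleted", approver=lambda _: "n"))
    ```

    The design choice worth noting is the default: anything matching a destructive verb is blocked unless a human says yes, which keeps an over-permitted agent's "best fix" from reaching production unreviewed.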

    The conversation widens to public deployments where abstractions fail loudly. A military nutrition assistant built on Grok reportedly ran with minimal safety constraints and instantly entertained unsafe prompts. That’s not a funny glitch; it’s a policy failure. We talk about what genuine safety layers look like in high‑stakes settings: capability firewalls, explicit refusal policies, robust logging, and escalation paths for sensitive actions. Ethics, compliance, and operational discipline are not speed bumps; they are the steering wheel.

    Privacy takes center stage with a Ring twist: footage stored in the cloud despite no subscription. Helpful for a kidnapping investigation, yes—but also a wake‑up call for anyone who assumed “local” meant private. We offer practical steps for home security that actually secures the home: VLAN segmentation, strict egress controls, and device choices that still function offline. Then we turn to Discord’s plan to gate “mature” spaces behind global face and ID checks via Persona, the security research that raised red flags, and how user pressure pushed a rollback. If regulation demands verification, the right answer is minimal disclosure, not maximal identity.
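    The VLAN segmentation and strict egress advice above amounts to a deny-by-default policy per network segment. As a rough sketch of the policy logic (the VLAN names and destinations are invented for illustration; a real deployment would enforce this in the firewall, not in Python):

    ```python
    # Hypothetical sketch of a deny-by-default egress policy for an IoT VLAN.
    # VLAN names and destinations are illustrative placeholders.
    EGRESS_ALLOW = {
        "iot-vlan": {("ntp.example.net", 123)},  # cameras get NTP and nothing else
        "trusted-vlan": None,                     # None = unrestricted egress
    }

    def egress_permitted(vlan: str, dest: str, port: int) -> bool:
        allowed = EGRESS_ALLOW.get(vlan, set())  # unknown VLANs get an empty allowlist
        if allowed is None:
            return True
        return (dest, port) in allowed
    ```

    Devices that keep working under a policy this strict are exactly the "still function offline" choices mentioned above; a camera that bricks itself without cloud egress fails the test.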

    We close with a rare combo: a zero‑day disclosure delivered as a catchy music video calling out Malwarebytes for hard‑coded creds and privilege issues—followed by a commendable vendor response. It’s a model for the culture we want: researchers spotlighting flaws, companies fixing fast, and users gaining safer software. Throughout, we keep returning to one principle that ties AI, identity, and devices together: trust is a permission. Design for refusal, constrain by default, and say clearly what your systems must never do.

    If this resonates, follow the show, share it with a friend, and leave a quick review—what guardrail would you never ship without?


    42 m
No reviews yet