AI policy basics for operators: what this week changed
EP002: AI policy basics for operators.
This episode translates AI policy concepts into practical operating decisions for leaders, managers, and delivery teams.
- Episode: 002
- Title: AI policy basics for operators
- Runtime: 10m 30s
- Host: Michael Hanna-Butros Meyering
AI policy works only when it is written as operational guidance people can apply in daily workflows.
- 00:00 Why AI policy fails in real teams
- 01:20 Story 1: Claude Sonnet 4.6 and model-change governance
- 04:40 Story 2: AI infrastructure cost signals and procurement controls
- 07:40 Action block: policy + change management implementation
- 09:40 Monday-morning actions + outro
- Anthropic launched Claude Sonnet 4.6 (February 17, 2026), which reinforces the need for model-upgrade controls and evaluation gates in internal policy.
- Anthropic announced it will cover electricity price increases tied to data-center growth (February 17, 2026), making infrastructure impact a practical procurement and governance issue.
- Scope: which AI use cases are allowed, restricted, or prohibited.
- Data: which data classes may be used with which tools.
- Controls: review, logging, exception handling, and escalation.
- Accountability: who owns policy updates and incident response.
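The four components above can be encoded as a machine-checkable structure, which is one way to keep policy and daily workflows from drifting apart. This is a minimal, hypothetical sketch: the use cases, data classes, tool names, and the default-deny rule are all illustrative assumptions, not anything prescribed by the episode or the cited frameworks.

```python
# Hypothetical sketch: the scope / data / controls / accountability
# components expressed as a checkable policy object. All category
# names and tool names are illustrative assumptions.

POLICY = {
    "scope": {  # which AI use cases are allowed, restricted, or prohibited
        "drafting_internal_docs": "allowed",
        "customer_facing_replies": "restricted",     # requires human review
        "automated_hiring_decisions": "prohibited",
    },
    "data": {   # which data classes may be used with which tools
        "public": ["any_tool"],
        "internal": ["approved_tool"],
        "confidential": [],                          # no AI tools permitted
    },
    "controls": ["human_review", "logging", "exception_process", "escalation"],
    "accountability": {"policy_owner": "ai-governance@example.com"},
}

def check_use(use_case: str, data_class: str, tool: str) -> str:
    """Return 'allowed', 'restricted', or 'prohibited' for a proposed use."""
    # Default-deny: unknown use cases are treated as prohibited.
    status = POLICY["scope"].get(use_case, "prohibited")
    permitted_tools = POLICY["data"].get(data_class, [])
    if tool not in permitted_tools and "any_tool" not in permitted_tools:
        return "prohibited"
    return status

print(check_use("drafting_internal_docs", "public", "any_tool"))             # allowed
print(check_use("drafting_internal_docs", "confidential", "approved_tool"))  # prohibited
```

A structure like this is easy to review in a manager briefing and doubles as the input to the drift audit in the action block: compare real tool logs against what `check_use` would have returned.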
- Add a model-change trigger section to your AI policy (when re-evaluation is mandatory).
- Add three infrastructure-risk questions to AI vendor intake.
- Run one manager briefing with a clear script for allowed/restricted use.
- Audit one active AI workflow for drift between policy and real usage.
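The first action, a model-change trigger section, can be sketched as a simple decision rule: when does a vendor model change make re-evaluation mandatory? The criteria below (major version bump, vendor-flagged behavior change, regulated workflow) are illustrative assumptions for one possible policy, not a rule from the episode or any standard.

```python
# Hypothetical sketch of a model-change trigger: decide whether a
# mandatory re-evaluation gate applies before a new model version
# enters production workflows. The trigger criteria are assumptions.

from dataclasses import dataclass

@dataclass
class ModelChange:
    vendor: str
    old_version: str
    new_version: str
    changes_behavior: bool        # vendor notes flag output/behavior changes
    used_in_regulated_flow: bool  # model serves a regulated workflow

def reevaluation_required(change: ModelChange) -> bool:
    """Re-evaluation is mandatory on a major version bump, a flagged
    behavior change, or use in a regulated workflow."""
    major_bump = (change.old_version.split(".")[0]
                  != change.new_version.split(".")[0])
    return major_bump or change.changes_behavior or change.used_in_regulated_flow

# Example: a 4.5 -> 4.6 upgrade with vendor-flagged behavior changes.
upgrade = ModelChange("Anthropic", "4.5", "4.6",
                      changes_behavior=True, used_in_regulated_flow=False)
print(reevaluation_required(upgrade))  # True (behavior change flagged)
```

Writing the trigger this explicitly makes the policy auditable: each criterion maps to a question your vendor intake form already has to ask.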
- Anthropic, “Announcing Claude Sonnet 4.6”: https://www.anthropic.com/news/claude-sonnet-4-6
- TechCrunch coverage, “Anthropic releases Claude Sonnet 4.6”: https://techcrunch.com/2026/02/17/anthropic-releases-claude-sonnet-4-6/
- Anthropic, “Covering electricity price increases from AI data centers”: https://www.anthropic.com/news/covering-electricity-price-increases
- Reuters coverage (via Investing.com): https://www.investing.com/news/stock-market-news/anthropic-to-cover-electricity-price-increases-in-areas-where-it-builds-data-centers-3894580
- NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
- NIST Generative AI Profile: https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-generative-artificial-intelligence
- OECD AI Principles: https://oecd.ai/en/ai-principles
- ISO/IEC 42001 overview: https://www.iso.org/standard/81230.html
This episode uses AI-assisted production tools (voice rendering, editing support, and publishing automation). Final editorial and risk decisions are human-led.