Copilot in Dynamics 365: AI Agents, Governance Drift, and Everyday Risk Zones

(00:00:00) The Silent Threat of Architectural Erosion
(00:00:02) The Pitfalls of Automated Decision-Making
(00:00:14) Copilot's Hidden Impact on Enterprise Architecture
(00:00:25) Credit Hold and Dispute Resolution Challenges
(00:02:11) The Four Scenarios of Erosion
(00:03:56) Vendor Selection and ESG Considerations
(00:04:49) Customer Service Case Resolution Complications
(00:04:52) Addressing OCR and Three-Way Match Issues
(00:05:07) Invoice Approval: From Inspection to Narration
(00:05:12) Credit Hold Edge Cases and Seasonality

Most Dynamics leaders still talk about “adding Copilot” as if it were a simple overlay on top of existing processes: a smarter assistant in the same UI, helping humans work through the same approvals, the same holds, and the same cases. But once you let AI agents plan and execute across Dynamics 365, Graph, Power Automate, Outlook, and Teams, you are no longer just accelerating workflows; you are quietly changing where governance, accountability, and intent actually live. The controls, logs, and SoD models you trust still exist on paper, yet every composite step the agent takes introduces a little more drift between what you think is enforced and what is really happening in production.

In this episode of M365.FM, Mirko Peters examines why organizations that treat Copilot in Dynamics 365 as “just another feature” keep widening their blast radius without noticing, and why the ones that treat AI agents as first-class control-plane participants are the only ones who can scale them safely. This is a conversation about the structural difference between validating actions and mediating narratives, between RBAC on single apps and effective authority emerging from orchestrated toolchains, and between auditing events and reconstructing causality when your decision traces live outside traditional logs.
Instead of asking “does Copilot work,” Mirko asks what each helpful suggestion, summary, and automated step dissolves in terms of traceability, explainability, and enforceable intent.

The organizations that will lead with Dynamics 365 and Copilot are not those with the most polished AI demos. They are those that have turned their enterprise stack into an explicit contract the agents must respect: where sensitive tools require step-up, where prompts, tool maps, and models move through ALM like code, and where Segregation of Duties spans observe, recommend, and execute, not just roles on a RACI chart. In Mirko’s view, the real maturity test is whether you can bound blast radius, replay decisions, and see how composite identity actually behaves when agents stitch together legitimate low-risk actions into emergent high-impact pathways.

WHAT YOU WILL LEARN

- Why speed from AI agents is never neutral, and how “acceleration” in invoice approvals, credit holds, vendor selection, and case resolution turns into architectural erosion over time.
- How Dynamics 365 Copilot behaves as a distributed decision engine across Dynamics, Graph, Power Platform, Outlook, and Teams, and why that breaks naïve assumptions about RBAC and least privilege.
- Why mediation (summaries, confidence bands, narratives) quietly replaces validation and makes human reviewers track story quality instead of signal quality.
- How non-deterministic planning on deterministic systems undermines regression testing, reproducibility, and incident response in real environments.
- What it means to design controls that survive composition: decision traces, step-up on sensitive tools, ALM parity for prompts and tool graphs, and SoD models that recognize agents as actors, not features.

THE CORE INSIGHT

The Dynamics AI Agent Lie is that you are “just” getting more work done, faster. In reality, every orchestration the agent performs rewrites where your governance actually lives, often outside the places you inspect or certify.
Systems do not run on narratives about Copilot helping users; they run on the contracts that define who can do what, with which tools, under which obligations, and with which trace. As long as intent is implicit in prompts and flows instead of explicit in code and policy, every new agent capability adds a little more variance you do not price, a little more blast radius you do not bound, and a little more archaeology your teams will have to do after the next incident.

WHO THIS EPISODE IS FOR

- Dynamics 365, CRM, and ERP leaders accountable for platform roadmap and Copilot adoption.
- Enterprise, solution, and security architects responsible for governance, RBAC, SoD, and auditability in Microsoft-centric landscapes.
- IT and platform owners integrating Dynamics, Power Platform, Microsoft 365, and Entra ID into a coherent operating model with AI in the loop.
- Risk, compliance, and internal audit leaders who need to understand how AI agents really change decision traces, obligations, and incident blast radius.
- Microsoft partners and consultants advising customers on Dynamics 365, Copilot rollout, and ...