AI has shifted from being purely a productivity story to something far more uncomfortable. Not because the technology became malicious, but because it’s now being used in ways that expose old behaviors through entirely new mechanics. An article in HR Director Magazine argues that AI-enabled workplace abuse — particularly deepfakes — should be treated as workplace harm, not dismissed as gossip, humor, or something that happens outside of work. When anyone can generate realistic images or audio of a colleague in minutes and circulate them instantly, the targeted person is left trying to disprove something that never happened, even though it feels documented. That flips the burden of proof in ways most organizations aren’t prepared to handle.

What makes this a communication issue — not just an HR or IT issue — is that the harm doesn’t stop with the creator. It spreads through sharing, commentary, laughter, and silence. People watch closely how leaders respond, and what they don’t say can signal tolerance just as loudly as what they do.

In this episode, Neville and Shel explore what communicators can do before something happens: helping organizations explicitly name AI-enabled abuse, preparing leaders for that critical first conversation, and reinforcing standards so that, when trust is tested, people already know where the organization stands.

Links from this episode:

The Emerging Threat of Workplace AI Abuse

The next monthly, long-form episode of FIR will drop on Monday, February 23.

We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.

Special thanks to Jay Moonah for the opening and closing music.

You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog.

You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

Raw Transcript:

Shel Holtz: Hi everybody, and welcome to episode number 500 of For Immediate Release. I’m Shel Holtz.

Neville Hobson: And I’m Neville Hobson.

Shel Holtz: And this is episode 500. You would think that that would be some kind of milestone that we would celebrate. For those of you who are relatively new to FIR, this show has been around since 2005. We have not recorded only 500 episodes in that time. We started renumbering the shows when we rebranded it. We started as FIR, then we rebranded to the Hobson and Holtz Report because there were so many other FIR shows. Then, for various reasons, we decided to go back to FIR and we started at zero. But I haven’t checked — if I were to put the episodes we did before that rebranding together with the episodes since then, we’re probably at episode 2020, 2025, something like that.

Neville Hobson: I would say that’s about right. We also have interviews in there and we used to do things like book reviews. What else did we do? Book reviews, speeches, speeches.

Shel Holtz: Speeches — when you and I were out giving talks, we’d record them and make them available.

Neville Hobson: Yeah, boy, those were the days. And we did lives, clip times, you know, so we had quite a little network going there. But 500 is good. So we’re not going to change the numbering, are we? It’s going to confuse people even more, I think.

Shel Holtz: No, I think we’re going to stick with it the way it is. So what are we talking about on episode 500?

Neville Hobson: Well, this episode has got a topic in line with our themes and it’s about AI. We can’t escape it, but this is definitely a thought-provoking topic. It’s about AI abuse in the workplace. So over the past year, AI has shifted from being a productivity story to something that’s sometimes much more uncomfortable. Not because the technology itself suddenly became malicious, but because it’s now being used in ways that expose old behaviors through entirely new mechanics. An article in HR Director Magazine here in the UK published earlier this month makes the case that AI-enabled abuse, particularly deepfakes, should be treated as workplace harm, not as gossip, humor, or something that happens outside work. And that distinction really matters. We’ll explore this theme right after this message.

What’s different here isn’t intent. Harassment, coercion, and humiliation aren’t new. What is new is speed, scaling, credibility. Anyone can use AI to generate realistic images or audio in minutes, circulate them instantly, and leave the person targeted trying to disprove something that never happened but feels documented. The article argues that when this happens, organizations need to respond quickly, contain harm, investigate fairly, and set a clear standard that using technology to degrade or coerce colleagues is serious...