FIR #477: Deslopifying Wikipedia

User-generated content is at a turning point. With generative AI models cranking out tons of slop, content repositories are being polluted with low-quality, often useless material. No website is more vulnerable than Wikipedia, the open-source reference site populated entirely with articles created (and revised) by users. How Wikipedia is handling the issue, in light of its strict governance policies, is worth watching, especially for organizations that also rely on user-generated content.

Links from this episode:

- Wikipedia Editors Adopt 'Speedy Deletion' Policy for AI Slop Articles
- How Wikipedia is fighting AI slop content
- From the technology community on Reddit: Volunteers fight to keep 'AI slop' off Wikipedia
- Wikipedia:WikiProject AI Cleanup
- Wikipedia loses challenge against Online Safety Act verification rules
- Wikipedia can challenge Online Safety Act if strictest rules apply to it, says judge

The next monthly, long-form episode of FIR will drop on Monday, August 25.

We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.

Special thanks to Jay Moonah for the opening and closing music.

You can find the stories from which Shel's FIR content is selected at Shel's Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville's blog and Shel's blog.

Disclaimer: The opinions expressed in this podcast are Shel's and Neville's and do not reflect the views of their employers and/or clients.

Raw Transcript:

Shel Holtz (00:00)
Hi everybody, and welcome to episode number 477 of For Immediate Release. I'm Shel Holtz.

@nevillehobson (00:08)
And I'm Neville Hobson. Wikipedia has long been held up as one of the internet success stories, a vast collaborative knowledge project that has largely resisted the decline and disorder we've seen on so many other platforms.
But it's now facing a new kind of threat: the flood of AI-generated content. Editors have a name for it (not just editors, by the way; we do as well): it's called AI slop. And it's becoming harder to manage as large language models make it easy to churn out articles that look convincing on the surface but are riddled with fabricated citations, clumsy phrasing, or even remnants of chatbot prompts like "as a large language model." Until now, the process of removing bad articles from Wikipedia has relied on long discussions within the volunteer editor community to build consensus, sometimes lasting weeks or more. That pace is no match for the volume of junk AI can generate. So Wikipedia has now introduced a new defense: a speedy deletion policy that lets administrators immediately remove articles if they clearly bear the hallmarks of AI generation and contain bogus references. It's a pragmatic fix, they say: not perfect, but enough to stem the tide and signal that unreviewed AI content has no place in an encyclopedia built on human verification and trust.

This development is more than just an internal housekeeping matter. It highlights the broader challenge of how open platforms can adapt to the scale and speed of generative AI without losing their integrity. And it comes at a moment when Wikipedia is under pressure on another front: regulation. Just this month, it lost a legal challenge to the UK's Online Safety Act, a ruling that raises concerns about whether its volunteer editors could be exposed to identity checks or new liabilities. The court left some doors open for future challenges, but the signal is clear: the rights and responsibilities of platforms like Wikipedia are being redrawn in real time. Put together, these two stories, the fight against AI slop and the battle with regulators, show us that even the most resilient online communities are entering a period of profound change.
And that makes Wikipedia a fascinating case study for what lies ahead for all digital knowledge platforms. For communicators, these developments at Wikipedia matter deeply. They touch on questions of credibility, how we can trust the information we rely on and share, and on the growing role of regulation in shaping how online platforms operate. And there are other implications too, from reputation risks when organizations are misrepresented to the lessons in governance that communicators can draw from how Wikipedia responds. So, Shel, there's a lot here for communicators to grapple with. What do you see as the most pressing for communicators right now?

Shel Holtz (02:52)
Well, I think the most pressing is being able to trust that the content you see is accurate and authentic and able to be used in whatever project you're using it for. And Wikipedia, we know based on how it's configured, has always been a good source for accurate information because it is community edited; errors are usually caught. We have talked about ...