
Democratizing UX with AI

I've spent a lot of years arguing that most organizations have the wrong mental model of what a UX team is for. In the vast majority of organizations, UX is dramatically underinvested. You have one UX person, or at most a small team, supporting an organization with dozens of developers, product managers, and business analysts. Or you have a small digital team made up of a mix of disciplines and generalists, expected to raise the quality of every digital touchpoint across an organization of several thousand. In that environment, expecting UX to own and shape the entire user experience is not a strategy. It is wishful thinking dressed up as one.

The only approach that actually makes sense is democratization. Instead of trying to do everything yourselves, your job is to spread the capability: set the standards, train people, and give everyone who touches digital the knowledge and tools to apply UX best practice on their own. I've written about this for years, and most UX professionals I talk to agree with the principle. The problem has always been the execution.

The playbook was the best answer we had

For the past decade or so, the most sensible response to this challenge has been the digital playbook. A playbook, in this context, is a collection of policies, principles, standard operating procedures, and training material that documents how the organization should approach digital work. Done well, it does several things at once: it educates people who don't have a UX background, it standardizes how work gets done, and it gives the UX or digital team something to point at when a stakeholder wants to skip testing or cram twelve things onto a homepage.

The UK Government Digital Service manual is probably the best public example of this. Comprehensive, well-structured, and genuinely useful. It also took a significant amount of work to produce, and presumably even more work to get people to actually use.
That last part is the problem with most playbooks. They ask a lot of the people you want to reach. If a product manager wants to run a quick survey to inform a decision, they now need to find the right section of the playbook, absorb methodology they've never thought about before, learn to apply it to their specific situation, and avoid the dozen ways this kind of thing typically goes wrong. That is a reasonable request if surveys are their job. It is a significant ask if they have three other priorities and a deadline on Friday.

The playbook shifts the burden of UX knowledge from the UX team onto everyone else. In theory, fine. In practice, people are busy, and busy people take shortcuts. I say this having spent years advocating for playbooks, so make of that what you will.

What AI changes about this picture

I've been building out a library of AI skills for my own consulting practice over the past year or so, and somewhere along the way I realized they are doing the same job as a playbook, just in a radically different form. An AI skill, if you haven't come across the term, is a reusable standard operating procedure that an AI can follow on demand. You write it once, document the process in enough detail that an AI can apply it reliably, and from that point on anyone can use it without needing to understand the underlying methodology.

This is what makes them interesting at an organizational level. A well-designed AI skills library doesn't ask your product manager to read the playbook before running a survey. It lets them say, "I need to design a survey to find out why users are dropping off at checkout," and have an AI walk them through the process, applying your organization's standards as it goes. The best practice is embedded in the skill. The person using it doesn't need to have absorbed it first.
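To make the idea concrete, here is a minimal sketch of what one such skill might look like as a plain document an AI assistant follows. Everything in it is hypothetical, the file structure, the headings, and the checklist items are illustrative rather than a reference to any particular tool or vendor format:

```markdown
# Skill: Survey design

## When to use this skill
The user wants to run a survey or questionnaire to inform a product
or design decision.

## Process
1. Ask what decision the survey is meant to inform. If the user
   cannot name one, suggest a lighter-weight method instead.
2. Draft questions one at a time, checking each against the review
   rules below before moving on.
3. Propose a pilot: send the draft to a handful of colleagues and
   revise before wider release.

## Review rules
- Flag leading or loaded phrasing ("How much do you love the new
  checkout?").
- Flag double-barrelled questions that ask two things at once.
- Prefer behavioural questions ("When did you last...") over
  speculative ones ("Would you use...").
- Keep the survey short unless the user can justify its length.
```

The point of a sketch like this is that the methodology lives in the document rather than in anyone's head: updating the organization's standards means editing one file, not retraining everyone who might run a survey.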
That is a qualitatively different proposition from anything a static playbook can offer.

What an organizational AI skills library actually looks like

The specific skills worth building will vary from organization to organization. But for a UX or digital team trying to extend their influence, the candidates tend to cluster around the tasks that non-specialists most often get wrong.

Survey design is an obvious one. Writing questions that don't inadvertently bias the answers is harder than it looks, and most people who aren't researchers have no idea how their phrasing is leading respondents astray. A skill that guides someone through question design, flags leading language, and checks for common structural problems would save a lot of quietly useless survey data from being collected.

Prototype testing is another. The basics of a usability test, what to observe, what to ask, how to avoid putting words in a participant's mouth, are genuinely learnable. The problem is that someone needs to learn them before running the test, not during it. You could build skills for writing user stories that capture real intent rather than ...