UX Isn’t Ready for Agentic AI
(And That’s Not a Skills Problem)
There’s something familiar showing up in conversations about UX and AI right now:
Designers need to skill up. Researchers need to learn new tools. Everyone needs to get better at prompts.
Some of that is true. But it’s not the whole story.
What I keep coming back to is this: especially as more products introduce agentic AI, many of the hardest UX challenges we’re facing don’t seem to come from individual capability at all. They come from the systems we’re designing within: our processes, our assumptions, and how responsibility is handled once software starts acting on its own.
I don’t have a diagnosis or a set of answers; all I have is a starting point. This post is my attempt, as a practitioner, to name what feels fundamentally different about designing for agentic systems, and why our existing UX playbook is starting to strain.
UX was built for request–response systems
For most of its history, UX has been optimized around a stable interaction model:
- A user initiates an action
- A system responds
- We evaluate whether that response was usable, helpful, or clear
Our methods reflect this model. It shows up in usability testing, task completion rates, and conversion funnels. Even the language we use in critiques tends to center on discrete moments of interaction.
Agentic AI breaks this model.
When systems anticipate needs, take action in the background, generate content proactively, or make decisions without explicit user input, the “moment” we’re supposed to design for becomes harder to locate. And harder to evaluate.
In these systems, UX problems don’t always show up as obvious friction. They build up slowly, often outside the moments we traditionally test.
What changes when systems act on their own
When software begins acting independently, the experience stops being transactional and starts being behavioral.
Instead of asking:
- Can a user complete this task?
We’re suddenly dealing with questions like:
- Was this action appropriate in this context?
- Did the system overstep or under-communicate?
- How would a user even know this happened?
- Who is accountable if the outcome feels wrong?
Think about AI systems that:
- Rewrite or summarize content automatically
- Surface recommendations or suppress information
- Trigger actions based on inferred intent
None of these are inherently bad. But they introduce UX risks that don’t map neatly to screens, flows, or single interactions. The experience unfolds over time, and often outside the user’s immediate awareness.
That makes traditional UX validation feel… insufficient.
Why this doesn’t feel like a skills gap
When AI-driven experiences feel brittle or untrustworthy, it’s tempting to frame the issue as a capability problem: designers need new tools, researchers need new techniques, teams need more training.
But the friction I keep noticing doesn’t feel like a lack of talent. It feels like we’re missing infrastructure.
In many teams, there simply aren’t shared answers to questions like:
- Who owns the experience when AI takes action on a user’s behalf?
- When is human review required, and when isn’t it?
- How do we monitor UX quality once the system is live and evolving?
- Where does UX sit in governance conversations, if at all?
Without those structures, UX often gets pulled downstream and asked to refine outputs instead of helping shape the rules that govern system behavior.
No amount of prompt expertise fixes that.
What I’m trying to pressure-test
I don’t have a neat framework yet. What I do have are questions that keep resurfacing as I think about agentic systems more seriously:
- How do we design for trust when actions aren’t explicitly requested?
- How do we evaluate UX quality when failures are subtle or delayed?
- How does UX assert influence when speed is prioritized over resilience?
These aren’t conclusions. They’re hypotheses. And they’re exactly the kinds of questions that feel better explored with other practitioners than solved in isolation.
Why UX in ATX is starting here
As UX in ATX shifts back into regular, in-person conversations, I want us to focus less on polished answers and more on shared sensemaking.
Agentic AI is still new enough that many teams are navigating it by feel: experimenting, shipping, and adjusting as they go. That’s not a failure. It’s reality.
But it also means we need spaces where we can:
- Compare notes honestly
- Surface uncertainty without defensiveness
- Build shared language for experiences that don’t fit old models
That’s the role I hope UX in ATX can play this year.
An open invitation
If you’re working on AI-driven products and feeling like parts of the UX conversation don’t quite fit anymore, you’re not alone. And you’re not behind.
UX isn’t being replaced. It’s being stretched.
This post is an attempt to name that “stretch” and invite others into the conversation, whether you agree, disagree, or are still figuring out what questions to ask.
That’s where we’ll start.
