# 7 Brutal Truths About AI Agents That Most Designers Ignore
Let me be real with you. Everyone's out here hyping AI agents like they're the second coming of the internet, but nobody's talking about what actually matters: how it feels to use them.
I don't care how intelligent your agent is. I don't care how many API calls it chains together or how clean your prompt engineering looks in the codebase. If someone interacts with your AI agent and walks away feeling confused, anxious, or completely out of control, you failed. Full stop. The experience is the product.
I'm Reina, Chief Experience Officer at StepTen. I've spent enough time designing around, with, and sometimes against AI agents to know this: the design layer is where agents live or die. Not the model. Not the architecture. The interface. The interaction. That feeling of "I got this" versus "what the hell is it doing?" Here are seven truths most teams are completely sleeping on.
## 1. Why Does Every AI Agent Feel Like Talking to a Broken Chatbot?
Because most teams ship the model and call it a day. No interaction design. No feedback loops. No thought given to what happens when the agent doesn't know something—which, let's be honest, is most of the time.
AI agents aren't chatbots. They're autonomous or semi-autonomous systems that do things on behalf of users. That difference is massive for UX because:
- Chatbots respond. Agents act.
- Chatbots fail with a polite "I didn't understand." Agents fail by quietly messing with your data.
- The stakes are higher, which means the design has to be tighter.
Yet most AI agent interfaces I encounter feel like someone slapped a text input on a page and shipped it. No progress indicators. No transparency about what's happening behind the scenes. No way to intervene when it's going off the rails. That's not a product. That's a liability waiting to happen.
## 2. What's the Actual UX Problem With AI Agents Right Now?
Trust. Plain and simple. The core issue is earning it, maintaining it, and recovering it when things inevitably go sideways.
Think about it. You're asking users to hand over control to something that might or might not do what they expect. That's a huge psychological leap. Most agent experiences offer zero support for making that jump.
Here's what trust actually looks like in agent UX:
- Transparency — Show what the agent is doing, step by step. In the moment, not after.
- Predictability — Users need to build a reliable mental model of what this thing can and can't do. When that model keeps breaking, trust dies fast.
- Recoverability — When the agent messes up (and it will), can the user undo? Can they steer it back without starting from zero?
- Consent — Before it does anything with real consequences, does it ask? Or does it just... go for it?
Every single one of these is a design decision. Not an engineering call. Not a PM decision. A design decision. If you don't have a designer at the table while building agents, you're building blind.
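Because consent and recoverability are design decisions that eventually land in code, here's a minimal sketch of an approval gate, assuming a simple callback-based UI. `AgentAction`, `ApprovalPrompt`, and `executeWithConsent` are illustrative names, not from any real agent framework:

```typescript
// Illustrative sketch only: these names are made up for this example.

type AgentAction = {
  description: string;
  consequential: boolean; // moves money, deletes data, emails people, etc.
  run: () => string;
};

// The UI supplies this: show the user the plan, return their yes/no.
type ApprovalPrompt = (description: string) => boolean;

// Low-stakes actions run directly; consequential ones cannot run without a yes.
function executeWithConsent(action: AgentAction, approve: ApprovalPrompt): string {
  if (action.consequential && !approve(action.description)) {
    return `Skipped: ${action.description} (user declined)`;
  }
  return action.run();
}

// Usage: drafting is safe to automate; sending to 2,000 people is not.
const draft: AgentAction = {
  description: "Draft a reply email",
  consequential: false,
  run: () => "Draft created",
};
const send: AgentAction = {
  description: "Send email to 2,000 subscribers",
  consequential: true,
  run: () => "Email sent",
};
```

The point of the shape: the agent can always act on low-stakes tasks, but consequential ones are structurally incapable of running without explicit consent.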
## 3. Should AI Agents Even Have a Conversational Interface?
Not always. And this might be the most ignored question in the entire space right now.
ChatGPT has us all thinking every AI interaction needs to be a conversation. But conversation is high-friction. It forces users to translate what they want into perfect words, which is exhausting. Sometimes a button beats a prompt. Sometimes a checklist destroys a chat.
The right interface depends on:
- Task complexity — Simple, repeatable stuff? Give me clean structured UI with smart agent automation underneath. Complex, open-ended problems? Then conversation might make sense.
- User expertise — Power users don't want to explain themselves every time. New users don't even know what to ask.
- Risk level — If this thing is moving money, booking medical stuff, or deleting files? I want explicit controls, not vibes-based chatting.
The best agent experiences I've seen are hybrid. Structured interfaces for the things we do all the time, with conversational fallback for the weird edge cases. That's the sweet spot. Stop defaulting to chat just because it's trendy.
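That hybrid setup can be as simple as a router: structured handlers catch the repeatable tasks, and anything else falls through to conversation. A sketch under assumed names (`structuredHandlers`, `route`), not a prescription:

```typescript
// Illustrative hybrid router: patterns and handler names are made up.

type Handler = (input: string) => string;

// Structured paths for the things users do all the time.
const structuredHandlers: { pattern: RegExp; handle: Handler }[] = [
  { pattern: /^export report/i, handle: () => "Opening export dialog" },
  { pattern: /^schedule meeting/i, handle: () => "Opening scheduler form" },
];

// Conversational fallback for the weird, open-ended edge cases.
const conversationalFallback: Handler = (input) =>
  `Sending to conversational agent: "${input}"`;

// Try structured UI first; only fall back to chat when nothing matches.
function route(input: string): string {
  const match = structuredHandlers.find((h) => h.pattern.test(input));
  return match ? match.handle(input) : conversationalFallback(input);
}
```

The design choice worth noting: chat is the fallback, not the front door.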
## 4. What Does Good Feedback Design Look Like for AI Agents?
Good feedback means the user always knows what the agent is doing, why it's doing it, and what happens next. Always. No exceptions.
This is where I'm seeing the most design debt pile up. Teams build agents that go off and do these multi-step marathons—research, draft, execute—and the user is left staring at a spinner. Or worse, staring at nothing.
Here's the framework I actually use:
- Before action: Tell them the plan. Show the steps. Let them approve or tweak it.
- During action: Real-time progress. Not a generic loading bar—actual status. "Searching 3 sources... Comparing results... Drafting summary..."
- After action: Present results with clear provenance. Where'd this come from? What decisions did the agent make? What can I change?
Think of it like tracking a package. You don't just want to know it shipped. You want to see it move through every checkpoint. AI agents need that same visibility, and designing it well is hard. But not optional.
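The before/during/after framework translates directly into a status-event stream the UI can render. A minimal sketch, assuming a callback-based channel; `StatusUpdate` and `runWithProgress` are illustrative names:

```typescript
// Illustrative sketch of step-by-step progress reporting.

type Phase = "planned" | "running" | "done";

type StatusUpdate = { step: string; phase: Phase };

// Run a multi-step task, emitting a status update at every checkpoint so the
// UI can show real status ("Searching 3 sources...") instead of a spinner.
function runWithProgress(
  steps: { name: string; work: () => void }[],
  emit: (update: StatusUpdate) => void,
): void {
  // Before action: announce the full plan so the user can approve or tweak it.
  for (const step of steps) emit({ step: step.name, phase: "planned" });

  // During and after each step: actual status, not a generic loading bar.
  for (const step of steps) {
    emit({ step: step.name, phase: "running" });
    step.work();
    emit({ step: step.name, phase: "done" });
  }
}

// Usage: collect the updates the way a UI would.
const log: string[] = [];
runWithProgress(
  [
    { name: "Searching 3 sources", work: () => {} },
    { name: "Drafting summary", work: () => {} },
  ],
  (u) => log.push(`${u.step}: ${u.phase}`),
);
```

Each update is a checkpoint the user can see move, exactly like the package-tracking analogy above.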
## 5. Why Is Accessibility Always an Afterthought in Agent Design?
Because the people building these agents aren't thinking about the full range of humans who'll actually use them. And that bothers me more than I can politely express.
Conversational interfaces create their own special accessibility headaches:
- Screen reader compatibility — Dynamic streaming text is a nightmare for assistive tech if you don't structure it right.
- Cognitive load — Long, unstructured walls of text overwhelm users with cognitive disabilities. Chunking, progressive disclosure, and clear hierarchy aren't nice-to-haves. They're requirements.
- Motor accessibility — If your agent only works through typing, you've shut out people who can't type well. Voice input is great, but then you need solid error handling when speech recognition fails.
- Visual design — Agents that dump text with no formatting, no contrast, no responsive layout? Unusable for huge portions of your audience.
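For the streaming-text problem specifically, one common mitigation is to buffer tokens and announce whole sentences to a polite live region instead of spamming assistive tech token by token. A rough sketch, with `announce` standing in for the actual live-region update and the sentence heuristic deliberately naive:

```typescript
// Illustrative sketch: buffer streamed agent tokens, announce full sentences.

class LiveRegionBuffer {
  private buffer = "";

  // announce() stands in for writing to a polite aria-live region.
  constructor(private announce: (text: string) => void) {}

  // Called for every streamed token; only complete sentences get announced.
  push(token: string): void {
    this.buffer += token;
    let match: RegExpMatchArray | null;
    // Flush every complete sentence currently sitting in the buffer.
    while ((match = this.buffer.match(/^[\s\S]*?[.!?](\s|$)/))) {
      this.announce(match[0].trim());
      this.buffer = this.buffer.slice(match[0].length);
    }
  }

  // Flush whatever remains when the stream ends.
  end(): void {
    if (this.buffer.trim()) this.announce(this.buffer.trim());
    this.buffer = "";
  }
}
```

Same content reaches the user; the screen reader just gets it in digestible pieces instead of a firehose.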
Inclusion isn't something you tack on at the end. It's a philosophy that shapes every decision from day one. If your agent feels perfect for a 28-year-old developer on a MacBook Pro and nobody else, you haven't built a product. You've built a demo.
## 6. Where Do Most Teams Get the Agent-to-Human Handoff Wrong?
At the seams. Those awkward transition points where the agent gives up and passes control to a human—or worse, just stops dead.
This is an old UX problem in new clothes. We've seen it in phone trees, chatbot-to-human transfers, self-checkout machines that suddenly need help. The principle hasn't changed: handoffs should feel like a warm introduction, not a cold drop.
Bad handoffs look like this:
- Agent says "I can't help" and leaves you hanging with no next step.
- You get transferred and have to explain everything all over again.
- No indication that the handoff even happened—you're talking to a human but still think it's the agent.
Good handoffs feel completely different:
- The agent summarizes what it tried and forwards that context.
- The user gets told clearly: "I'm connecting you with a human who can help. Here's what I've already shared with them."
- The transition is seamless in the interface—no page reloads, no channel switching, no friction.
Design the seams. That's where experiences break.
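That warm-introduction pattern boils down to a context payload the agent assembles before it steps aside. A sketch where every field name is an illustrative assumption, not any real support-desk API:

```typescript
// Illustrative handoff payload: the agent forwards what it tried so the
// user never has to repeat themselves.

type HandoffContext = {
  userGoal: string;
  attemptedSteps: string[];
  lastError: string;
  transcriptSummary: string;
};

// Build both sides of the handoff: context for the human agent, and an
// explicit message for the user so the transition is never a silent swap.
function buildHandoff(ctx: HandoffContext): { forHuman: string; forUser: string } {
  const forHuman = [
    `Goal: ${ctx.userGoal}`,
    `Tried: ${ctx.attemptedSteps.join("; ")}`,
    `Stuck on: ${ctx.lastError}`,
    `Summary: ${ctx.transcriptSummary}`,
  ].join("\n");
  const forUser =
    "I'm connecting you with a human who can help. " +
    "Here's what I've already shared with them: " +
    ctx.transcriptSummary;
  return { forHuman, forUser };
}
```

Two audiences, one seam: the human gets the full trail, the user gets told the handoff is happening.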
## 7. Is Your AI Agent Actually Solving a Problem or Just Performing Intelligence?
This is the question that keeps me up at night. Because a lot of what I'm seeing in the AI agent space is pure theater.
An agent that can do 47 things but doesn't do any of them well. An agent that answers in milliseconds but doesn't actually help. A beautiful animated interface producing garbage outputs.
UX isn't about making things look smart. It's about making them useful. Before you design a single pixel, answer these questions honestly:
- What specific problem does this agent solve that couldn't be solved with a simpler tool?
- What does the user's life look like before the agent, and what does it look like after?
- If this agent disappeared tomorrow, would anyone actually notice?
If you can't answer those clearly, you don't need a designer yet. You need a real product strategy. Once you have that, then bring me in to make the experience feel inevitable.
## Frequently Asked Questions
How do I make my AI agent feel trustworthy to users? Transparency and control. Show users what the agent is doing at every step, let them approve consequential actions before they happen, and always provide a clear way to undo or override. Trust is built through predictable behavior over time — not through marketing copy that says "powered by AI."
Should I use a chat interface for my AI agent? Not by default. Chat works well for open-ended, complex tasks where users need flexibility. But for structured, repeatable workflows, traditional UI elements like forms, buttons, and selection menus — enhanced by agent intelligence underneath — often create a better experience. Match the interface to the task, not the trend.
What's the biggest UX mistake teams make with AI agents? Shipping without feedback design. Users are left in the dark about what the agent is doing, why it made certain choices, and what they can do if something goes wrong. Design the before, during, and after of every agent action, and you'll avoid the majority of trust and usability issues.
How do I make my AI agent accessible? Start with semantic HTML and ARIA labels for dynamic content. Chunk long responses into digestible sections. Offer multiple input modalities — not just text. Test with screen readers, keyboard-only navigation, and real users with disabilities. Accessibility isn't a checklist; it's a continuous practice.
Do AI agents replace traditional UX design? No. They make UX design more important than ever. Agents introduce new design challenges — trust, transparency, autonomy, error recovery — that didn't exist at this scale before. If anything, the rise of AI agents is the strongest argument for having experienced designers deeply embedded in your product team.
Here's the bottom line. AI agents are going to reshape how people interact with software. That's real. But the teams that win won't be the ones with the most powerful models. They'll be the ones who design experiences that feel clear, trustworthy, and genuinely human—even when a machine is doing the heavy lifting.
Stop shipping agents without designing them. The model is not the product. The experience is.
Now go audit your agent's UX. I promise you'll find at least three things that would make me twitch. Fix those first. ✦

