January 1, 2025

Designing for AI the Human Way

As AI becomes table stakes in digital products, the question isn't whether to include it — it's how to design it so people actually trust and use it.

[Illustration: human-centered AI design principles]

AI Is Everywhere. Good AI UX Is Not.

Walk into almost any product roadmap conversation today and you'll hear the same thing: we're adding AI. The pressure is real, the timelines are aggressive, and the technology is genuinely powerful. But the gap between what AI can do and what users actually experience — and trust — remains wide.

At SeaLab, we've been doing AI UX work long enough to have strong opinions about where most products go wrong. The diagnosis is almost always the same: they design for the AI's capabilities, not for the human's experience of those capabilities.

Those are two very different things.

What Makes AI UX Different

Traditional UX design operates on a principle of predictability. Users click a button and something happens. They fill out a form and get a result. The interface is deterministic — the same input produces the same output.

AI breaks that contract. Outputs vary. Confidence levels fluctuate. The system sometimes gets things wrong. Designing a good experience on top of that variability requires thinking differently about trust, transparency, and control.

Most AI UX failures fall into a few predictable categories:

The black box problem. The AI produces an output and gives the user no signal about why, how confident it is, or what factors it considered. Users can't evaluate the output, so they either trust it blindly (risky) or don't trust it at all (wasteful).

The loss of control. AI takes an action — summarizes something, recommends something, auto-fills something — and the user has no clear way to override, correct, or undo it. Users feel like passengers in their own product.

The overpromise. Onboarding sets expectations the AI can't meet. Users discover the limits the hard way, after they've already committed to using the feature.

The cliff edge. When the AI fails or reaches its limits, there's no recovery path. The experience just stops, leaving users stranded.
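That last failure mode is a design choice, not a technical inevitability. Here's a minimal sketch, in TypeScript with illustrative names (nothing here is a real API), of what designing away the cliff edge can look like: the AI call always resolves to a state the interface can render, and failure carries its own recovery options.

```typescript
// A minimal sketch of a "no cliff edge" pattern: every AI call resolves to a
// state the UI can render, including an explicit recovery path on failure.
// fetchAiSummary and the confidence threshold are illustrative assumptions.

declare function fetchAiSummary(
  text: string
): Promise<{ summary: string; confidence: number }>;

type RecoveryAction =
  | { kind: "retry"; label: string }
  | { kind: "manual"; label: string }     // hand the task back to the user
  | { kind: "escalate"; label: string };  // route to a human

type AiResult =
  | { status: "ok" | "low_confidence"; summary: string; confidence: number }
  | { status: "failed"; recovery: RecoveryAction[] };

async function summarize(text: string): Promise<AiResult> {
  try {
    const { summary, confidence } = await fetchAiSummary(text);
    // Low confidence is a state the UI shows honestly, not something to hide.
    return {
      status: confidence >= 0.7 ? "ok" : "low_confidence",
      summary,
      confidence,
    };
  } catch {
    // Failure is an expected state with next steps, never a dead end.
    return {
      status: "failed",
      recovery: [
        { kind: "retry", label: "Try again" },
        { kind: "manual", label: "Write it yourself" },
        { kind: "escalate", label: "Ask a person" },
      ],
    };
  }
}
```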

The C.L.E.A.R. Framework

SeaLab developed the C.L.E.A.R. framework as a practical guide for designing AI-powered experiences that hold up under real-world use.

C — Control. Users must be able to guide, edit, or override AI outputs. Default to assistive, not automatic. Give users the steering wheel; let AI be the navigation.

L — Learnability. Design for first-try success. Use examples, microcopy, and intuitive prompts to help users understand what the AI can do and how to get the most out of it. Don't assume people know how to prompt.

E — Explainability. Show users why the AI gave a specific result. Highlight contributing factors, confidence signals, or data sources. "Here's what I found, and here's where it came from" is far more trustworthy than just "here's the answer."

A — Accountability. Build for when things go wrong. Include clear error messages, correction tools, and escalation paths. The question isn't if your AI will get something wrong — it's what happens to the user when it does.

R — Responsiveness. AI should adapt to feedback. When users correct, reject, or modify an AI output, the system should acknowledge it and adjust. Personalization changes must feel visible and earned.
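One way to keep the framework honest in implementation is to treat it as a contract that every AI surface in the product must satisfy. Here's a minimal sketch in TypeScript; the shape and names are illustrative, not a published SeaLab API.

```typescript
// C.L.E.A.R. as a UI contract: if a surface can't supply one of these fields,
// that's a design gap to resolve, not an implementation detail to skip.
// All names are illustrative.

interface AiSuggestion {
  output: string;

  // Control: the user accepts, edits, or discards; nothing auto-commits.
  onAccept: () => void;
  onEdit: (revised: string) => void;
  onDismiss: () => void;

  // Learnability: show what the feature can do before the first attempt.
  examplePrompts: string[];

  // Explainability: why this result, and how sure the system is.
  confidence: number; // 0..1, rendered as a signal, not a raw score
  sources: { label: string; url?: string }[];

  // Accountability: a path for when the output is wrong.
  onReportIssue: (note: string) => void;

  // Responsiveness: corrections visibly feed back into the system.
  onFeedback: (rating: "helpful" | "not_helpful") => void;
}
```

The point isn't the exact shape. It's that each letter of the framework becomes a field someone has to consciously fill in, or consciously decide to omit.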

What This Looks Like in Practice

On the FOMO.AI project, applying C.L.E.A.R. meant redesigning how AI outputs were surfaced on the dashboard — adding confidence signals, inline explanations, and clear correction paths alongside every AI-generated recommendation. The technology didn't change. The user's ability to trust and act on it did.

On the eLearning platform, we learned that users didn't trust AI for high-stakes tasks like grading and mentorship, so we pivoted the AI features toward lower-stakes reinforcement and self-testing, where users were comfortable with AI assistance. Meeting users where their trust actually is, rather than where you wish it were, is one of the most important principles in AI UX.

On the conversational AI project, applying the framework meant treating prompt engineering as design work: crafting the AI's persona, tone, and fallback behaviors so that the experience felt consistent and human even when the outputs varied.
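As a sketch of what that can look like once it reaches code, here's one way to keep persona, tone, and fallback behavior in a single reviewable artifact. The structure and strings below are ours for illustration; they're not taken from the project.

```typescript
// Treating the prompt as a design artifact: persona, tone, and fallback
// behavior live in one reviewable config instead of strings scattered
// through the codebase. Everything here is an illustrative assumption.

interface AssistantPersona {
  name: string;
  tone: string;          // reviewed like product copy, not like code
  neverDo: string[];     // hard boundaries the persona must not cross
  fallbackLine: string;  // what the assistant says at the edge of its ability
}

const persona: AssistantPersona = {
  name: "Guide",
  tone: "warm, concise, never falsely certain",
  neverDo: ["invent facts", "pretend to be human", "hide uncertainty"],
  fallbackLine:
    "I'm not confident I can help with that. Want me to connect you with a person?",
};

// Compile the design decisions into the system prompt sent with each request.
function buildSystemPrompt(p: AssistantPersona): string {
  return [
    `You are ${p.name}. Your tone is ${p.tone}.`,
    `Never: ${p.neverDo.join("; ")}.`,
    `If you cannot help, say exactly: "${p.fallbackLine}"`,
  ].join("\n");
}
```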

The Bottom Line

AI UX is still UX. The fundamentals — understand your users, design for their goals, test with real people, iterate based on what you learn — don't change because AI is involved. What changes is the surface area of uncertainty you're designing around.

The studios and product teams that get this right will build AI-powered experiences people actually use and trust. The ones that don't will ship impressive demos that users quietly abandon.

We know which one we're building toward.


Working on an AI-powered product? Let's talk about designing it the human way.