January 1, 2025

Prompt Engineering Conversational AI Agents for Human Interaction

Designing conversational AI isn't just about what the AI says — it's about how the whole experience feels to the person on the other side.

Conversational AI agent interface design showing a natural, human-centered chat experience

Overview

As AI-powered conversational agents become more common in digital products, the UX challenge has evolved. It's no longer just about whether the AI can answer questions — it's about whether users trust it, understand it, and want to keep using it. Poor conversational design erodes that trust fast: confusing responses, unexpected behavior, and a lack of transparency about what the AI can and can't do all contribute to users abandoning the experience.

SeaLab partnered with a client building a conversational AI agent to tackle both sides of this challenge: the prompt engineering that shapes how the AI responds, and the UX design that shapes how users experience it.

The Challenge

Conversational AI introduces a design problem that traditional UX doesn't fully account for: the interface is generative. Unlike a button or a form, a conversational AI response is different every time. Designing for that variability — ensuring the experience feels consistent and trustworthy even when outputs vary — requires a different approach.

Specific challenges on this project:

  • Users didn't understand what the agent could and couldn't do, leading to frustrated expectations
  • AI responses were accurate but felt clinical and off-brand
  • There was no clear recovery path when the AI misunderstood a request
  • The onboarding experience didn't establish appropriate expectations upfront

Our Approach

Prompt engineering as UX work. The way a prompt is written directly shapes the user experience. We worked closely with the client to craft prompts that produced responses that were accurate, appropriately scoped, and on-brand in tone. This included defining the agent's persona, establishing boundaries for what it would and wouldn't engage with, and building in fallback behaviors for edge cases.
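As a rough illustration of that structure (persona, scope boundaries, fallback behavior), a system prompt can be assembled from those pieces rather than written as one monolithic block. Everything below is a hypothetical sketch with made-up copy, not the client's actual prompts:

```python
# Illustrative sketch: building a system prompt from the three ingredients the
# case study names (persona, scope boundaries, fallback behavior).
# All names and wording here are invented for the example.

PERSONA = (
    "You are Ada, a friendly product assistant. "
    "You are warm, concise, and plain-spoken."
)

IN_SCOPE = [
    "Answer questions about account setup and billing.",
    "Help users find relevant help-center articles.",
]

OUT_OF_SCOPE = [
    "Legal or medical advice",
    "Questions about unreleased features",
]

FALLBACK = (
    "If a request is outside your scope, say so plainly "
    "and suggest one in-scope thing you can help with instead."
)

def build_system_prompt() -> str:
    """Assemble persona, boundaries, and fallback into one system prompt."""
    lines = [PERSONA, "", "You can help with:"]
    lines += [f"- {item}" for item in IN_SCOPE]
    lines += ["", "You must not engage with:"]
    lines += [f"- {item}" for item in OUT_OF_SCOPE]
    lines += ["", FALLBACK]
    return "\n".join(lines)

print(build_system_prompt())
```

Keeping each ingredient separate makes it easy to iterate on tone without accidentally loosening the scope boundaries, and vice versa.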

Designing for the failure states. One of the most important — and most overlooked — parts of conversational AI UX is what happens when things go wrong. We mapped every failure mode: misunderstood requests, out-of-scope questions, ambiguous inputs, and edge cases. For each, we designed clear, human recovery paths that kept users moving forward rather than hitting dead ends.

Onboarding that sets honest expectations. Users who understand what an AI agent can do are more forgiving when it hits its limits. We redesigned the onboarding experience to clearly communicate the agent's capabilities and scope upfront — using examples, starter prompts, and honest framing — so users arrived at the conversation with the right expectations.
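One way to make that honest framing concrete is an onboarding card that pairs a capability statement with its limits and a few clickable starter prompts. This is a hypothetical sketch with invented copy, not the shipped onboarding:

```python
# Illustrative sketch: an onboarding message shown before the first user turn,
# stating capabilities AND limits, plus starter prompts to seed the conversation.

CAPABILITIES = (
    "I can help with account setup, billing questions, "
    "and finding help articles."
)
LIMITS = "I can't access your order history or give legal advice."

STARTER_PROMPTS = [
    "How do I reset my password?",
    "Explain the charges on my last invoice",
    "Find the help article on exporting data",
]

def onboarding_card() -> str:
    """Render the onboarding card as plain text."""
    lines = [CAPABILITIES, LIMITS, "", "Try one of these to get started:"]
    lines += [f"  * {prompt}" for prompt in STARTER_PROMPTS]
    return "\n".join(lines)

print(onboarding_card())
```

Stating the limits alongside the capabilities is the "honest framing" part: users who hit a boundary later have already been told it exists.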

Applying the C.L.E.A.R. framework. SeaLab's framework for AI UX — Control, Learnability, Explainability, Accountability, and Responsiveness — shaped the interaction design throughout. Users needed clear control over the conversation, legible explanations of AI outputs, and visible responsiveness when they provided feedback or corrections.

Tone and voice alignment. The AI's responses were technically accurate but felt inconsistent with the client's brand. We worked through a set of voice guidelines for the agent — not scripting every response, but establishing the persona, vocabulary, and register that made the agent feel like a natural extension of the brand.

Results

The redesigned conversational experience felt more human, more trustworthy, and more useful. Users understood what the agent could do, knew how to get the most out of it, and had clear paths forward when something didn't land right.

The prompt engineering work also produced more consistent, on-brand outputs — reducing the variance that had previously made the experience feel unpredictable.

"Getting conversational AI right is as much a design problem as it is a technical one. The prompts, the failure states, the onboarding — all of it shapes how users feel about the product."

— SeaLab Design Team


Building a conversational AI product? Let's talk about designing it the human way.