Slowing Down to Be Helpful: Designing AI Chat Personality and Pacing


Between October 2025 and January 2026, we overhauled Umamii’s core AI chatbot. Our early versions taught us that speed isn’t always the ultimate goal in conversational UX. Sometimes, you need to slow down to be truly helpful.

The “Too Eager” Problem

Early feedback highlighted a surprising issue. The AI gave good answers, but it was too eager. It replied too quickly, creating an unnatural rhythm that users found off-putting.

We discovered that an AI providing instant, hyper-eager answers actually alienates users; building trust requires defining a distinct voice, tone, and intentional pacing.

This issue stemmed directly from our initial system prompt (v1.0.0). It was weak on conversational intent and didn’t enforce a clarifying, question-first approach. Because it rushed to conclusions, the agent consistently offered premature recommendations, producing less accurate suggestions and a robotic user experience.

On top of the pacing issues, heavy chatting led to persistent error messages. We traced these back to a database schema mismatch between the production and development environments, where parameters were missing default values. We also battled mid-conversation authentication failures. We fully resolved both issues by mid-January, guaranteeing stable chat sessions.

Defining Voice and Tone

To fix the core UX, we realized we needed to formally define the AI’s personality. The initial version of the AI chat already established our energetic, food-savvy “Umamii AI” persona. It laid out core guidelines like using a friendly tone and including “Umamii Pro Tips.”

However, we needed to make it measurable. We outlined the Core Brand Tone, defined specific Clarification Styles, and standardized our Response Structures. Giving the AI a clear personality gave us a framework for evaluating its answers: we rigorously tested whether every response used a friendly tone, included emojis, and offered a unique, non-robotic insight.
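One of those persona checks can be sketched as a simple automated assertion. This is a minimal illustration, not our actual test harness; the function name and emoji ranges are assumptions.

```python
import re

# Matches common emoji and symbol blocks (an approximation, not exhaustive).
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def passes_persona_check(reply: str) -> bool:
    """Hypothetical check: a non-empty reply that includes at least one emoji."""
    has_emoji = bool(EMOJI_RE.search(reply))
    is_substantive = len(reply.strip()) > 0
    return has_emoji and is_substantive

passes_persona_check("Great pick! 🍜 Umamii Pro Tip: go early to skip the line.")  # True
```

Checks like this are deliberately cheap, so they can run on every response in a test suite.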

Iteration and Prompt Engineering

With the personality defined, we introduced major architectural shifts to the system prompts. We needed to move the agent from reactive to proactive while securing data integrity.

In version 1.2.0, we introduced the Consultative Mandate. This was a critical rule that explicitly forced the agent to ask clarifying questions about cuisine, vibe, price, or occasion whenever it received a vague prompt. It couldn’t call any internal tools until it gathered this context. This marked a key strategic shift to a genuinely consultative approach.
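In code terms, the Consultative Mandate behaves like a gate in front of the tool layer. Here is a minimal sketch, assuming the four slots named above and an illustrative "at least two slots filled" threshold (the real rule lives in the prompt, not in application code):

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Slot names come from the mandate; the threshold below is an assumption.
SLOTS = ("cuisine", "vibe", "price", "occasion")

@dataclass
class ConversationContext:
    slots: Dict[str, str] = field(default_factory=dict)

    def missing(self) -> List[str]:
        return [s for s in SLOTS if not self.slots.get(s)]

def next_action(ctx: ConversationContext) -> str:
    """Keep asking clarifying questions until enough context exists to call tools."""
    if len(SLOTS) - len(ctx.missing()) < 2:  # illustrative threshold
        return f"clarify:{ctx.missing()[0]}"
    return "call_tool"
```

The point of the gate is ordering: the clarifying question is always cheaper than a badly parameterized tool call.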

Next up was data integrity in version 1.4.x. To eliminate hallucination, we introduced a strict Zero-Tolerance Tool Policy right at the top of the prompt. We explicitly forbade the AI from using its pre-trained knowledge for restaurant details. Instead, we reframed the act of asking questions as a methodical way to gather tool parameters. We instituted a strict “no tool, no answer” rule.
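The "no tool, no answer" rule can be pictured as a guard on the response path. A minimal sketch, assuming tool results arrive as dicts with a "name" key (an illustrative shape, not our real schema):

```python
from typing import List, Optional

def answer_restaurant_query(tool_results: Optional[List[dict]]) -> str:
    """Hypothetical guard: restaurant details may only come from tool output."""
    if not tool_results:
        # Zero-Tolerance Tool Policy: never fall back to pre-trained knowledge.
        return "Let me look that up first so I only share verified details!"
    names = ", ".join(r["name"] for r in tool_results)
    return f"Here's what I found: {names} 🍽️"
```

Refusing politely when the tool returns nothing is the whole policy in one branch: the model is never allowed to invent the missing data.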

These iterations paid off because we could measure them. We developed an AI Chat QA Playbook built on Prompt-as-Contract Testing, verifying that the agent rigidly adhered to the system prompt and the internal tool schema. We also spun up an automated pipeline that scores every chat session against our UX goals, closely tracking metrics like turns_to_tool.
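A metric like turns_to_tool is straightforward to compute from a transcript. This sketch assumes each transcript entry is a dict with a "role" and an optional "tool_call" flag; the real pipeline's schema may differ.

```python
from typing import List, Optional

def turns_to_tool(transcript: List[dict]) -> Optional[int]:
    """Count assistant turns up to and including the first tool call."""
    turns = 0
    for msg in transcript:
        if msg.get("role") != "assistant":
            continue
        turns += 1
        if msg.get("tool_call"):
            return turns
    return None  # the agent never reached a tool call
```

A value of None is itself a signal: the session ended without the agent ever grounding its answer in a tool.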

A Smarter, More Stable Companion

By January 2026, the chatbot transformed into the experience we originally envisioned: a smarter and more stable food-savvy friend.

This final polish focused heavily on conversational memory and helpful guidance. We refined the AI with Short-Term Memory, allowing it to acknowledge previous context like the user’s location without needing a reminder. We also built in a Personalized Nudge feature. Once per session, after a solid recommendation, the AI offers a lightweight nudge suggesting the user write a quick review to get even better recommendations in the future.
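Both behaviors reduce to a small amount of per-session state. A minimal sketch, with illustrative names and nudge wording:

```python
from typing import Dict, Optional

class ChatSession:
    """Hypothetical per-session state for Short-Term Memory and the nudge."""

    def __init__(self) -> None:
        self.memory: Dict[str, str] = {}  # e.g. the user's stated location
        self.nudge_sent = False

    def remember(self, key: str, value: str) -> None:
        self.memory[key] = value

    def maybe_nudge(self, gave_recommendation: bool) -> Optional[str]:
        """Offer the review nudge at most once, and only after a solid rec."""
        if gave_recommendation and not self.nudge_sent:
            self.nudge_sent = True
            return "Loved it? A quick review helps me recommend even better spots! ✍️"
        return None
```

The once-per-session flag is what keeps the nudge feeling helpful rather than naggy.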

Balancing technical execution with thoughtful conversational UX turned a rigid query processor into a trusted dining companion.

TL;DR

Instant, hyper-eager AI responses can actually alienate users. We built trust by formally defining Umamii’s chatbot personality, enforcing a consultative “question-first” rule, and intentionally slowing down the pacing of its answers.