Designing for Essential Complexity
See how conducting AI, not just using it, unlocks higher-impact work.
Across hallway chats, customer interviews, and research sessions, a theme keeps surfacing: adopting AI means adopting more complexity.
Complexity is usually coded as something bad, like clunky UI or confusing setup flows. But the complexity we’re seeing now is different. It’s essential complexity: the kind that emerges when humans reason with AI, shape its behavior, and collaborate with it. This matters because driving real automation means tackling real complexity. The simple queries get automated quickly. The meaningful gains come from the edge cases, the exceptions, the multi-step problems. Fin is already capable of handling those complex queries—and teams are proving that every week—but getting there requires a different kind of reasoning.
As we move from using software to orchestrating it, we have to ask, “How do I teach this system to think and act like us?”
And that shift is redefining how customers behave, how teams learn, and how we design.
Complexity needs a conductor
Across our research with Fin customers, one thing is clear: the work of AI is new work, and people are rising to meet it.
AI has made support teams faster and more scalable, but it has also expanded their roles. Instead of simply resolving conversations, they now shape Fin’s reasoning by:
Deciding how it behaves
Tuning prompts
Mapping workflows
Debugging outcomes
Training it with content
Early on, stepping into orchestration often feels like opening a machine and seeing more moving parts than expected. Teams get their first real glimpse of the many decisions that power AI behavior: which guidance to use, how to phrase instructions, what data Fin needs, and how small changes can influence downstream outcomes. The work shifts from “configure this setting” to “understand how the system thinks,” which can be disorienting before it becomes empowering.
But as teams progress, many step into a new identity: the orchestrator—the person responsible for shaping how Fin thinks, behaves, and continually improves.
A hospitality customer described how their work has evolved into designing Fin’s workflows, identifying content gaps, and building roadmaps for data improvements to reach an ambitious resolution-rate target.
Another customer described how they started as a CX manager and have since become their company's AI system strategist.
Some teams have even built dedicated Fin QA groups: former Tier 1 reps who now review Fin's conversations for accuracy, especially where trust is paramount.
This progression from early uncertainty to confident orchestration is becoming a common pattern. Customers aren’t just using AI anymore. They’re conducting it. And this shift is opening new doors for people. As teams learn to guide and shape Fin, many are stepping into more advanced, higher-impact roles.
Designing for understanding, not simplicity
For decades, success in product and UX design was equated with simplicity: fewer clicks, faster task completion, no friction. But in the world of AI, hiding complexity hides understanding, and without understanding, people lose agency.
When AI systems reason, customers need to see that reasoning in order to trust it.
In our research, customers repeatedly tell us they want visibility into how Fin reached a decision: what data it used and why it responded the way it did, so they can judge its reasoning, not just its answers. Without that, they struggle to build accurate mental models.
For example, customers have recently asked for:
Simulations that highlight which instruction step Fin followed so they can trace its reasoning.
The ability to validate every customer path in a procedure during QA before deploying it.
Ways to test common customer queries in bulk and automatically see which content and guidance influenced Fin's answer (a rough sketch follows this list).
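To make that last request concrete, here is a minimal sketch of what a bulk-testing harness could look like. Every name in it is hypothetical: the simulate stub and the SimulationResult shape are illustrative stand-ins, not Fin's actual API.

```python
from dataclasses import dataclass

@dataclass
class SimulationResult:
    answer: str
    sources: list[str]          # content that informed the answer
    guidance_steps: list[str]   # instruction steps that were followed

def simulate(query: str) -> SimulationResult:
    # Hypothetical stand-in: a real harness would call the agent's
    # simulation API here and return its actual reasoning trace.
    return SimulationResult(
        answer=f"(simulated answer to: {query})",
        sources=["help-center/refund-policy"],
        guidance_steps=["identify intent", "check eligibility", "answer from docs"],
    )

common_queries = [
    "How do I reset my password?",
    "Can I change my booking date?",
    "What is your refund policy?",
]

for query in common_queries:
    result = simulate(query)
    print(f"Q: {query}")
    print(f"A: {result.answer}")
    print(f"   sources:  {', '.join(result.sources)}")
    print(f"   guidance: {' -> '.join(result.guidance_steps)}")
```

Even a toy loop like this makes the point: the value is not the answer alone, but the trace of sources and guidance steps behind it.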
These orchestrators aren’t asking for simplicity. They’re asking for a clear view into why Fin behaved the way it did and how they can improve it.
And this is where our product work becomes crucial. Design patterns like simulation highlighting, source breakdowns, and explainability surfaces show how a system came to a conclusion. The example below shows how orchestrators can run fully simulated customer conversations from start to finish to test how Fin will respond. This helps them see what Fin is doing, how it is reasoning, and where it passes or fails.
These help customers move from uncertainty → understanding → mastery.
Designing for AI is no longer only about removing friction.
It’s about building confidence through transparency.
Organizations as learning systems
Essential complexity doesn't just reshape individual work; it reshapes how teams function.
As customers adopt AI, they begin developing:
New languages (“prompt,” “context window,” “workflow”)
New rituals (AI review councils, prompt libraries, content-readiness sprints)
New roles that blend support, data, engineering, and design
One team handling sensitive patient data reviewed every Fin interaction daily to guarantee accuracy, a level of rigor that turned AI oversight into a core competency.
And we’re starting to see upskilling everywhere. Some companies are training team members on prompting. Others are learning how LLMs reason so they can design better inputs. And communities like Support Driven are full of support reps swapping prompts, debugging AI tools like Claude, ChatGPT, and Gemini, and helping each other build reliable ways to categorize and analyze support data.
Teams aren’t just adopting AI; they’re learning in public and becoming more capable together.
This is the organizational version of orchestration: a shift from "AI as tool" to "AI as collaborative system."
But there’s a balance to strike
Humans are natural complexity generators.
Every new feature, workflow, data source, and exception adds another instrument to the orchestra. Not all of it is essential.
Some complexity comes from our own building: overlapping systems, fragmented navigation, fast-paced changes that increase learning cost.
Our research shows this clearly. Fin has grown more complex as its capabilities expanded, and customers now face multiple ways of solving the same problem.
Reducing accidental complexity helps customers reach value faster.
But essential complexity—the reasoning, prompting, and teaching—is part of what makes AI powerful.
Our responsibility as builders is to stay analytical about the complexity we create.
We decide whether the system sounds unified or chaotic.
We get back what we put in.
A closing thought
We often ask “How can we make AI easier to use?” But a better question may be “How can we help people become more capable users of AI?”
Essential complexity isn’t a flaw to fix. It’s a literacy to build, a sign that something new is taking shape: humans and AI learning to think together.
Our opportunity now is to design products, education, and systems that help customers master that literacy faster and with more confidence than anywhere else.
•••



