Why AI Is Changing How People Think, Not Just How They Work
From finding answers to forming them: a researcher's view.
If you want to understand the future of products, don't look at the interfaces; look at the behaviors forming quietly underneath them. Over the last year, our team has been watching something subtle but powerful happen: AI isn't just accelerating workflows; it's reshaping the way people think, decide, and express themselves.
This isn’t a story about faster outputs. It’s a story about shifting cognitive habits, and why understanding them matters for anyone building the next generation of tools.
A new cognitive default: “let me ask first.”
Interfaces used to be built on taps, clicks, and choices. Users navigated within a structure we designed. Now, they start with a question.
The rise of conversational interfaces has triggered a behavioral rewiring. Instead of scanning for buttons, people externalize their intent in natural language. Even those who never saw themselves as "technical" are suddenly writing tiny strategic briefs dozens of times a day.
This shift isn’t trivial. It transforms the mental model of how software works:
From “I operate the system” to “I collaborate with an intelligence.”
From linear tasks to dynamic loops.
From searching to shaping.
Behavior first. Interface second.
When people rely on AI, they reveal what they value
In our research, a pattern keeps surfacing. When people ask AI for help, the request is rarely just functional; it's aspirational. The prompt reflects how they wish they worked.
A few examples we’ve observed across roles:
Asking AI to challenge their assumptions, not just generate variations.
Prompting for speculative scenarios they don’t feel safe raising in meetings.
Using AI as a sounding board before committing to an opinion.
AI becomes a mirror, reflecting both the work and the worker.
This is where research becomes invaluable. By studying how people shape their prompts and questions, we uncover their fears, hopes, shortcuts, uncertainties, and unspoken rules of collaboration. It’s “methods with meaning.”
Understanding AI means understanding humans more deeply than ever.
Why this matters for builders
For product teams, the implications are huge:
Prompts are the new UI.
What people type reveals friction points far earlier than click maps ever could (a minimal logging sketch follows this list).
Personalization isn’t optional anymore.
LLMs flatten differences in experience level, so tools must adapt to wildly different mental models.
Research needs to focus on relationships, not just interactions.
Human-AI collaboration is becoming relational. People describe their models with trust, affection, and annoyance. They'll use pronouns and nicknames: "Let's ask my Chat, he'll know." It's closer to ethnography than usability testing.
The bar for clarity is higher than ever.
As AI generates more content, teams must provide stronger narrative scaffolding so meaning doesn’t get lost in a swirl of text.
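To make "prompts are the new UI" concrete, here's a minimal sketch of how a product team might capture prompt events alongside the clicks that preceded them, so researchers can study what people tried before turning to the AI. Everything here is illustrative, not any real analytics API: PromptEvent, logPromptEvent, and the /research/prompt-events endpoint are hypothetical names.

```typescript
// A minimal sketch of prompt-event logging for qualitative research.
// All names (PromptEvent, logPromptEvent, /research/prompt-events) are
// hypothetical illustrations, not a real library or endpoint.

interface PromptEvent {
  userId: string;            // pseudonymized ID, never raw PII
  timestamp: string;         // ISO 8601
  surface: string;           // where the prompt was typed, e.g. "editor-sidebar"
  promptText: string;        // the user's own words: the research signal
  precedingClicks: string[]; // last few UI actions, for friction context
}

// Send a prompt together with the UI context that led up to it.
async function logPromptEvent(event: PromptEvent): Promise<void> {
  await fetch("/research/prompt-events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

// Usage: a user clicks around a settings page, fails, then asks the AI.
logPromptEvent({
  userId: "u_4821",
  timestamp: new Date().toISOString(),
  surface: "settings-assistant",
  promptText: "how do I change who can see my dashboard?",
  precedingClicks: ["nav:settings", "tab:sharing", "tab:permissions"],
}).catch(console.error);
```

Pairing the prompt text with the preceding clicks is the point: the clicks show where the interface failed, and the prompt shows, in the user's own words, what they were actually trying to do.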
Behind the scenes: how our team studies human + AI behavior
Since you’re here, a little peek behind the curtain. We’re experimenting with hybrid methods across our research stack:
Comparative task challenges to test “human-only” vs. “human+AI” conversations.
Micro-ethnographies to study processes and workflows.
AI interviewers and surveys to help people express the uncomfortable and often unsaid without the judgment of a human moderator.
Longitudinal studies to explore how conversational patterns between humans and AI agents evolve over time.
We use AI to accelerate our own analysis too, but we stay anchored in impartiality and pragmatism. While speed matters, meaning matters more.
Where are we headed?
Human-AI collaboration is becoming a norm, not a novelty. But the biggest shift isn't technological; it's behavioral. Technological shifts have always been a mirror for psychology. The internet age made our innermost attitudes, likes, and concerns more visible through our Google searches and social posts.
We became more connected to other humans across distance and time, and that connection revealed insights into our communities, our echo chambers, and even our vulnerabilities to misinformation. Now we're becoming more connected not only to other humans but to AI as well. People are learning to think in loops, shape intelligence, and treat software as a partner.
Our job as researchers is to keep asking:
What are people really trying to do when they engage with AI agents?
What unspoken rules guide their collaboration with models?
Which behaviors are emerging, stabilizing, or fading?
How do we design for this new cognitive landscape without assuming it’s universal?
Building AI products? Start with people. In this new era, interfaces don't just change the work; they change the worker.

From a product research perspective, is this possibly a good reason to consider building an AI API into a product? I'm an early-career UX/UI designer/researcher enrolled in a Quantitative Research course at IxDF. This week I've been learning about tree testing and first-click testing. I'm wondering whether an embedded AI window offers an ongoing trove of qualitative data about users and their intent that quantitative research methods alone wouldn't reveal. Thanks for posting, Caitlin!