The AI-native traits we should be hiring for
Lessons learned from one of our new engineers
In my last post, I shared a conversation with Matt, Dan, and Jack about what makes engineers effective with Claude Code. I was trying to understand what’s behind the success that some people are having with these tools – is it years of experience? Deep judgment? Strong mental models? And whatever it is, can we encode it and help everyone get there?
That conversation was with people who’ve been here a while – tenured, senior, already thriving. This time I wanted to talk to somebody new. Salman Shah has only been at Intercom a few months and he’s already having an outsized impact – not just on his team, but on AI adoption across the company. He’s shipping at a rate that stands out. While also contributing to shared tooling, running informal coaching sessions with teammates, and building relationships across the org that most people don’t establish in their first year, let alone their first few months.
I wanted to understand what’s going on – what is it about how he works and thinks that’s producing these results? And what does that tell us about the AI-native traits we should be hiring for?
AI + human, not AI vs. human
Salman’s foundational mindset comes from an unexpected place: a chess book. After Garry Kasparov lost to IBM’s Deep Blue in 1997, he created advanced chess, also known as centaur chess – tournaments where both sides played as AI + human teams. The question was no longer: “Can AI beat humans?” but “What does AI + human look like when it’s working well?”
Salman read Kasparov’s Deep Thinking and it changed how he approaches working with AI:
“My mindset for every task is not ‘how do I type something and go away?’ That’s where I feel a lot of people have felt frustration. They try and then realize, ‘oh, it just wasted my time.’ Instead, how can you do AI + human through everything?”
You always have to ask: what does the human add, and what does the AI add? Not “can AI do this for me?” but “what does the AI + human version of this look like?”
Nobody said no
Salman and I talked a bit about his career path – seven years across various technology companies before joining Intercom. At those places, you get used to asking permission. At Intercom, the opposite happened.
He said: “Here I have tried asking for permission and people have been like, ‘you don’t need permission to do that.’ I’ve just done so many things and up until now, no one’s come and told me, ‘don’t do this.’”
In his first month, this culture was tested. He pushed something, broke something, got paged at eight o’clock in the evening for a minor incident.
“I was very worried the next day. But then I was told not to worry about it – just get the learning out of it. Which is to have an additional test that would have caught it.”
That feedback emboldened him, and the message was clear: we’d rather you move fast and learn than move slow and play it safe. And each time he tested that boundary without getting burned, the boundary moved further out.
“Fueled by AI, but also fueled by energy, ambition, and nobody saying no.”
Claude Code first – in everything
This is the most actionable theme from the conversation, and probably the biggest mindset shift.
Salman doesn’t just use Claude Code for writing code. He uses it in incidents, planning, QA, and code review. His question for every process is: “Can we do Claude Code first in this?” – and not just within the existing process. Can we tear the process apart and rebuild it around Claude Code?
He built an agentic QA skill that runs test scenarios overnight. The old process: someone prepares a QA sheet, it takes three days to get through QA and fix what comes up. Now by the time a human QA session starts, most issues have already been caught and resolved.
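The post doesn’t share the implementation, but a minimal version of “run test scenarios overnight” can be sketched as a scheduled job driving Claude Code in headless mode via its `-p` flag. The skill name, file paths, and schedule here are all hypothetical:

```shell
# Hypothetical crontab entry: every night at 02:00, run a QA skill
# headlessly against a scenarios file and write findings to a report.
# Assumes the Claude Code CLI is installed and authenticated; the
# repo path, skill name, and file names are invented for illustration.
0 2 * * * cd /srv/app && claude -p "Use the qa-runner skill to run every scenario in qa/scenarios.md, attempt a fix for each failure, and append findings to qa/overnight-report.md"
```

By morning, the report shows what was caught and what was already fixed, so the human QA session starts from the leftovers rather than from scratch.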
An incident example: Salman threw everything at Claude Code – Sentry MCP, admin tools MCP, the codebase – and had a PR out within 30 minutes, on a part of the codebase he’d never worked in before.
“I did nothing special. It was related to reporting, which I have no idea about. So we had someone from the Reporting team look at it. But the fact of the matter is – can we have this mindset shift, that Claude Code can just do things better than us if we give it the right context and just go Claude Code first in everything?”
That phrase – “if we give it the right context” – is doing a lot of work. Salman’s insight is that when Claude Code fails, it’s almost always a context problem, not a capability problem. Give it the right MCPs, codebase access, and framing, and it performs. The skill is in the setup.
He’s done this in three incidents now. Each time, he shipped a PR during the incident that would have taken much longer without the tool.
An incident is high-stakes and high-visibility – not an obvious moment to reach for a relatively new tool. But when Claude Code is already your default for everything else, there’s no switching cost. You just use it.
Build the network
Before he even joined Intercom, Salman reached out to Brian Scanlan, one of his interviewers, to talk through whether he should accept the offer.
“I spoke to him once and I was like: ‘hey, I’m not sure, should I join? Can we just chat?’ And he said: ‘okay.’ And then we met in Dublin my first week. And since then we’ve been talking monthly.”
Salman also met Eugene at a talk before he joined Intercom and reached out to set up a chat. Eugene and Brian recommended other people for Salman to meet. Within a few months he’d built relationships with senior engineers across the org, using them to understand the landscape, find problems, build trust, and contribute to shared tooling.
This is the human side of thriving with AI tools, and I think it’s easy to overlook. No amount of Claude Code proficiency replaces the fact that you need to understand the people, the problems, and the context around you. Salman is explicit about this:
“There is always a human aspect of things. When you’re working with people, I think that probably just helps.”
The principle here: seek out context actively rather than waiting for it to come to you, and invest in relationships that help you understand the system you’re building in. It’s a skill that will compound.
The local maximum problem
We got into a back-and-forth about something I keep seeing: the local maximum problem. Most people don’t know what they don’t know or the extent of these tools’ capabilities.
I described the pattern as I see it:
“Someone adopts Claude Code and starts shipping AI-written code. They get comfortable and that feels like the ceiling. But the ceiling isn’t a missing feature or a tool they haven’t installed – it’s the boundary of what they’ve imagined the tool can do. Someone sees Claude Code as a coding tool, and then one day realizes it can also do code review, or planning, or incident response, and suddenly the ceiling moves.”
Salman sees the same thing from the ground. He put numbers on it: maybe 50% of the org is at beginner, 30% in the middle, 20% advanced. The adoption curve is left-shifted. And the people at the top are compounding each other’s knowledge while everyone else doesn’t know what they’re missing.
This echoes what came up in my earlier conversation with Matt, Dan, and Jack – the forums and channels we have for discussing AI tend to select for the enthusiasts. The people who most need help aren’t represented in the conversation. When Jack ran his learning sessions in London, he was hoping to reach people at the far left of the curve. Lots of people showed up, which was great – but the hardest problem is still reaching the people who don’t yet know what they don’t know, what they’re missing, or how to engage.
Push the graph up
Salman tracks his own DX graph as a personal benchmark. He pulled it up during our conversation – a steady upward trend since his first day at Intercom.
“Whenever I find anything that is blocking me, I put it into a skill and see if I can push this graph up.”
This is his operating principle: each week, what’s slowing me down? Can I make it faster? Encode the answer into a skill. Compound the gains.
For example: the React inbox refactor. 280 PRs in four weeks driven by an autonomous Claude Code workflow. When Anthropic released agent teams, daily PRs went from four or five to 15. And the flywheel doesn’t stop at the end of the project:
“We’re going to spend a week collecting all that information and putting it back into a skill so that the next time we do review, we don’t have to think about those things because they’re already captured.”
This is knowledge encoding – he doesn’t just solve a problem, he encodes the solution so it persists for the next project and the next person. The skill becomes institutional memory.
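In Claude Code, this kind of encoded knowledge typically lives in a skill: a `SKILL.md` file with YAML frontmatter whose description tells the agent when to load it. A minimal sketch of what a review skill like the one Salman describes might look like – the name, path, and checklist items are invented for illustration:

```markdown
---
name: inbox-review
description: Apply lessons from the React inbox refactor when reviewing inbox PRs
---

# Inbox review checklist

When reviewing a PR that touches the inbox:

1. Check that new components use the shared state hooks rather than local component state.
2. Flag any feature flag the refactor has already removed.
3. Confirm the PR description links the tracking issue for the refactor.
```

This hypothetical file would sit at `.claude/skills/inbox-review/SKILL.md` in the repo. The point is less the format than the loop: friction gets noticed once, captured once, and then applied automatically on every future review.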
At Intercom we’re using DX metrics to help us understand Claude usage and proficiency. Sometimes when I talk to engineers about these metrics, they cite Goodhart’s law and worry that the metrics are gameable and therefore meaningless. Salman flipped it on its head: for him, it’s not a performance evaluation metric but a self-reflection tool.
“What’s helpful for me at the end of the week, if I’m doing self-reflection, is to ask: ‘is there a way to improve this?’ It’s easy to count. If you want to find how to get efficient, whatever can show you that makes sense.”
He’s deliberately gaming his own graph – but not in a cynical way. He’s not finding low-value PRs to inflate a number. He’s driving through high-impact work while also compounding low-hanging fruit on the side.
The 80/20 split
Salman has always been a split-focus person. Eighty percent on the main project, 20% on side quests – defects, feature flag cleanup, plugin contributions, whatever catches his eye.
“80/20 was always the principle. It’s just now I’m able to close the loop on the 20, even if the 80 gets harder – because AI is doing a lot of it for me.”
Before AI, the 20% was aspirational. You’d pick up a side thread and it would balloon in scope and you’d quietly drop it. Now the closing cost is so low that side quests actually get finished. Last cycle he cleaned up 30 feature flags. This cycle he’s chipping away at defects – two this week, compounding over time.
He’s not precious about it either. His PR close rate is about 7% – roughly one in every fourteen PRs he opens gets closed without merging.
“Sometimes you pick up something, you feel like it’s a lot more effort than what you initially thought. You don’t have the mental bandwidth for it. Just close it.”
Energy management over completionism. When starting something new is this cheap, being willing to abandon the threads that aren’t worth finishing becomes a real advantage.
Right time, right place – and right mindset
Salman is self-aware about the luck component. He arrived at Intercom with a lot of energy and a great mindset – but he also arrived at just the right moment.
“I came when the models were just getting better and better. And I leaned into betting on Claude Code, which is what Intercom also bet on.”
Seven years in technology companies, and he’s never had this kind of impact before. At one, the culture dampened exactly the energy that Intercom rewards. At another, the business wasn’t ready to bet on AI. The timing never aligned.
“I’ve never done this in my career. This is the first time I’m doing something like this in a very big way. I have never shipped as much as I shipped last month.”
But it’s not just luck. The combination is specific: culture (Intercom’s permission to fail) + timing (the agentic inflection) + personality (ambition, extroversion, initiative) + AI (the compounding enabler).
“It’s never just one thing. It’s probably a combination of everything. The culture at Intercom allows you to do that. Plus maybe some part of my personality always wanted to. But I have never done this – whatever this is.”
What’s staying with me
My earlier conversation with Matt, Dan, and Jack was about learning from tenured engineers at Intercom who’ve already shifted their mindset and found great success with Claude Code. Salman shows that’s possible even without long tenure at Intercom – with initiative, a compounding mindset, and a habit of treating every friction as something to encode and eliminate. He doesn’t have twenty years of engineering judgment. He has a relentless instinct to close loops, share what he learns, and ask “can I do this with Claude Code first?”
So what are the AI-native traits we should be hiring for? I think this conversation surfaces some of them:
Initiative without permission.
The instinct to tear apart a process and rebuild it, not just optimize within it.
A network-building drive that generates context and trust before you need either.
The discipline to encode what you learn into something that persists.
Reaching for Claude Code first – not just as a coding tool but as the starting point for every problem.
These traits reinforce each other: the network gives you context, context lets you deliver, delivery builds trust for bigger bets, and each bet generates knowledge you encode into the next one. Over time, they compound.
I want to push the adoption curve to the right – to accelerate as many people as possible through their own adoption journey. Conversations like this one help. They surface the ways of working that make the difference, and encourage us to create an environment to foster them.
If there’s one thing to take away from this conversation, it’s the question Salman asks himself for every task: “Is there a Claude Code first way to approach this problem?” That’s the mindset shift. Start there.
To hear more from Salman on what it’s like to work at Intercom, check out this video:
And if you’re looking to join a team that embraces this mindset, check out our careers page.



