How to interview in the age of AI is one of the top questions in a manager peer group that I'm in.
Several hiring managers in the group went all in on AI-assisted interviews because they wanted the interviews to match the tools engineers can use in their work. Most of them have come full circle and returned to no-AI interviews.
The main problem with AI-assisted interviews is that they become a test of how familiar the candidate is with the specific AI tool you're letting them use. They started getting inverted signals because the hardcore vibecoders knew all the tricks to brute force the problem with high token spend. They'd do things like spend the interview trying to spin up parallel subagents to brute force a solution.
Then the careful coders who tried to understand the problem and do it right were penalized because every minute they spent trying to do the problem (instead of offloading all cognitive load to AI) was time lost to letting AI do the work.
There were also simpler problems, like a candidate who was familiar with a visual LLM interface but didn't have a familiar workflow for the CLI tool used in the interview.
Most people went back to coding interviews that forbid AI and test coding skills, combined with a discussion about their AI experience.
My takeaway was that it's easy to teach new hires how to use AI tools on the job, but it's much harder to bring someone with weak coding skills up to the level of someone with strong coding skills. It's even harder when that person is leaning on AI so much that they're not learning how to code anything.
> They started getting inverted signals because the hardcore vibecoders knew all the tricks to brute force the problem with high token spend. They'd do things like spend the interview trying to spin up parallel subagents to brute force a solution.
Can’t you just…tell them not to do this? Or give them limited model access instead of full Claude Code / Codex?
I'm interviewing currently. I've come across the following.
* Non-technical recruiters working from a list of technical trivia questions and a keyword list to check whether you answer correctly (e.g. what is the default isolation level in Postgres) - this is the worst.
* Code reviews, but on non-generic code, like code from a specific framework/product/SDK you may or may not have prior experience with - slightly less bad, but if you're going to do this, it's counterintuitive to hire someone who doesn't have production experience in that area.
* Take-home assignments where they say "we recommend you not spend more than x hours on this", but then expect very sophisticated work and reject you if you turn in a "simple" implementation. Maybe they expect both speed and quality?
* Leetcode is still a thing.
* Non-leetcode live coding, likely on an OOP many-to-many relationship problem.
* They give you a spec in the first x minutes and leave the interview; you do your thing, and they rejoin for the last y minutes - this is my favorite.
I haven't yet participated in an agentic AI interview. It's not that common AFAIK.
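For what it's worth, the many-to-many livecoding exercise mentioned above is usually some variant of the following sketch. The Student/Course names are my own invention, not from any specific interview:

```python
# Toy many-to-many model: students enroll in courses, and each side
# keeps a reference to the other. A typical OOP livecoding warm-up.

class Student:
    def __init__(self, name):
        self.name = name
        self.courses = set()

    def enroll(self, course):
        # Maintain both sides of the relationship in one place.
        self.courses.add(course)
        course.students.add(self)

class Course:
    def __init__(self, title):
        self.title = title
        self.students = set()

alice = Student("Alice")
bob = Student("Bob")
algo = Course("Algorithms")
db = Course("Databases")

alice.enroll(algo)
alice.enroll(db)
bob.enroll(algo)

print(sorted(s.name for s in algo.students))   # ['Alice', 'Bob']
print(sorted(c.title for c in alice.courses))  # ['Algorithms', 'Databases']
```

The follow-up questions tend to probe the design choice: why `enroll` updates both sides, and what happens if callers mutate the sets directly.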
For a coding interview that is objective, repeatable, doesn't put the interviewee under pressure (and doesn't trigger unconscious bias based on accent/appearance):
* Give a take-home assignment with minimum-requirements criteria and tell them "add a feature of your own" or "any extra work is appreciated".
* Pay them for their time and tokens.
* Use a custom agent to review the code and see how many "high", "medium", "low" issues the agent identifies.
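The severity-tally step of that last bullet reduces to a few lines once the agent has produced a report. This assumes the review agent emits its findings as a JSON list of objects with a "severity" field; that schema is my assumption, not a standard:

```python
import json
from collections import Counter

# Count how many "high" / "medium" / "low" findings the review agent
# reported. The JSON schema (a list of {"severity": ...} objects) is
# a hypothetical example, not a fixed format.

def tally_severities(report_json: str) -> Counter:
    findings = json.loads(report_json)
    return Counter(f["severity"] for f in findings)

report = '[{"severity": "high"}, {"severity": "low"}, {"severity": "high"}]'
print(tally_severities(report))  # Counter({'high': 2, 'low': 1})
```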
Even if you say AI programming is all about "knowing what to prompt", it still comes down to:
(1) understanding software engineering (for one thing knowing if answers make sense)
(2) subject matter expertise and the ability to communicate with SMEs (or to fake being an SME by reading books; see the old "knowledge engineer" construct from the 1980s).
(3) knowing specifics about AI coding.
I think (1) and (2) are 80-90% of what leads to success in the long term. My guess is the models are going to get better so (3) skills have a short half life and will matter less, but (1) and (2) will stay the same.
Maybe I'm cynical, but if I were designing screeners for this, I would ask people things like:
"How many accounts do you follow on X about AI?" where the right answer is "I don't have an X account" and the higher the count the worse it is.
"What percent of your programming time do you spend thinking about AI programming tools?", where anything over 20% is suspect (but maybe it's a tooling job or something, in which case I'd drop it).
That is, I want to see that somebody used AI tools to deliver something 100% done, end-to-end, that worked, and I'd like to see them spending 80% of their time doing the actual work.
I'd also be thinking about screeners designed to detect FOMO attitudes and reject people for it.
What is an example of that 100% end to end that worked?
Could be as simple as “completed some tickets with quality code that I understood and that passed rigorous review and didn’t add technical debt”
The trouble w/ AI slop is that the people who make it don’t know it is AI slop, not that it was AI generated.
So if I use only Claude Code for everything I do, what would that mean to you?
I’d want to look at the output.
I believe that the new interviews will simply get more questions related to AI. I read it in an article: AI amplifies. It amplifies the success of good professionals, and it amplifies the failure of bad ones.
A good developer will give good prompts, because they know what to ask and what the problem might be. A good developer can read the code and point out badly generated parts, and teach the AI how to perform better and which style to follow. A good developer can evaluate whether the chosen algorithm is right for the task and suggest alternatives if needed. A good developer can optimize token usage, for example by using scripts.
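As a toy illustration of the "optimize token usage with scripts" point: rather than pasting a whole repo into the context, a small script can pre-filter it to the files that mention the symbol under discussion. The file names and contents here are made up for the example:

```python
# Pre-filter a codebase listing so the prompt only carries files that
# mention the symbol being asked about, instead of the whole repo.

def relevant_files(files: dict[str, str], symbol: str) -> dict[str, str]:
    return {path: text for path, text in files.items() if symbol in text}

repo = {
    "auth.py": "def login(user): ...",
    "billing.py": "def charge(invoice): ...",
    "tests/test_auth.py": "from auth import login",
}

context = relevant_files(repo, "login")
print(sorted(context))  # ['auth.py', 'tests/test_auth.py']
```

A real version would walk the filesystem and use something smarter than substring matching, but even this crude filter cuts the tokens sent per prompt.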
Yes, skill in prompting and knowing about new tools and how to use them are also mandatory, but not the most important things, in my opinion.
I guess better soft/social skills are also needed. Some people just can't express themselves in the real world, and they will probably have difficulty expressing themselves in free text as well.
I used to ask them to code a binary search in 1 hour
Now I ask them to code Google Search in 1 hour
When I find problems that AI does a bad job at, I take note; those are exactly the kinds of problems I give in a take-home assignment + interview.
If the candidate turns in AI slop and doesn't understand the fundamentals of what they're working on, reject. If they took the time to learn the subject matter and feed it their own ideas to improve the output, awesome.
You should interview for the skill that is required in the age of AI-assisted coding.
That is, looking at code that's been written by AI and seeing what's wrong, what's superfluous, and what's missing.
For preparing the code for the interview, I would suggest prompting Claude Code using a requirements document that's purposefully a bit vague so that the AI will have to make choices when writing that code.
When the interviewee comes in, show them the code and have them criticize those choices and edit the code manually (I know), so that they can demonstrate they can intervene in the AI process at the right inflection points.