Hiring and Mentoring Engineers in an AI-First World

March 15, 2026
12 min read

Entry-level tech postings have dropped roughly 60% since 2022. Ravio’s 2025 Tech Job Market Report found entry-level hiring dropped 73% year over year, with the junior share of hiring falling from 15% to 7%. Over half of engineering leaders plan to hire fewer juniors. Marc Benioff announced Salesforce would hire “no new engineers” in 2025. A Claude Max 20x subscription costs US$200 a month versus US$90,000 a year for a junior developer plus six to twelve months of onboarding. The maths is seductive, and the industry is acting on it.

But if every company shifts toward seniors and AI agents, where do future seniors come from?

In Part 1, I made the case that AI amplifies what already exists in your engineering team, accelerating seniors, risking stunting juniors, and compounding technical debt as fast as it compounds productivity. The diagnosis is clear. This post is the practical response: how do you actually hire and develop engineers when AI has changed the rules?

Hiring and interviews look different now

If team composition is changing, so is how you assess candidates, and most companies haven’t caught up.

The traditional technical interview was built on a simple premise: can this person write code? Whiteboard algorithms, take-home projects, live coding screens: all of it designed to evaluate the skill that AI is now commoditising. Running those same interviews in 2026 is testing the wrong thing.

What you actually need to know about a candidate is harder to assess but far more important: Can they evaluate code? Do they understand trade-offs? Can they articulate why a design decision is good or bad? Can they hold a system in their head and reason about failure modes?

The engineers who are actually effective with AI aren’t the ones who trust it; they’re the ones who know when to trust it and when to push back. That’s a judgement skill, and it’s exactly what traditional interviews don’t test.

Some approaches that are working better than the traditional LeetCode coding screens:

Solution design conversations
Give the candidate a real system design problem from your domain and talk through it. You’re looking for their thought process, what questions they ask and what trade-offs they recognise.

Code review exercises
Show them a piece of AI-generated code (or a real PR from your codebase, anonymised) and ask them to review it. You’ll learn more about their thought process than an hour of LeetCode tells you.
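
As a concrete prop for this exercise, here’s a hypothetical snippet of the kind you might hand a candidate (a classic Python pitfall that AI assistants also reproduce):

```python
def add_tag(tag, tags=[]):
    """Collect tags for a record. Reviews clean and passes a one-off test."""
    tags.append(tag)
    return tags
```

A strong candidate spots that the default list is created once at function definition and shared across every call, so `add_tag("a")` followed by `add_tag("b")` returns `["a", "b"]`, not `["b"]`. You’re not testing whether they know this specific bug; you’re testing whether they probe code that looks fine.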

Debugging walkthroughs
A production incident with logs and traces or a living dashboard. Walk through it together. This tests systems thinking instead of syntax.

If your team uses AI tools daily, testing candidates without them is absurd; it’s like testing a driver by taking away power steering. Give candidates Claude Code in the interview and run through the scenarios interactively.

Hiring juniors is a different problem

Everything above is guidance for experienced hires. Hiring juniors in 2026 is a harder and more neglected challenge.

The candidates entering the workforce now are AI-aware. They’ve used Copilot and ChatGPT throughout university. Many are prompters who can produce code quickly, but that’s exactly the skill that’s been commoditised. The risk is that you hire someone who looks productive in the first week because of the volume of code they generate, and then flatlines soon after because they’ve never had to reason through a production problem without AI scaffolding.

What you’re actually assessing in a junior candidate has changed. You’re no longer looking for someone who can write fizzbuzz or invert a binary tree. You’re looking for:

Curiosity
Give them a piece of code and ask them what could go wrong. You’re not looking for the right answer; you’re looking for whether they try to understand unfamiliar code and challenge it rather than blindly copy-pasting. The juniors who will grow fastest are the ones who are uncomfortable with code they don’t understand, not the ones who trust it because it compiles and the tests pass.

Ability to reason about systems, not just functions
Even at a junior level, you can assess whether someone thinks about how their code interacts with the rest of the system. A simple question: “This function works correctly in isolation. What could go wrong when it runs alongside everything else?” tells you a lot about how they think.
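
To make that question concrete, here’s a minimal (hypothetical) sketch: a counter that is correct in every single-threaded test but can lose updates when the surrounding system calls it from multiple threads, alongside the version that accounts for its context.

```python
import threading

class Counter:
    """Works perfectly in isolation: every single-threaded test passes."""
    def __init__(self):
        self.count = 0

    def increment(self):
        # Read-modify-write is not atomic: two threads can read the same
        # value, and one of the two updates is silently lost.
        self.count = self.count + 1

class SafeCounter:
    """The same logic, made safe for the concurrent context it runs in."""
    def __init__(self):
        self.count = 0
        self._lock = threading.Lock()

    def increment(self):
        # The lock makes the read-modify-write a single critical section.
        with self._lock:
            self.count += 1
```

Both classes behave identically in a unit test. A candidate who asks “who calls this, and from how many threads?” is reasoning about the system, not just the function.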

Learning velocity over current knowledge
In a world where the tools change every six months, what a junior knows today matters less than how quickly they can learn what they’ll need tomorrow. Problem solving and the ability to decompose unfamiliar problems are better predictors than any specific technical skill.

Ask a candidate directly: “When you use AI tools, how do you decide whether to trust the output?” There’s no right answer, but the conversation reveals whether they’ve thought about it at all. The best junior candidates have already noticed the limits and have started developing their own instincts.

We recently interviewed and hired a junior candidate who had good knowledge of programming languages and frameworks; however, when they were exposed to AI tooling, they accepted every suggestion without question. Code was being generated that compiled and passed tests but didn’t actually solve the problems they were asked to solve. This made us rethink our screening process and how we mentor in an AI-enabled world.

It’s worth noting that not everyone in the industry agrees the answer is to hire fewer juniors. GitHub’s CEO Thomas Dohmke has said “the companies that are the smartest are going to hire more developers”, and GitHub itself hires more junior devs than ever. AWS chief Matt Garman called replacing entry-level developers with AI “one of the dumbest things I’ve ever heard”, precisely because of the pipeline question: who learns the systems in ten years’ time?

The argument isn’t simply “keep hiring juniors out of principle.” It’s that the curriculum has to change. The candidates entering now are AI native; they’ve used Copilot and ChatGPT throughout university. Some companies are already redesigning their programmes around this reality. Deloitte’s AI Academy trains interns on agentic AI, RAG pipelines, and production governance frameworks from day one rather than bolting AI awareness onto a traditional curriculum after the fact. The question for engineering leaders isn’t whether juniors use AI. They already do. It’s whether your hiring and onboarding meets them where they are or pretends the tools don’t exist.

Mentoring has to change

Juniors still need to grow, so how we develop them has to fundamentally change.

The old model was: junior writes code, senior reviews, junior learns from feedback, repeat. The new reality is: AI writes code, junior submits it, senior reviews, and the learning moment is different because the junior didn’t struggle through the problem.

I think mentoring in an AI-first world needs to shift from “here’s how to write this” to “here’s how to evaluate and validate this.” The critical skill becomes the ability to interrogate AI output with the same rigour you’d apply to a colleague’s pull request, but you need a mental model to do that.

Handing someone a Claude licence and expecting them to figure it out is not a development programme. Structured training, even a few hours of pair programming with hands-on coaching, dramatically shifts how people use AI tools. Most companies aren’t making that investment. Pair programming was always effective but seldom used. The gap between teams that invest and teams that don’t will widen fast, with AI acting as an accelerant.

Some things I’m experimenting with:

AI-assisted code review as a teaching tool
Instead of juniors writing code and seniors reviewing it, juniors prompt, AI writes code, and juniors review it with seniors. This teaches solution design and architectural thinking simultaneously.

Production feedback loops
Have AI tools inspect production metrics and logs, and have juniors interpret that feedback to identify issues or areas for improvement. This builds the habit of using real-world data to evaluate code effectiveness.

Onboarding needs a redesign

The traditional junior onboarding, “here’s a small bug, fix it, get familiar with the codebase and CI/CD”, doesn’t work the same way when the junior can point an AI agent at the bug and have a PR in twenty minutes without understanding anything about the system.

The first 90 days for a junior on an AI-first team need to be structured around building mental models and end-to-end understanding (the systems thinking part), not shipping fixes.

Week 1-2: System orientation without AI
Have the junior trace requests through the entire system by hand, from the API gateway through the service layer to the database and back. Read the code, understand the flow, draw the diagram, present it to the team. Unlike skimming (and forgetting) the existing, often outdated architecture diagrams, this builds the foundational understanding of the system that makes everything else possible. It’s slow, and that’s the point.

Week 3-4: Supervised AI-assisted work
Introduce AI tools, pair programming with a senior. The junior uses AI to generate solutions while the senior asks “why did it do that?” and “what would happen if we used this other input instead?” at every step. The real learning is in the conversation.

Week 5-12: Ownership with guardrails
Give the junior real ownership of a component or feature, with AI tools available, but require them to write a brief plan before generating any code and to annotate their PRs with what the AI got right and what they had to correct. This builds the habit of thinking before prompting and critically evaluating output, the two skills that define long-term effectiveness.

How you know this is working: by day 30, the junior can trace a request through the system and explain the key architectural decisions without prompting. By day 60, their PR annotations show they’re catching real issues in AI output, not just confirming it compiles and the tests pass. By day 90, they can own a component end to end and articulate the trade-offs they made, why they chose this approach over the alternatives the AI suggested, and what they’d do differently next time.

The investment is real: this is slower than just giving someone a Claude licence and a Jira ticket. But the alternative is engineers who are permanently dependent on AI scaffolding and never develop the judgement to work independently.

We’ve had a few occasions where an engineer submitted AI-generated code that seemingly worked but failed immediately once it reached production and had to handle live data. A teaching moment for everyone, and one that forced us to start figuring out how we can evaluate PRs and code changes in an AI-enabled workflow.
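
An illustrative version of the pattern (a hypothetical example, not our actual incident): a parser whose unit tests all used clean fixtures, next to the hardened version that live data forces on you.

```python
def parse_amount(raw):
    # Passes every unit test, because the fixtures were all clean: "42.50"
    return float(raw)

def parse_amount_safe(raw):
    # Live data brings "1,234.00", empty strings, and None.
    if raw is None:
        return None
    cleaned = str(raw).replace(",", "").strip()
    if not cleaned:
        return None
    try:
        return float(cleaned)
    except ValueError:
        return None
```

The first version is exactly what a test-passing PR can hide. The review question that catches it isn’t “does it work?” but “what inputs has it never seen?”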

Where this goes

Senior engineers become more valuable, not less, but not just as ICs. The most important thing a senior can do right now is invest in growing the people around them instead of pushing record amounts of code. The companies that figure out how to develop juniors in an AI-first world will have a massive competitive advantage in one to three years. The training investment needs to be consistent; it’s not just handing out tool licences.

But “seniors become more valuable” needs honest qualification. Not all senior skills are equally durable. Some are being encoded into AI agent workflows right now: pattern enforcement, convention adherence, routine architectural decisions that follow established templates, standard code review. Microsoft’s Azure Skills Plugin already packages what used to be tribal knowledge in a senior’s head as versioned, installable agent artifacts. When your coding standards, your testing pyramid, and your deployment checklist can be expressed as agent instructions and enforced at generation time, the senior who was primarily valuable for knowing and enforcing those standards has a shrinking moat.

The skills that remain durable are the ones that resist encoding. Problem decomposition: breaking an ambiguous business requirement into a well-specified technical approach. Judgement under uncertainty: knowing when to distrust AI output, when a 95% correct solution hides a critical 5% failure mode, when the technically elegant answer is the wrong product decision. Novel systems reasoning: the kind of cross-boundary architectural thinking that only comes from having debugged enough cascading failures to develop intuition about how distributed systems actually behave. And increasingly, orchestration: the ability to coordinate multiple AI agents, human engineers, and automated pipelines toward a coherent outcome. As Addy Osmani wrote, “the best software engineers won’t be the fastest coders, but those who know when to distrust AI.” The shift is from implementer to orchestrator, and that’s a fundamentally different skill from writing code faster.

Klarna’s experience is instructive here. In 2023-2024, CEO Sebastian Siemiatkowski aggressively replaced roughly 700 roles with AI, publicly declaring that “AI can already do all of the jobs that we, as humans, do.” By early 2025, quality had collapsed. Customer satisfaction fell, service was inconsistent, and Siemiatkowski publicly admitted that “cost was unfortunately too dominant a factor in our evaluation. The result is lower quality.” They rehired, but not back to the old model. They’re now running a blended approach: AI handling the predictable work, humans handling the judgement calls. That’s probably where most of the industry ends up. Not pure replacement, not the status quo, but a hybrid that requires both good systems and good people.

Within a few months, the first major companies will have more AI agents contributing to their codebase than junior engineers. What makes this a success or failure depends on the decisions being made right now about how we hire, onboard and mentor the next generation of engineers.

The junior pipeline problem may be the first problem but it’s not the only one. The emerging pattern at major tech companies (extract institutional knowledge from experienced engineers, encode it into AI agent context and workflows, then reduce headcount) doesn’t stop at entry level. It will affect everyone at every level. The skills I’m advocating for, good judgement, systems thinking, solution design, are the hardest to encode and the hardest to master, which is exactly what makes them the most durable advantage you can build. Invest in them not because they make you permanently safe, but because they’re the last skills standing when everything and everyone else gets automated.


I’m still trying to figure this out. If you’re experimenting with different approaches for hiring, onboarding and scaling juniors up in an AI-first world, I’d like to hear what’s working for you and what’s not.