Entry-level tech postings have dropped roughly 60% since 2022. Marc Benioff announced Salesforce would hire “no new engineers” in 2025. A Claude Max 20x subscription costs US$200 a month versus roughly US$90,000 a year for a junior developer, plus six to twelve months of onboarding. The maths is seductive, and the industry is acting on it.
But if every company shifts toward seniors and AI agents, where do future seniors come from?
In Part 1, I made the case that AI amplifies what already exists in your engineering team, accelerating seniors, risking stunting juniors, and compounding technical debt as fast as it compounds productivity. How do you actually hire and develop engineers when AI has changed the rules?
Hiring and interviews look different now
If team composition is changing, so is how you assess candidates, and most companies haven’t caught up.
The traditional tech interview was built on a simple assumption: can this person code? Every stage was designed to evaluate the skill that AI is now commoditising. Running those same interviews in 2026 is testing the wrong thing.
What you actually need to know about a candidate is harder to assess but far more important: Can they understand code? Can they articulate why a design decision is good or bad in the wider delivery and runtime context? Can they hold a system in their head and reason about failure modes?
A recent Engineering 2028 survey by CTO Craft and Damilah, covering 89 senior technology leaders, reinforces this shift. The top skills expected to define high-performing teams in 2028 are AI fluency (69%), domain and commercial understanding (67%), human creativity and curiosity (66%), and multi-disciplinary collaboration (66%). None of these are capabilities AI can easily replicate.
The same survey asked what AI fundamentally cannot do. Respondents converged on a clear “human moat”: strategy and vision (71%), leadership and culture (69%), empathy and contextual understanding (67%), ethics and moral judgement (65%), and creativity and originality (61%). Traditional coding interviews don’t assess any of them.
So if you’re testing for judgement, not just output, what does that look like in practice? Some approaches that are working better than the traditional Leetcode coding screens:
Solution design conversations
Give the candidate a real system design problem from your domain and talk through it. You’re looking for their thought process, what questions they ask and what trade-offs they recognise.
Code review exercises
Show them a piece of AI-generated code (or a real PR from your codebase, anonymised) and ask them to review it. You’ll learn more about their thought process than a Leetcode problem tells you in an hour.
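One way to run this in practice: hand the candidate a short snippet that looks like typical AI output and ask what they’d flag in review. The snippet below is a hypothetical example (not from any real codebase) with deliberate discussion points marked in comments; in a real interview you’d strip the comments first.

```python
# Hypothetical review exercise: plausible-looking "AI-generated" code
# with deliberate discussion points for a candidate to find.

def merge_user_records(records, seen=set()):  # discussion point: mutable default argument
    """Deduplicate user records by email address."""
    merged = []
    for record in records:
        email = record.get("email", "").lower()
        if email in seen:
            continue
        seen.add(email)  # discussion point: state leaks across calls via the shared default
        merged.append(record)
    return merged

batch1 = [{"email": "A@x.com"}, {"email": "a@x.com"}]
batch2 = [{"email": "a@x.com"}, {"email": "b@x.com"}]
print(len(merge_user_records(batch1)))  # 1: duplicate within the batch is dropped
print(len(merge_user_records(batch2)))  # 1: "a@x.com" is silently dropped by leaked state
```

The code compiles, runs, and even looks tested, which is exactly the trap. Strong candidates notice that the second call silently drops a record because the `seen` set persists between calls; weaker ones confirm the happy path and move on.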
Debugging walkthroughs
Present a production incident with logs and traces, or a live dashboard, and walk through it together. This tests systems thinking instead of syntax.
If your team uses AI tools daily, testing candidates without them is absurd: it’s like assessing a driver after taking away the power steering. Give candidates Claude Code in the interview and run through the scenarios interactively.
Hiring juniors is a different problem
Everything above is guidance for experienced hires. Hiring juniors in 2026 is a harder and more neglected challenge.
The candidates entering the workforce now have used Copilot and ChatGPT throughout university. Many can produce code quickly through prompting, but that’s exactly the skill that’s been commoditised. The risk is that you hire someone who looks productive in their first week, judging by the amount of code they generate, then flatlines soon after because they’ve never had to reason through a production problem without AI scaffolding.
What you’re actually assessing in a junior candidate has changed. You’re no longer looking for someone who can write FizzBuzz or invert a binary tree; you’re looking for:
Curiosity
Give them a piece of code and ask them what could go wrong. You’re not looking for the right answer; you’re looking for whether they try to understand unfamiliar code and challenge it rather than blindly copy and paste. The juniors who will grow fastest are the ones who are uncomfortable with code they don’t understand, not the ones who trust it because it compiles and the tests pass.
Ability to reason about systems, not just functions
Even at a junior level, you can assess whether someone thinks about how their code interacts with the rest of the system. A simple question: “This function works correctly in isolation. What could go wrong when it runs alongside everything else?” tells you a lot about how they think.
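To make the question concrete, here’s a minimal, hypothetical sketch of the failure mode behind it: a read-modify-write that is correct for a single caller but loses updates when two callers interleave. The interleaving is simulated explicitly so the outcome is deterministic.

```python
# Hypothetical sketch of the "works in isolation" question: a read-modify-write
# that is correct for one caller but loses updates when two callers interleave.

def unsafe_increment(counter):
    value = counter["n"]  # step 1: read shared state
    return value + 1      # step 2: compute the new value (the write happens later)

counter = {"n": 0}

# Simulate two workers whose reads interleave before either writes back:
a = unsafe_increment(counter)  # worker A reads 0, computes 1
b = unsafe_increment(counter)  # worker B also reads 0, computes 1
counter["n"] = a               # worker A writes 1
counter["n"] = b               # worker B writes 1; A's update is lost
print(counter["n"])            # 1, not the 2 a naive reader expects
```

A strong junior answer mentions shared state, interleaving, or the need for atomicity or locking; the exact vocabulary matters far less than noticing the hazard at all.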
Learning velocity over current knowledge
In a world where the tools change every six months, what a junior knows today matters less than how quickly they can learn what they’ll need tomorrow. Problem solving and the ability to decompose unfamiliar problems are better predictors than any specific technical skill.
Ask a candidate directly: “When you use AI tools, how do you decide whether to trust the output?” There’s no right answer, but the conversation reveals whether they’ve thought about it at all. The best junior candidates have already noticed the limits and have started developing their own instincts.
We recently interviewed and hired a junior candidate who had good knowledge of programming languages and frameworks. However, once exposed to AI tooling, they accepted any and all suggestions without question. The code they produced compiled and passed tests but didn’t really solve the problems they were asked to solve. This made us rethink our screening process and how we mentor in an AI-enabled world.
Not everyone in the industry agrees the answer is to hire fewer juniors. GitHub’s CEO Thomas Dohmke has said “the companies that are the smartest are going to hire more developers,” and GitHub itself hires more junior developers than ever. AWS chief Matt Garman called replacing entry-level developers with AI “one of the dumbest things I’ve ever heard”, precisely because of the pipeline question: who learns the systems in ten years’ time?
The argument isn’t simply “keep hiring juniors on principle”; how we onboard and mentor them needs to be revisited. Some companies are already redesigning their programmes around this reality.
The question for engineering leaders isn’t whether juniors use AI, because the reality is they already do. It’s whether your hiring and onboarding meet them where they are or pretend the tools don’t exist.
Mentoring has to change
Juniors still need to grow, so how we develop them has to change fundamentally.
The old model was: junior writes code, senior reviews, junior learns from feedback, repeat. The new reality is: AI writes code, junior submits it, senior reviews and the learning moment is different because the junior didn’t struggle through the problem.
I think mentoring in an AI-first world needs to shift from “here’s how to write this” to “here’s how to evaluate and validate this.” The critical skill becomes the ability to interrogate AI output with the same rigour you’d apply to a colleague’s pull request, and that requires a mental model of the system to measure it against.
Handing someone a Claude licence and expecting them to figure it out is not a development programme. Structured training, even a few hours of pair programming with hands-on coaching, dramatically shifts how people use AI tools. Most companies aren’t making that investment. Pair programming has always been effective but seldom used; the gap between teams that invest and those that don’t will widen fast, with AI acting as an accelerant.
Some things I’m experimenting with:
AI-assisted code review as a teaching tool
Instead of juniors writing code and seniors reviewing it, juniors prompt, AI writes code, and juniors review it with seniors. This teaches solution design and architectural thinking simultaneously.
Production feedback loops
Have AI tools inspect production observability data lakes, and have juniors interpret that feedback to identify issues or areas for improvement. This builds the habit of using real-world data to evaluate code effectiveness.
Onboarding needs a redesign
The traditional junior onboarding, “here’s a small bug, fix it, get familiar with the codebase and CI/CD”, doesn’t work the same way when the junior can point an AI agent at the bug and have a PR in twenty minutes without understanding anything about the system.
The first 90 days for a junior in an AI-first team need to be structured around building mental models and end-to-end understanding (the systems thinking part), not shipping fixes.
Week 1-2: System orientation without AI
Have the junior trace requests through the entire system by hand, from the API gateway through the service layer to the database and back. Read the code, understand the flow, draw the diagram, present it to the team. Unlike skimming the existing (and often outdated) architecture diagrams only to forget them, this builds the foundational understanding of the system that makes everything else possible. It’s slow, and that’s the point.
Week 3-4: Supervised AI-assisted work
Introduce AI tools while pair programming with a senior. The junior uses AI to generate solutions while the senior asks “why did it do that?” and “what would happen if we used this other input instead?” at every step. The real learning is in the conversation.
Week 5-12: Ownership with guardrails
Give the junior real ownership of a component or feature, with AI tools available, but require them to write a brief plan before generating any code and to annotate their PRs with what the AI got right and what they had to correct. This builds the habit of thinking before prompting and of critically evaluating output, the two skills that define long-term effectiveness.
How you know this is working: by day 30, the junior can trace a request through the system and explain the key architectural decisions without prompting. By day 60, their PR annotations show they’re catching real issues in AI output, not just confirming it compiles and the tests pass. By day 90, they can own a component end to end and articulate the trade-offs they made, why they chose this approach over the alternatives the AI suggested, and what they’d do differently next time.
The investment is real: this is slower than just giving someone a Claude licence and a Jira ticket. But the alternative is engineers who are permanently dependent on AI scaffolding and never develop the judgement to work independently.
We’ve had a few occasions where an engineer submitted AI-generated code that seemingly worked but failed as soon as it reached production and had to handle live data. It was a teaching moment for everyone, and it forced us to start figuring out how to evaluate PRs and code changes in an AI-enabled workflow.
Where this goes
Senior engineers become more valuable, not less, but not just as ICs. The most important thing a senior can do right now is invest in growing the people around them instead of pushing record amounts of code. The companies that figure out how to develop juniors in an AI-first world will have a massive competitive advantage in one to three years. The training investment needs to be consistent; it’s not just handing out tool licences.
The Engineering 2028 survey frames this as a “seniority crisis”: “seniority is no longer a safety net for those who refuse to lean into planning, architecture, and documentation.” In an AI-first world, the engineer who cannot articulate why a system is built a certain way will be left behind by those who can. Seniority shifts from years of coding experience to depth of understanding and clarity of communication.
But “seniors become more valuable” needs honest qualification. Not all senior skills are equally durable. Some are being encoded into AI agent workflows right now: pattern enforcement, convention adherence, routine architectural decisions that follow established templates, standard code review. Microsoft’s Azure Skills Plugin already packages what used to be tribal knowledge in a senior’s head as versioned, installable agent artifacts. When your coding standards, your testing pyramid, and your deployment checklist can be expressed as agent instructions and enforced at generation time, the senior who was primarily valuable for knowing and enforcing those standards has a shrinking moat.
The skills that remain durable are the ones that resist encoding.
Problem decomposition: breaking an ambiguous business requirement into a well specified technical approach.
Judgement under uncertainty: knowing when to distrust AI output, when a 95% correct solution hides a critical 5% failure mode, when the technically elegant answer is the wrong product decision.
Novel systems reasoning: the kind of cross boundary architectural thinking that only comes from having debugged enough cascading failures to develop intuition about how distributed systems actually behave.
Orchestration: the ability to coordinate multiple AI agents, human engineers, and automated pipelines toward a coherent outcome. The shift is from implementer to orchestrator, and that’s a fundamentally different skill from writing code faster.
It’s also worth stepping back and recognising that the software development lifecycle was never just about writing code. It spans product, operations, platform, marketing, legal, sales, reliability, and the entire process of how software gets built, maintained, and evolved. AI is disrupting all of these functions, not just the engineering side. When we talk about the AI SDLC era, framing it purely as a coding story misses the point entirely.
This shift is also reshaping the landscape around engineering teams. A new category of managed platforms is emerging to address what is becoming the real constraint: “Code is no longer the bottleneck. Everything else is.”
Ownership, operational maturity, production readiness, standards enforcement, deployment governance. These are the parts of the software lifecycle that were already under-invested, and AI has made the gap impossible to ignore.
Platforms focused on engineering operations, production readiness, unified knowledge graphs for agent ecosystems, and managed rules enforcement are carving out distinct positions in this space. They exist because when AI generates code faster than teams can govern, deploy, and maintain it, the lifecycle doesn’t accelerate; it grinds to a halt very quickly.
The bottleneck relocates into service ownership, validation, deployment, and the question of whether AI assisted code is actually production ready.
That’s what makes this a business problem, not just an engineering one. Speed without operational maturity at the other end creates technical debt at a rate no team is prepared to absorb.
The implication for hiring and mentoring is direct: the engineers you develop today need to understand this managed layer, not just the code. The skills that matter increasingly sit at the intersection of engineering and operations, governing the lifecycle, not just contributing to it.
Within a few months, the first major companies will have more AI agents contributing to their codebase than junior engineers.
For us, this balance is rapidly tilting to the AI agents side.
Whether this becomes a success or a failure depends on the decisions being made right now about how we hire, onboard, and mentor the next generation of engineers.
The junior pipeline problem may be the first problem but it’s not the only one. The emerging pattern at major tech companies (extract institutional knowledge from experienced engineers, encode it into AI agent context and workflows, then reduce headcount) doesn’t stop at entry level. It will affect everyone at every level. The skills I’m advocating for, good judgement, systems thinking, solution design, are the hardest to encode and the hardest to master, which is exactly what makes them the most durable advantage you can build. Invest in them not because they make you permanently safe, but because they’re the last skills standing when everything and everyone else gets automated.
I’m still trying to figure this out. If you’re experimenting with different approaches for hiring, onboarding and scaling juniors up in an AI first world, I’d like to hear what’s working for you and what’s not.