Will there be entry-level AI coders by 2026?
Resolved NO (Jan 6)

"Entry-level coder": an AI can be given natural language descriptions of coding tasks (emails, issues on a tracker, a spec, etc) and go through the full "just out of undergrad" coding loop: branch/fork, make edits, write tests, submit PRs, go back and forth with managers about testing / requirements, etc.

If extra infrastructure to enable the AI (e.g. tooling to let it work with CI) has to be built, that still counts.
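For concreteness, here is a rough sketch of that loop as plain automation. The `llm()`, `apply_patch()`, and `get_review_feedback()` helpers are hypothetical stand-ins, and the git/gh commands are just one way to wire it up; this illustrates the shape of the task, not any particular system.

```python
# Minimal sketch of the "entry-level coder" loop, assuming hypothetical
# llm(), apply_patch(), and get_review_feedback() helpers.
import subprocess

def run(cmd: str) -> str:
    """Run a shell command and return stdout, raising on failure."""
    return subprocess.run(cmd, shell=True, check=True,
                          capture_output=True, text=True).stdout

def apply_patch(patch_text: str) -> None:
    """Apply a unified diff produced by the model (sketch only)."""
    subprocess.run("git apply", input=patch_text, shell=True,
                   check=True, text=True)

def entry_level_loop(task_description, llm, get_review_feedback):
    run("git checkout -b ai/task")                        # branch/fork
    apply_patch(llm(f"Write a patch plus tests for: {task_description}"))
    run("pytest")                                          # run the tests
    run("git add -A && git commit -m 'Implement task' && git push -u origin ai/task")
    run("gh pr create --fill")                             # submit the PR
    # Back and forth with the manager/reviewer until there is no more feedback.
    while (feedback := get_review_feedback()) is not None:
        apply_patch(llm(f"Revise the change given this feedback: {feedback}"))
        run("pytest && git add -A && git commit -m 'Address review' && git push")
```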

  • Update 2025-03-04 (PST) (AI summary of creator comment): Existing AIs Exclusion:

    • Existing AIs (i.e. current generation models available at the time of resolution) cannot be considered as fully replacing entry-level coders.

    • Only AIs that demonstrate the complete "just out of undergrad" coding loop—beyond what current systems can achieve—meet the criteria.


🏅 Top traders

#  Total profit
1  Ṁ3,515
2  Ṁ2,882
3  Ṁ2,398
4  Ṁ1,533
5  Ṁ1,198

This one feels incredibly close but I do not actually find it useful to hand Claude Code a junior dev amount of work and then just review what it submits. Maybe next year.

@vluzko I think this question is misresolved, and that there should have been a discussion period beforehand.

Claude, OAI, and Gemini Deep Research all think that Claude Opus 4.5 + Claude Code + the right MCP connections resolves this YES (Gemini says yes but noodles a bit). I think the prompt is fair, and these are the one-shot responses from each model.

- https://claude.ai/public/artifacts/aa5f56f1-e395-4832-85aa-0b64a828c033
- https://chatgpt.com/share/695d2ef0-c078-8000-a6df-8dd271188f15

- https://gemini.google.com/share/6f369b1808ce

I think we can all agree that current models still have limitations, and that interacting with them and "squeezing the most juice" out of them, so to speak, requires a different frequency and type of interaction than between human coworkers. Then again, language like "just out of undergrad" does not imply an especially strict standard. The question does not reasonably require that the models be Pareto improvements over junior devs, or that they be better than the very best new-graduate SWEs. If we are being realistic about what the median new-grad SWE can do, I just don't think it makes sense to say that the best AI setups are worse, if they are worse at all.

You mentioned that you "do not actually find it useful to hand Claude Code a junior dev amount of work and then just review what it submits". That is just not the same standard as the criteria you provided when you created the question. If that was the standard you were hoping to resolve on, you should have specified it earlier.

@AdamK I can see how you might have interpreted the question as not meaning "I am indifferent between an AI and a new entry-level coder", but quite frankly you had almost four years to request clarifications. If you thought this should resolve YES, why didn't you ask me to resolve it when agentic coders really took off? Or ask why the market was so low?
More generally: if my resolution and the market are in strong agreement, I don't think there's good reason to stop for discussion. The market already represents the consensus; saying the same thing in words usually won't accomplish anything.

@vluzko The expectation is that the resolution criteria act as the basis for resolution; I didn't think it was my responsibility to ask you to apply them. I wasn't keeping a close eye on this market, but certainly would have advocated for a YES resolution earlier if I had known how you were leaning. It's good practice to offer an opportunity for discussion if there is ambiguity in how a question should resolve.

Again, it depends on what the bar is for a "junior dev", but anyone I'd want to hire would be able to do this task easily. Claude could not, despite my spending over $200 USD trying. (There's been one Claude release since then, but anecdotally it does not seem that much better.)

/IsaacKing/will-i-be-able-to-vibecode-a-full-f

@IsaacKing Question for you: of American CS graduates in the year 2019 who worked as full-time SWEs within a year of graduating, what fraction do you suppose would have succeeded at meeting your full project specs in under 80 hours of work, had they been contracted to do so at the end of 2019 and received about the same density of feedback over time as you provided to Claude during your experiment? I think 80% is a very, very generous upper bound.

While I definitely think that your experiment was informative, and spoke saliently to the limitations of current models, I don't think it speaks much to how this question should resolve.

Tasks get refactored. There are definitely jr dev jobs that don't exist today because an AI coding agent used by a sr dev could do everything the business would have needed the jr dev for, without adding workload for the sr dev.

bought Ṁ2,500 NO

We still do not have the full loop. Going back and forth with managers is a deceptively high bar.

@vluzko Can we get more clarity on how good the AI has to be? Existing AIs can already do this, just not very well. (See e.g. the recent Claude plays Pokemon.)

@IsaacKing existing AIs cannot already replace entry-level coders.

@vluzko "Entry-level" is probably relative, I've come across some pretty bad coders.

How well does a model have to be able to do this? It would probably be possible to hack something together with LangChain today; it just wouldn't be particularly good.

@Thomas42 Presumably it would have to be good enough that many companies start actually using AI coders to do tasks that they once hired entry-level just-out-of-undergrad humans to do. So, the AI wouldn't have to be quite as good as existing entry-level coders (since the AI would be much cheaper, so somewhat worse performance might be an acceptable tradeoff), but it would have to be kinda close (because the kind of coder you could hack up today, AutoGPT-style, probably wouldn't actually save enough time/effort that it would be adopted by many companies).

@Thomas42 As well as an entry-level coder. The most straightforward way for this to resolve is if tech companies start using AI to do entry-level coder work.

@vluzko "The most straightforward way for this to resolve is if tech companies start using AI to do entry-level coder work."
Seems nebulous to me. I imagine the default state is: "At least some companies have full AI loops in their systems, but they don't work too well. They don't replace real people one-for-one, but they help amplify them."

There's already a company that does AI PRs for codebases. It's not too hard to do a shitty job at the basics, and for that to be used at least somewhat.

Is it still YES if multiple AI models are involved? For example, if there is a model to take an issue and write out a description of the needed changes, another to write the code, another to write the tests, another to make changes in response to review, etc.

@hyperion Good question: still YES.
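For illustration, a toy sketch of the multi-model split described above; the stage names and the `call_model()` helper are placeholders, not real endpoints or APIs.

```python
# Toy sketch of a multi-model pipeline: separate models (or prompts) for
# planning, coding, testing, and responding to review. call_model() and the
# stage names are hypothetical placeholders.
def call_model(stage: str, prompt: str) -> str:
    """Stand-in for whichever model/API each stage would actually use."""
    raise NotImplementedError(stage)

def handle_issue(issue_text: str):
    plan  = call_model("planner", f"Describe the changes needed for:\n{issue_text}")
    code  = call_model("coder",   f"Write the code for this plan:\n{plan}")
    tests = call_model("tester",  f"Write tests for this code:\n{code}")
    return plan, code, tests

def handle_review(code: str, review_comment: str) -> str:
    return call_model("responder",
                      f"Revise this code given the review:\n{review_comment}\n\n{code}")
```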

