What will be the best Remote Labor Index score by Dec 31, 2026?
Current market probabilities by bucket:

  • <10%: 11%
  • 10-20%: 12%
  • 20-30%: 15%
  • 30-40%: 13%
  • 40-50%: 12%
  • 50-60%: 12%
  • 60-70%: 10%
  • 70-80%: 9%
  • ≥80%: 7%

This market matches Remote Work: Remote Labor Index from the AI 2026 Forecasting Survey by AI Digest.

Resolution criteria

Resolves to the highest automation rate reported on the Remote Labor Index (RLI) benchmark as of December 31, 2026, based on official Scale AI / CAIS releases (paper or leaderboard).

If a clear successor version of RLI is released with comparable scope, the question resolves according to the newest official version.

Which AI systems count?

Any AI system counts if it operates within realistic deployment constraints and doesn't have unfair advantages over human baseliners.

Tool assistance, scaffolding, and any other inference-time elicitation techniques are permitted as long as:

  • No unfair and systematic advantage. The AI system has no systematic unfair advantage over the humans described in the Human Performance section (e.g., having multiple outputs autograded while the humans don't, or having internet access when the humans don't).

  • Human cost parity. Having the AI system complete the task does not use more compute than could be purchased with the wages needed to pay a human to complete the same task to the same level. Any additional costs incurred by the AIs or humans (such as GPU rental costs) are included in the parity estimation.
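
A rough sketch of the per-task parity check described in the bullet above, assuming we already have dollar estimates for the AI attempt (inference compute plus any extra costs such as GPU rental) and for the human baseline (wages plus extra costs). The function name and the numbers are illustrative, not taken from the benchmark.

```python
def satisfies_cost_parity(ai_cost_usd: float, human_cost_usd: float) -> bool:
    """Check the human-cost-parity constraint for a single task.

    ai_cost_usd:    total cost of the AI attempt, including inference compute
                    and any additional costs (GPU rental, tool/API fees, ...).
    human_cost_usd: wages plus additional costs needed to pay a human to
                    complete the same task to the same level.
    """
    return ai_cost_usd <= human_cost_usd


# Illustrative numbers: a $40 AI attempt on a task a human would do for $60
# satisfies parity; a $75 attempt on the same task would not.
assert satisfies_cost_parity(40.0, 60.0)
assert not satisfies_cost_parity(75.0, 60.0)
```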

The pass@k elicitation technique is not applicable to this benchmark, as each project is evaluated once by human evaluators comparing the AI deliverable against the human reference.

If there is evidence of training contamination leading to substantially increased performance, scores will be adjusted accordingly or disqualified.

If a model is released in 2026 but evaluated after year-end, the resolver may include it at their discretion, provided they judge that the later evaluation conferred no unfair advantage (for example, the scaffolding used should have been available within 2026).

Eli Lifland is responsible for final judgment on resolution decisions.

Human cost estimation process:

  1. Rank questions by human cost. For each question, estimate how much it would cost for humans to solve it. If humans fail on a question, factor in the additional cost required for them to succeed.

  2. Match the AI's accuracy to a human cost total. If the AI system solves N% of questions, identify the cheapest N% of questions (by human cost) and sum those costs to determine the baseline human total.

  3. Account for unsolved questions. For each question the AI does not solve, add the maximum cost from that bottom N%. This ensures both humans and AI systems are compared under a fixed per-problem budget, without relying on humans to dynamically adjust their approach based on difficulty.
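
A minimal sketch of steps 1-3 above, assuming we have a per-question human cost estimate and a flag for whether the AI solved each question. The function name and inputs are illustrative, not part of the official procedure.

```python
def human_cost_baseline(human_costs, ai_solved):
    """Estimate the baseline human cost total described in steps 1-3.

    human_costs: per-question human cost estimates, including the extra cost
                 needed for humans to eventually succeed where they first fail.
    ai_solved:   parallel list of booleans, True where the AI solved the question.
    """
    n_total = len(human_costs)
    n_solved = sum(ai_solved)

    # Steps 1-2: take the cheapest n_solved questions by human cost and sum them.
    cheapest = sorted(human_costs)[:n_solved]
    baseline = sum(cheapest)

    # Step 3: for every unsolved question, add the maximum cost within that
    # cheapest subset, so both sides face a fixed per-problem budget.
    if cheapest:
        baseline += (n_total - n_solved) * max(cheapest)

    return baseline


# Illustrative numbers: five questions, AI solves 3 of 5 (60%).
costs = [20, 35, 50, 80, 120]
solved = [True, True, False, True, False]
# Cheapest 3 costs: 20 + 35 + 50 = 105; plus 2 unsolved * 50 = 100 -> 205.
assert human_cost_baseline(costs, solved) == 205
```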

Buckets are left-inclusive: e.g., 10-20% includes 10.0% but not 20.0%.
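
A small sketch of how a final score would map to a bucket under left-inclusive boundaries, with bucket labels mirroring the answer list above (ASCII ">=80%" stands in for "≥80%"); the function name is illustrative.

```python
def resolution_bucket(score_pct: float) -> str:
    """Map a best reported RLI automation rate (in percent) to its bucket.

    Boundaries are left-inclusive: exactly 10.0 falls in "10-20%",
    and exactly 20.0 falls in "20-30%".
    """
    if score_pct < 10.0:
        return "<10%"
    if score_pct >= 80.0:
        return ">=80%"
    lower = int(score_pct // 10) * 10   # e.g. 23.5 -> 20
    return f"{lower}-{lower + 10}%"


assert resolution_bucket(9.9) == "<10%"
assert resolution_bucket(10.0) == "10-20%"
assert resolution_bucket(20.0) == "20-30%"
assert resolution_bucket(80.0) == ">=80%"
```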
