MANIFOLD
Will tweet about ~"AI making human forecasting obsolete" hold up?
Resolved NO on Sep 9

Criteria for YES: a change that happens within 60 days (a month plus generous leeway) of February 21st will make human forecasting largely obsolete. This would be decided by forecasting scores that are competitive (top quartile?) with top-performing humans in a closed setting, or something like top decile on open platforms such as Metaculus, Manifold, or others.

If this can't be either extended (a seemingly competitive model was released in the permitted window, but we so far lack data) or resolved by EOY 2024, this resolves NO.

Resolution will be decided by me subjectively, and I will never bet in this market.

I would be happy to develop the resolution criteria further with the help of other users.
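To make the "top quartile" criterion concrete, here is a minimal sketch of how such a check could work: score each forecaster with the Brier score (lower is better) and ask whether the AI beats at least 75% of the human field. All names and numbers below are hypothetical illustrations, not actual tournament data or Manifold's scoring method.

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def in_top_quartile(ai_score, human_scores):
    """True if the AI's score beats at least 75% of the human field."""
    beaten = sum(1 for s in human_scores if ai_score < s)
    return beaten / len(human_scores) >= 0.75

# Hypothetical closed tournament: eight human forecasters and one AI,
# all scored on the same question set.
humans = [0.12, 0.15, 0.18, 0.20, 0.22, 0.25, 0.28, 0.30]
ai = brier_score([0.9, 0.2, 0.7], [1, 0, 1])  # ~0.047
print(in_top_quartile(ai, humans))  # True: beats the whole field
```

A real check would of course also need a matched question set and a defensible definition of who counts as the human field.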


🏅 Top traders

#  Trader  Total profit
1          Ṁ373
2          Ṁ87
3          Ṁ78
4          Ṁ25
5          Ṁ22

This could probably resolve NO?

@StopPunting sorry, I had missed this and forgotten about this market.

Despite today's release, it didn't change anything within the permitted window.

bought Ṁ10 YES

https://arxiv.org/pdf/2402.18563.pdf Performs somewhat close to the crowd. Unclear from a quick skim how it performs compared to the top decile of individual forecasters.

@EliLifland Ah, interesting, thanks for posting this!

Skimmed it very quickly. The below is from page 10; it seems like it would have a hard time beating the crowd on most of the platforms?

Do we have any nice comparisons on Manifold for what percentile of traders are roughly as good as the wisdom of the crowd?

Even if it does beat the crowd, the OP grantmaking team could just focus more on LLMs. Don't see how they would be obsolete.

This question is about a specific operationalization

curious why you guys hold yes here?

bought Ṁ1 NO

@jacksonpolack I'll take a punt. I heard a compelling argument that AI would be able to learn from prediction markets the way LLMs learned from our Reddit comments. I doubt that can happen anytime soon, but maybe somebody knows something I don't about Large Prediction Models around the corner.

This would be decided by forecasting scores that are competitive (top quartile?) with top performing humans in a closed setting

Are there examples of what the "closed setting" test might look like?

So it needs to score in the >=75th percentile of "top forecasters" -- what's a top forecaster?

@ScroogeMcDuck I could spend more time finding tournaments that qualify, or ones that don't and why, if a lot of your probability hinges on it. Basically I am imagining something like IARPA or Good Judgment inviting experts along with whatever AI tool Dan Hendrycks refers to here.

The description you mention assumes that the crowd invited into the closed setting would be ~expected top forecasters of some kind, e.g. domain experts or people with a strong forecasting record. So being in the top quartile of such a tournament is enough; it doesn't have to be >=75th percentile of some other "top" criterion in addition.
I.e. a closed setting where anyone can join (say, with a membership requirement like Manifold/Metaculus) wouldn't qualify for this.

@HenriThunberg I think that works, thanks!

Bayesian bought Ṁ50 NO

@Bayesian oh, I totally should have checked that nobody else had done this first 🙃 Sorry about duplication.

bought Ṁ50 NO

@HenriThunberg all good, they're pretty different. There's another near-duplicate too; it's a very spicy take, so bound to get some attention lol

@HenriThunberg Duplication by different authors is good! Here, have another:

https://manifold.markets/ScroogeMcDuck/will-a-poll-say-ai-obsoleted-human
