In 2028, will an AI be able to play randomly selected computer games at human level without getting to practice?
2028: 34% chance

Resolves positively if there is an AI which can succeed at a wide variety of computer games (e.g. shooters, strategy games, flight simulators). Its programmers can have a short amount of time (days, not months) to connect it to the game. It doesn't get a chance to practice, and has to play at least as well as an amateur human who also hasn't gotten a chance to practice (this might be very badly), and it has to improve at a rate not too far off from the rate at which the amateur human improves (one OOM is fine, just not millions of times slower).

As long as it can do this over 50% of the time, it's okay if there are a few games it can't learn.
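Read as a checklist, the criteria might look something like the sketch below. All scores and numbers are hypothetical placeholders; the actual judging would presumably be informal rather than scripted.

```python
# Minimal sketch of the resolution criteria above, with made-up numbers.

def game_passes(agent_score, human_score, agent_improvement, human_improvement):
    """One game counts as a success if the AI plays at least as well as the
    unpracticed amateur and improves no more than one order of magnitude slower."""
    return (agent_score >= human_score
            and agent_improvement >= human_improvement / 10)

def market_resolves_yes(per_game_results):
    """'Over 50% of the time' -> the AI succeeds on a majority of sampled games."""
    return sum(per_game_results) / len(per_game_results) > 0.5

# Hypothetical results on five randomly selected games:
results = [
    game_passes(agent_score=12, human_score=10, agent_improvement=3,   human_improvement=5),
    game_passes(agent_score=7,  human_score=9,  agent_improvement=4,   human_improvement=4),
    game_passes(agent_score=55, human_score=40, agent_improvement=3,   human_improvement=20),
    game_passes(agent_score=2,  human_score=2,  agent_improvement=0.5, human_improvement=1),
    game_passes(agent_score=30, human_score=25, agent_improvement=10,  human_improvement=8),
]
print(market_resolves_yes(results))  # True: 4 of 5 games pass
```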


Welp, can't exactly liquidate this market for charity anytime soon.

@DanW do I understand right? We can donate to charity now, and then buy mana at 1/10 the cost after May 1?

I'm only holding 2% of what you are, but I put up a limit order in case you go for it. I might lower it once I read the fine print on the pivot.

If those are the terms you won't be the only one trying to liquidate. We just have a coordination problem.

@DanW you can find someone to buy out your position at a discount

@nikki how do I do that?

@DanW go to Discord limit-order land

@DanW In addition, I am willing to buy your positions at a discount, just let me know which ones.

Is it allowed to be differently bad from a human? A common pattern in AI capabilities is that AIs make categorically different mistakes than humans do.

predicts YES

Can you change the title for this market and for e.g. the abortion market, or extend the close time? If such an AI is developed in Dec 2028, for example, it will be true that "In 2028, an AI was able to [...]", but this would resolve NO.

predicts YES

@12c498e all of the Scott Alexander 5 year markets are like this. I guess it's common knowledge if you read his stuff, but I agree with you and think it's important for all the information to be in the market. I've bet heavily in some of these and wonder if there's anything else I don't know about the market resolution. It's much more fun to bet on the event than on nuances of a creator's intention.

I hope I lose all my money on this market, but I really don't see it. This requires generalized human intelligence, which seemingly requires a lot of moonshot breakthroughs in theory & compute.

predicts YES

DreamerV3 + multi game pretraining + etc

predicts YES
bought Ṁ100 of YES

wtf lol this market is so underpriced

predicts NO

@L Or you are overhyped. We shall see 😛

Can it practice similar games? When a human is playing a video game for the first time, they're usually drawing on experiences from playing other video games, and video games take advantage of this by making many of the inputs/outputs similar across games (left stick usually moves you around, right stick makes you look around, etc.)
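A small illustration of that shared-conventions point: if an agent already knows the common controller layout from other games, most of a new game's inputs are familiar. The mapping below is invented for illustration and isn't tied to any real game's bindings.

```python
# Shared controller convention across many console games (illustrative only).
COMMON_CONTROLLER = {
    "left_stick":    "move",              # locomotion in most 3D games
    "right_stick":   "look/aim",          # camera or aiming
    "right_trigger": "shoot/accelerate",
    "south_button":  "jump/confirm",
}

def transferable_fraction(game_bindings):
    """How much of a new game's control scheme matches the common convention."""
    matches = sum(1 for control, action in game_bindings.items()
                  if COMMON_CONTROLLER.get(control) == action)
    return matches / len(game_bindings)

# A hypothetical shooter that follows the convention except for one binding:
shooter = {"left_stick": "move", "right_stick": "look/aim",
           "right_trigger": "shoot/accelerate", "south_button": "melee"}
print(transferable_fraction(shooter))  # 0.75
```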

predicts NO

I think this would require human level generality. Current video game AIs are this good because they get to play for so long they basically explore the whole state space of the game - not literally the whole state space, but for each possible state, the AI experienced analogous states during the training phase.

When it comes to games like puzzle games or strategy games, it seems to me that this requires having the same ability as humans to generalize from little experience, and is nearly equivalent to predicting human-level AGIs before 2028.
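To put rough numbers on that sample-efficiency gap (purely illustrative assumptions, not measurements):

```python
# Back-of-the-envelope comparison of an amateur human's experience with a
# typical RL training budget. Both figures are assumed orders of magnitude.

human_hours = 5                        # an amateur getting noticeably better
human_steps = human_hours * 3600 * 5   # ~5 decisions per second -> ~9e4 steps

typical_rl_frames = 2e8                # common Atari-scale training budget

gap = typical_rl_frames / human_steps
print(f"human steps ≈ {human_steps:.0e}, RL frames ≈ {typical_rl_frames:.0e}")
print(f"gap ≈ {gap:.0f}x (~{len(str(int(gap))) - 1} orders of magnitude)")
# gap ≈ 2222x, i.e. ~3 OOMs -- well outside the one-OOM slack the market allows.
```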

bought Ṁ5 of NO

one actionable consequence of this opinion is that I think the divergence between these two questions is wildly larger than it should be. I will be taking this bet. https://manifold.markets/ScottAlexander/by-2028-will-there-be-a-visible-bre

predicts NO

bought Ṁ100 of NO

This is essentially my view. An agent that passes this test would be dramatically more economically useful than almost any form of historical automation, since the conditions are so general.

bought Ṁ10 of YES

70%

What kind of learning during the game would be OK?

E.g.:
- self-learning after the game has already started?
- using a training algorithm (e.g. backprop) to update an intermediate model (e.g. the reward function)?

For instance, with the world-model approach [1], the agent first learns vision, then it learns to predict the world, and then it learns strategy in that imagined world. The world model is created by observing the world, so there's no self-training there. The control model is created by self-learning on the world model, but not on the real game. So that would be self-learning, not on the game itself but inside the agent's own mind (kind of like a human player simulating the game world in their head to come up with strategies before trying them out in the real game).

In this example: if an AI trained a world model (the V and M models in the paper) by observing games, but then used a pre-trained* C model, how would that resolve this market? (See the sketch below.)

* Pre-trained on other games, not the specific one in question.

[1] https://worldmodels.github.io/
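For readers unfamiliar with the paper, here is a toy sketch of that V/M/C split. The numpy classes are stand-ins, not the real models; nothing here is actually trained, it only shows where each part would get its data from.

```python
import numpy as np

rng = np.random.default_rng(0)

class V:
    """Vision model: compresses a frame to a small latent code (a VAE in the paper).
    Fit by observing real gameplay frames."""
    def encode(self, frame):
        return frame.reshape(-1)[:8]               # toy stand-in for a learned encoder

class M:
    """Memory/world model: predicts the next latent from (latent, action).
    Also fit from observed gameplay, no acting required."""
    def __init__(self):
        self.W = rng.normal(size=(9, 8)) * 0.1
    def predict(self, z, a):
        return np.tanh(np.concatenate([z, [a]]) @ self.W)

class C:
    """Controller: a small policy over latents. In the paper it is trained
    entirely inside M ("dreaming"), never on the real game."""
    def __init__(self):
        self.w = rng.normal(size=16)
    def act(self, z, z_pred):
        return float(np.concatenate([z, z_pred]) @ self.w > 0)

def dream_rollout(v, m, c, start_frame, steps=10):
    """Roll the policy forward inside the learned world model -- the only
    place C's 'practice' happens in the world-models setup."""
    z = v.encode(start_frame)
    total = 0.0
    for _ in range(steps):
        a = c.act(z, m.predict(z, 0.0))
        z = m.predict(z, a)
        total += z.mean()                          # stand-in reward signal
    return total

print(dream_rollout(V(), M(), C(), rng.normal(size=(8, 8))))
```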
