Resolves positively if there is an AI that can succeed at a wide variety of computer games (e.g. shooters, strategy games, flight simulators). Its programmers get a short amount of time (days, not months) to connect it to the game. It doesn't get a chance to practice, and it has to play at least as well as an amateur human who also hasn't had a chance to practice (this might be very badly), and improve at a rate not too far off the rate at which the amateur human improves (one OOM is fine, just not millions of times slower).
As long as it can do this over 50% of the time, it's okay if there are a few games it can't learn.
"improve at a rate not too far off from the rate at which the amateur human improves (one OOM is fine, just not millions of times slower)."
Is this measured in wall clock time or "gameplay time"? For example, AlphaZero matched Stockfish in 4 hours, but that was equivalent to tens or hundreds of thousands of games of self-play. Say the AI improves at human level in wall clock time but accomplishes it by playing many instances of the game on thousands of computers, possibly sped up. Does that count?
@MaxMorehead I’d assume gameplay time over wallclock time within reasonable limits.
Generally when people talk about RL being extremely data inefficient, they’re making the claim in terms of the necessity of a large number of rollouts, not in reference to wallclock time. Doesn’t make sense to focus on wallclock time when running twice as many instances gets you ~twice the RL data. It’d be weird for sample efficiency to be a function of compute allocation over time.
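The arithmetic behind this point can be sketched as follows (all numbers are illustrative, not claims about any particular system): total RL data scales with the number of parallel game instances, while wall-clock time is set only by the per-instance workload.

```python
# Illustrative sketch: sample efficiency is a function of total rollouts,
# while wall-clock time depends on how many game instances run in parallel.

def total_rollouts(instances: int, rollouts_per_instance: int) -> int:
    """Total RL training data collected across parallel game instances."""
    return instances * rollouts_per_instance

def wallclock_hours(rollouts_per_instance: int, minutes_per_rollout: float) -> float:
    """Wall-clock time is set by the per-instance workload, not total data."""
    return rollouts_per_instance * minutes_per_rollout / 60

# One instance: 1,000 rollouts.
single = total_rollouts(instances=1, rollouts_per_instance=1000)

# 1,000 parallel instances: 1,000x the data in the same wall-clock time.
parallel = total_rollouts(instances=1000, rollouts_per_instance=1000)

print(single, parallel)  # 1000 1000000

# Wall-clock time is identical in both cases (~167 hours at 10 min/rollout),
# so "improvement per wall-clock hour" conflates sample efficiency with
# how much compute was allocated.
print(round(wallclock_hours(1000, 10.0), 1))  # 166.7
```

So judging improvement per unit of wall-clock time would reward whoever ran the most instances, which is why sample efficiency is usually stated per rollout.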
@AdamK This is what I think, considering the existence of Scott's other similar question. But I wanted some more confirmation before I upped my NO stake.
@ScottAlexander I'm assuming this AI needs to play in real time, considering it could be playing randomly selected multiplayer games?
Welp, can't exactly liquidate this market for charity anytime soon.
@DanW do I understand right? We can donate to charity now, and then buy mana at 1/10 the cost after May 1?
I'm only holding 2% of what you are, but I put up a limit order in case you go for it. I might lower it once I read the fine print on the pivot.
If those are the terms, you won't be the only one trying to liquidate. We just have a coordination problem.
Can you change the title for this market and for e.g. the abortion market, or extend the close time? If such an AI is developed in Dec 2028, for example, it will be true that "In 2028, an AI was able to [...]", but this would resolve NO.
@12c498e all of the Scott Alexander 5 year markets are like this. I guess it's common knowledge if you read his stuff, but I agree with you and think it's important for all the information to be in the market. I've bet heavily in some of these and wonder if there's anything else I don't know about the market resolution. It's much more fun to bet on the event than on nuances of a creator's intention.
I spoke a bit about this market in a comment on another market:
https://manifold.markets/ScottAlexander/in-2028-will-an-ai-be-able-to-gener#xauyrw6cY45QGI3yxks7