Will any agent perform better on Minecraft (or a comparable open-world game) after being fine-tuned on a manual by 2027?
2027 · 72% chance
To clarify: the experiment is that there are two copies of an agent that runs on Minecraft (or some other open-world game environment). The agent has the capacity to be fine-tuned with text. One copy is passed a manual for the game as text (or text + images, but *not* video); the other runs without any fine-tuning. Will the former perform better than the latter (either better sample efficiency or better final reward)? The agent can't have been trained on that env before, but it can be trained on other envs/data beforehand (e.g. it's okay if there's a pretrained LLM in the loop).
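For concreteness, here is a minimal sketch of the comparison in Python. `Agent`, `ManualEnv`, and every method name are hypothetical stand-ins invented for illustration (none of this is a real agent or environment API); the point is only the shape of the protocol: two copies of the same pretrained agent, one fine-tuned on manual text, both evaluated from scratch on the held-out environment.

```python
"""Minimal sketch of the resolution experiment. Agent and ManualEnv are
hypothetical stand-ins for illustration, not any real library's API."""

import random


class Agent:
    """Hypothetical pretrained agent that can be fine-tuned on text."""

    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)
        self.manual = None

    def finetune(self, text: str) -> None:
        # Stand-in for a real text fine-tune of the agent's weights.
        self.manual = text

    def act(self, obs: str) -> str:
        return self.rng.choice(["mine", "craft", "explore"])


class ManualEnv:
    """Hypothetical open-world environment with a gym-style loop."""

    def reset(self) -> str:
        self.t = 0
        return "initial observation"

    def step(self, action: str):
        self.t += 1
        reward = 1.0 if action == "craft" else 0.0
        done = self.t >= 50
        return "next observation", reward, done


def run_episodes(agent: Agent, env: ManualEnv, n_episodes: int) -> list[float]:
    """Per-episode returns, so both sample efficiency (return vs. episode
    index) and final return can be compared between the two copies."""
    returns = []
    for _ in range(n_episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, reward, done = env.step(agent.act(obs))
            total += reward
        returns.append(total)
    return returns


# Two copies of the same pretrained agent; neither has seen this env before.
baseline, tuned = Agent(seed=1), Agent(seed=1)

# The only difference between the copies: one is fine-tuned on the manual
# (text only, not video, per the market description).
tuned.finetune("Craft tools before mining. Wood comes from trees. ...")

env = ManualEnv()
print("baseline:", run_episodes(baseline, env, n_episodes=10))
print("manual:  ", run_episodes(tuned, env, n_episodes=10))
```

The market would then resolve on whether the fine-tuned copy shows either better sample efficiency (higher returns earlier in the episode sequence) or a better final return.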

There is at least one paper claiming to do this for Atari environments: https://arxiv.org/abs/2302.04449

I'm not sure that human agents perform much better given a manual. Maybe instead give the agent access to the subreddit for the game?
@MartinRandall I will accept essentially any vaguely guide-like text. The specific details of the text aren't what this question is getting at.
How does this resolve if no one attempts this experiment?
@April If nothing like this gets attempted, I'll resolve it N/A. I'm not very interested in the probability that the experiment is performed at all.
Publication bias means that if someone tries it and it doesn't work, they're likely not to report the result, whereas if they try it and it does work, they certainly will.

@JamesBabcock Which means they can get Manifold bux by posting their experiment, which should compensate them for any scientific reputation loss from posting a failure case, right?
