Will any agent perform better on Minecraft (or comparable open world game) after being fine-tuned on a manual by 2027?
2027 · 72% chance
To clarify: the experiment compares two copies of an agent that plays Minecraft (or some other open-world game environment). The agent has the capacity to be fine-tuned on text. One copy is given a manual for the game as text (or text + images, but *not* video); the other runs without any fine-tuning. Will the former perform better than the latter (either better sample efficiency or better final reward)? The agent can't have been trained on that environment before, but it may have been trained on other environments/data beforehand (e.g. it's fine if there's a pretrained LLM in the loop).
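A minimal sketch of that comparison, assuming a hypothetical `Agent` interface with `fine_tune` and `play_episode` methods and an `evaluate` helper; none of these names come from the question itself, they only illustrate how the two copies would be compared on a held-out environment.

```python
# Illustrative sketch of the resolution experiment: two copies of the same
# pretrained agent, one fine-tuned on the game manual, both evaluated on an
# environment neither copy has been trained on before.
# The Agent/evaluate interfaces are assumptions for illustration, not a real API.

from copy import deepcopy
from statistics import mean


class Agent:
    """Placeholder for an agent that can be fine-tuned on text."""

    def fine_tune(self, text: str) -> None:
        # e.g. gradient updates on the manual, or conditioning a pretrained LLM
        pass

    def play_episode(self, env) -> float:
        # returns the episode reward; placeholder value here
        return 0.0


def evaluate(agent: Agent, make_env, n_episodes: int = 100) -> float:
    """Average episode reward on a fresh environment the agent never trained on."""
    return mean(agent.play_episode(make_env()) for _ in range(n_episodes))


def run_comparison(base_agent: Agent, manual_text: str, make_env) -> bool:
    baseline = deepcopy(base_agent)        # copy 1: no fine-tuning
    manual_agent = deepcopy(base_agent)    # copy 2: fine-tuned on the manual
    manual_agent.fine_tune(manual_text)    # text (or text + images), not video

    # The question resolves YES if the manual-conditioned copy does better,
    # either in final reward (as checked here) or in sample efficiency.
    return evaluate(manual_agent, make_env) > evaluate(baseline, make_env)
```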