Will a multimodal LLM be able to generate instructions for origami crafts by 2030?
83% chance (closes 2030)

Will resolve to YES if someone leaves a comment here describing a prompt format which, for a given accessible LLM, manages to generate correct instructions for making a 3D origami figure of any given animal at least 75% of the time. The instructions can be in any form (text, image, speech, video...), as long as they are unambiguous, possible to follow, and achieve the correct shape (admittedly this can be a bit subjective, hence the 75% allowance).
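For illustration only (not part of the resolution criteria): a qualifying comment might describe a prompt format roughly like the sketch below. The OpenAI Python client and the "gpt-4o" model name are assumptions used as stand-ins; any accessible multimodal LLM and any wording could qualify.

```python
# Hypothetical sketch of a prompt format a qualifying comment might describe.
# The OpenAI client and the "gpt-4o" model name are illustrative stand-ins,
# not part of this market's resolution criteria.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def origami_instructions(animal: str) -> str:
    """Ask the model for step-by-step folding instructions for one animal."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an expert origami teacher. Produce numbered, "
                    "unambiguous folding steps (valley/mountain folds, reference "
                    "points, orientation) that a beginner can follow with a "
                    "single square sheet of paper to reach a 3D figure."
                ),
            },
            {
                "role": "user",
                "content": f"Give complete folding instructions for a 3D origami {animal}.",
            },
        ],
    )
    return response.choices[0].message.content


print(origami_instructions("crane"))
```

A resolver would then run such a format across many animal requests and check whether at least 75% of the generated instructions are unambiguous, foldable, and end up in the right shape.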


The trouble I see here is that I don't expect the AI models that fill the role of ChatGPT and similar in 2030 to still be referred to as "LLMs". They will model language, but only as a subset of their more important task of general world modeling. So the question seems a bit off.

@JohnEg GPT-4 is already multimodal, not modelling only language, but we still widely call it an LLM. I really hope you turn out to be right and research moves more towards "general world modelling" instead of larger and larger models on more data, but I am pessimistic. Ultimately you did interpret the spirit of the question correctly ("AI models that fill the role of ChatGPT and similar in the year 2030"), so I don't think the question needs an urgent fix.
