See this page for information about the competition: https://lab42.global/arcathon/. See also this podcast for an interview with Francois Chollet about the challenge and his predictions: https://www.dwarkeshpatel.com/p/francois-chollet
The fundamental characteristics of an "LLM" for the purposes of this question:
Sequence-to-sequence type model. (State-space and transformer models would both count, for example.)
No substantial post-hoc computation (such as tree search); see the sketch after this list. Sampling as it is practiced now is allowed. Prompting as it is practiced now is allowed.
I will use my best judgement if it’s ambiguous. The main point is that the model should be in the class of models that LLM-naysayers (Chollet especially) refer to when they assert that LLMs cannot solve ARC narrowly and are off-pathway for AGI generally.
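To make the allowed/disallowed distinction concrete, here is a minimal illustrative sketch. Everything in it is hypothetical: `ToyModel`, `generate`, and `score` are stand-ins, not any real model API, and the search wrapper is only a rough Tree-of-Thoughts-style caricature.

```python
import random

class ToyModel:
    """Hypothetical stand-in for an LLM; any real API will differ."""
    def generate(self, prompt, temperature=1.0):
        # Ordinary sampling: one continuation per call.
        return prompt + " " + random.choice(["answer A", "answer B", "answer C"])

    def score(self, text):
        # Stand-in for the heuristic/value function a search wrapper would use.
        return random.random()

def plain_sampling(model, prompt):
    # Allowed under the criteria above: sampling as it is practiced now.
    return model.generate(prompt, temperature=0.8)

def tree_search_wrapper(model, prompt, breadth=3, depth=2):
    # NOT allowed: substantial post-hoc computation. The wrapper branches over
    # many sampled continuations and keeps only the best-scoring candidates at
    # each step (a Tree-of-Thoughts-style search around the model).
    frontier = [prompt]
    for _ in range(depth):
        candidates = [model.generate(p, temperature=0.8)
                      for p in frontier for _ in range(breadth)]
        frontier = sorted(candidates, key=model.score, reverse=True)[:breadth]
    return frontier[0]

model = ToyModel()
print(plain_sampling(model, "Solve this ARC task:"))      # counts as an LLM
print(tree_search_wrapper(model, "Solve this ARC task:")) # does not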
See also:
@Tossup Will this resolve YES if the LLM system is not wrapped in an external tree search algorithm (e.g. Tree of Thoughts), but something like tree search was still used in its training/fine-tuning regime, as some people have speculated about e.g. Q*? That is, the result is still an LLM that runs inference in the regular way current LLMs do, but the training/fine-tuning might be somewhat or substantially more advanced.
Said another way: if training is advanced and inference is simple for a system that wins the prize, will this still resolve YES?