Asking GPT-3 MTG rules questions returns some rather nonsensical answers. For example:
This answer makes no sense, and those cited rules don't even exist.
This was from a prompt where I supplied it with a list of other rules questions and correct answers to them, so it does "know" that it's supposed to be answering coherently and correctly. I can also tell from other experimentation that card text and the Magic Comprehensive Rules document were part of GPT-3's training data. GPT-3 is clearly not powerful enough to properly understand such a complicated technical system.
This market resolves to YES if, by the beginning of 2030, I have access to a system that can give me correct answers and explanations to Magic rules questions in natural English text. Specifically:
I will supply it with 20 completely random unreleased questions from RulesGuru. (Plus card text if necessary.) Over those 20 questions, it must have at least a 90% success rate on giving the right answer, and at least a 50% success rate on providing an explanation that clearly and correctly explains why things work that way. A correct explanation can leave out a small detail here or there, but it must be good enough to help a human understand the material, and must avoid anything blatantly wrong, like referencing parts of the rules that are irrelevant or don't exist.
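For concreteness, those thresholds work out to at least 18 of 20 correct answers and at least 10 of 20 acceptable explanations. A minimal sketch of the check (the function and parameter names are my own, not part of the market's criteria):

```python
# Hypothetical sketch of the resolution check described above.
# answer_correct / explanation_correct: one boolean per question,
# as graded by me (the market creator).

def resolves_yes(answer_correct, explanation_correct, n_questions=20):
    assert len(answer_correct) == len(explanation_correct) == n_questions
    answer_rate = sum(answer_correct) / n_questions
    explanation_rate = sum(explanation_correct) / n_questions
    # >= 90% right answers AND >= 50% clear, correct explanations
    return answer_rate >= 0.9 and explanation_rate >= 0.5
```

So a system that nails 18 answers but only gives 9 good explanations would still fall short.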
For a harder version of this question, see /IsaacKing/will-ai-be-superhuman-at-mtg-rules.
Update 2025-02-21 (PST) (AI summary of creator comment): New Resolution Criteria:
The resolution criteria have been updated to be stricter than those originally described.
The detailed, updated criteria can be found at the linked page and replace the previous criteria.