Resolved YES (Mar 17)

With gratitude to https://manifold.markets/MatthewBarnett/will-gpt4-get-the-monty-fall-proble

I will ask GPT-4 this question when I get the chance, either personally or by getting a friend to try it for me.

Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, receives a phone call. You overhear him saying on the phone, "Yeah, I know, he picked the right door, but what can I do?" He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?

This question resolves YES if GPT-4 says there is no advantage to switching, or if it presents a coherent argument that switching is advantageous because the host's statement was intentional social manipulation meant to keep you on door 1. It resolves NO otherwise.
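For reference, the intended logic: in the classic Monty Hall problem switching wins 2/3 of the time, but here the overheard call reveals that door 1 is already correct, so switching cannot help. Below is a minimal Python sketch of the contrast; this is my own illustration, not part of the market, and the function name and trial count are arbitrary.

```python
import random

def classic_monty_hall(trials: int = 100_000) -> float:
    """Classic setup: the host knowingly opens a goat door, then offers a switch."""
    switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the one remaining unopened door.
        switched = next(d for d in range(3) if d not in (pick, opened))
        switch_wins += switched == car
    return switch_wins / trials

print(f"Classic Monty Hall, switch win rate: {classic_monty_hall():.3f}")  # ~0.667

# This market's variant: the overheard call conditions on pick == car,
# so staying wins with certainty and switching wins with probability 0.
```

A model that pattern-matches the prompt to the classic problem will recommend switching, which is exactly the failure mode this market is probing.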

I will only consider the actual first answer that I get from GPT-4, without trying different prompts. I will not use screenshots that people send me to resolve the question.

Close date updated to 2024-01-01 3:59 pm


🏅 Top traders

# | Name | Total profit
1 |  | Ṁ277
2 |  | Ṁ116
3 |  | Ṁ81
4 |  | Ṁ54
5 |  | Ṁ38
predicted YES

Got it right one-shot. Resolving.

predicted YES

@Adam This should resolve, I think

predicted YES

@NathanpmYoung yeah, I need to run it or get a friend to run it, given the precommitment not to accept unsolicited screenshots.

bought Ṁ35 of YES

Attempts 1, 2, 3, 4, 5, and 6 all get the correct answer. Attempt 2 even gives both of the correct answers!

bought Ṁ100 of YES

@Nikola Attempt 7 is also correct! 0 incorrect so far.

bought Ṁ50 of YES

(The current version of) Bing reliably gets this right; I consistently get responses like the following.

bought Ṁ10 of NO

Updated the resolution criteria slightly: I will also resolve YES if GPT makes a coherent argument for switching based on the statement being an attempt at social manipulation to get you to stick with door 1. My primary intent is to capture the failure mode of GPT interpreting this question as the Monty Hall problem.

predicted NO

@Adam If you feel grievously wronged by this, i.e. a large portion of your bet was predicated on that case resolving NO, let me know and I'll manalink you a refund, I guess.

bought Ṁ40 of YES

I'll note I haven't tested GPT-3 with this one!

predicted YES

@Adam just checked ChatGPT one-shot and it gets it wrong in an entertaining manner

bought Ṁ10 of NO

@Adam A lot of people here think AGI will be here before 2025, and that ten minutes after power-up it will master string theory and instantiate itself in a new vacuum state that destroys our universe. Well, the next step is picking the right door when you've been told which one. Good luck.

bought Ṁ10 of NO

@Zardoru Don't Match The Pattern Challenge (difficulty impossible)