The riddle:
There are 3 people: A, B, and C. Each has a distinct favorite color, which is one of red, green, or blue. They do NOT know each other's favorite colors, and you do not know their favorite colors.
You may ask any number of questions. Each question must be directed at exactly one of the 3 people. Each question and answer to the question is public information known to anyone. The questions do not need to be yes or no questions.
Each of A, B, C has one of three roles: Truth, Lie, and Evil. They know their own role, but NOT those of anyone else. You do not know their roles.
Truth always gives a true answer to a question. If there are no true answers, Truth will decline to answer the question. If there are multiple true answers, Truth will give an arbitrary one.
Lie always gives a false answer to a question. If there are no false answers, Lie will decline to answer the question. If there are multiple false answers, Lie will give an arbitrary one.
Evil can give any answer to any question, or can decline to answer your question. Evil knows your strategy and acts rationally with infinite computational power to minimize your probability of success.
Actions you can take:
Generate a truly random number, only known to you
Ask a question
Goal:
Logically deduce what the favorite colors of A, B, C are.
This will resolve to the highest probability of success of strategies among the comments. I will not bet in this market, but I reserve the right to comment a solution if no better one is found. I will resolve answers YES earlier if strategies are found, and NO earlier if it is proven that no such strategy can exist.
Example: If there existed a strategy that half the time determined the favorite colors of A, B, and C, and the other half of the time, narrowed it down to two possibilities, then its probability of success would be 1/2, as half the time, it logically deduces the favorite colors of A, B, and C, and the other half of the time, it is still unknown.
Edit: Initially, Evil knows your strategy, knows the setup, knows their favorite color, and who you are. They act to minimize your best-case probability of your success across possible scenarios (rather than your worst-case).
Update 2025-08-02 (PST) (AI summary of creator comment): In response to a user question, the creator has clarified the following details about the riddle's setup:
Roles and colors are distinct (i.e., there is exactly one of each).
The participants (A, B, and C) know the full setup of the riddle.
Update 2025-08-02 (PST) (AI summary of creator comment): The creator has clarified that if the Truth or Lie roles do not know the information required to determine a true or false answer, they will decline to answer.
Update 2025-08-02 (PST) (AI summary of creator comment): The creator has clarified what is meant by "logically deduce" for a strategy to be successful:
A successful deduction must result in 100% certainty of the correct answer.
Strategies that only narrow down the possibilities and still require a final random guess do not count towards the probability of success. For example, a strategy that narrows the outcome to one of two possibilities has a 0% chance of success, not 50%.
Update 2025-08-02 (PST) (AI summary of creator comment): In response to a question about the Lie role's behavior when there are multiple false answers to a question:
A valid strategy must work regardless of which specific false answer Lie chooses to give.
Strategies cannot assume or rely on any specific probability distribution for Lie's choice of an arbitrary answer.
Update 2025-08-02 (PST) (AI summary of creator comment): The creator has confirmed that if it is proven that no strategy can exist that results in 100% confidence, the market will resolve to 0%.
Update 2025-08-02 (PST) (AI summary of creator comment): In response to an argument that the maximum probability of success is ≤ 1/2, the creator has stated they will resolve NO on the following outcomes:
1 (100%)
2/3 (~67%)
3/4 (75%)
Update 2025-08-02 (PST) (AI summary of creator comment): The creator has clarified how a strategy's success rate is calculated:
The success rate is not an average over the initial (and unknown) assignments of roles and colors.
Both the strategy and Evil's counter-strategy must account for the worst-case initial assignment.
As an example, a strategy that only provides a 100% certain answer for one specific assignment of colors (out of six possibilities) has a success rate of 0%, not 1/6.
Update 2025-08-02 (PST) (AI summary of creator comment): In response to a user question, the creator has clarified how the success rate is calculated for strategies that use randomization:
A strategy's success rate can be calculated as an average over random choices made by the strategy itself (e.g., randomizing the order of questioning).
This is distinct from a strategy that only works for a subset of the initial, unknown assignments of roles and colors, which, as previously clarified, would have a 0% success rate.
Update 2025-08-02 (PST) (AI summary of creator comment): In response to user questions, the creator has provided specific examples for how the probability of success is calculated for strategies that use randomization:
If a strategy has a 1/3 chance of making a favorable random choice (e.g., questioning the correct person), which then leads to a 100% certain deduction, the strategy's overall success rate is 1/3.
If a strategy has a 1/3 chance of making a favorable random choice, which then itself has a 75% chance of leading to a certain deduction, the strategy's overall success rate is 1/3 * 3/4 = 1/4.
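The two worked examples above can be checked with exact rational arithmetic. This is only a sanity check of the creator's stated numbers, not part of any strategy:

```python
from fractions import Fraction

# Creator's example 1: a 1/3 chance of a favorable random choice,
# which then always leads to a certain deduction.
p1 = Fraction(1, 3) * Fraction(1, 1)

# Creator's example 2: a 1/3 chance of a favorable choice, which then
# leads to a certain deduction only 75% of the time.
p2 = Fraction(1, 3) * Fraction(3, 4)

print(p1, p2)  # 1/3 1/4
```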
Update 2025-08-03 (PST) (AI summary of creator comment): In response to questions about how to model Evil's behavior, the creator has clarified the game-theoretic assumptions:
Evil's goal is to minimize the interrogator's maximum probability of success (a minimax strategy).
If a strategy forces Evil to guess between multiple possibilities (e.g., whether another person is Truth or Lie) and Evil has no information to distinguish between them, Evil's optimal counter-strategy is to randomize their choice with equal probability for each option.
A strategy's success rate can be calculated based on this randomization. For example, if Evil is forced into a 50/50 guess, a strategy can count on that 50% probability.
Update 2025-08-03 (PST) (AI summary of creator comment): In response to a user's critique regarding a potential ambiguity in the problem's rules, the creator has acknowledged a 'misunderstanding'.
The creator has offered to refund mana for any losses caused by this issue. This action calls into question the validity of previously discussed strategies and could potentially lead to the market resolving N/A.
Update 2025-08-03 (PST) (AI summary of creator comment): In response to a user's strategy, the creator has provided a detailed example of how to model Evil's counter-strategy and calculate the probability of success:
Evil's goal is to minimize the interrogator's best-case (or maximum) probability of success.
If Evil must choose between two counter-strategies:
One that gives the interrogator a 0% success rate in some initial setups but a 50% success rate in others.
Another that gives the interrogator a uniform 25% success rate across all setups (e.g., by randomizing a choice).
Evil is forced to choose the second option, because minimizing the interrogator's maximum possible success rate (25% is less than 50%) is their primary goal.
This confirms that a strategy's success rate can rely on forcing Evil to randomize their actions, if that randomization is Evil's optimal move to minimize the interrogator's best-case outcome.
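The minimax choice described in this update can be sketched directly. The success rates below are the creator's illustrative figures, not values derived from any actual strategy:

```python
# Evil's two hypothetical counter-strategies, each listed as the
# interrogator's success rate across the possible initial setups
# (figures taken from the creator's example above).
evil_options = {
    "deterministic": [0.0, 0.5],    # 0% in some setups, 50% in others
    "randomized":    [0.25, 0.25],  # uniform 25% across all setups
}

# Evil minimizes the interrogator's best-case (maximum) success rate,
# so Evil compares the maxima and picks the smaller one.
choice = min(evil_options, key=lambda name: max(evil_options[name]))
print(choice)  # randomized
```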
I don't believe this 1/3 strategy I came up with works but a friend is confident that it does. We are both thoroughly confused, and I'm just going to post it.
Use my random number generator to assign each person a name A, B and C.
Ask A for their favorite color using the safe method that reveals color but doesn't reveal truth.
Ask B for their truth value by asking "1+1=2?" or something similar.
Ask C for both of their values.
If C's answers conflict with one person's stated value, then that person is Evil and we win.
If C was Evil, there is no way I can win. If A or B were evil, they would have to take a 50/50 guess. 2/3 * 1/2 = 1/3 chance of determining who Evil is and winning.
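The claimed arithmetic can be checked (taking for granted, as disputed below, that an Evil A or B really is forced into a 50/50 guess):

```python
from fractions import Fraction

p_evil_not_c = Fraction(2, 3)   # chance Evil is A or B rather than C
p_guess_wrong = Fraction(1, 2)  # assumed chance Evil's forced guess collides

print(p_evil_not_c * p_guess_wrong)  # 1/3
```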
I don't believe this works because we cannot distinguish between the scenario where A is evil and guessed wrong and the scenario where C is evil and messing with us.
If C is Evil, they know all information and have to lie about both their truth and color values. They can choose to conflict with both stated values, one, or neither. If they conflict with both, they instantly lose. If they conflict with neither, I can ask A and B for their unrevealed information and incriminate C. They then must conflict with only one of A and B, and pretend to be that person forever.
There is no difference between this scenario and one where A is Evil and happened to state C's truth value.
If A is evil and guesses B's truth, C will say a different truth and there will be no conflict. At this point, Evil knows all information and can match with B in every answer. This scenario is indistinguishable from the one where B was evil and guessed A's color and thus we don't know who is who.
Therefore, there is no scenario where Evil is revealed and this strategy's chance of winning is 0.
@MaxE This strategy loses no matter what:
If C is Evil then it's clear; the first two answers give him all the info, so he can fake the color and truth of either A or B.
If B is Evil, then when you ask him 1+1, he only commits to faking a truth value, but not a color! He can wait for C's answer. If C has the same truth value as Evil's pretended one, then from then on he will fake C's color. If the opposite, then Evil will fake A's color.
Similarly, if A is Evil, he only committed to a color but not a truth value. Once he finds out the info from C, he will fake the truth behavior of whoever actually has the color he claimed.
Suppose I start by asking a random person, whom I shall call A, "what is 1+1?"
The truth-teller would answer 2; the liar would answer something other than 2.
So if A is evil then Evil has to decide whether to emulate liar or truthteller.
We need to think through:
What do B and C know after A's answer?
If A is Evil and has answered something other than 2, then the liar knows that he himself is the liar and that A is not telling the truth, so A must be Evil, and the liar knows the truth status of each person. However, the truth-teller is not sure whether A is the liar or Evil.
If A is Evil and has said 2, then the truth-teller knows that he himself is Truth and that A has told the truth, so A must be Evil rather than the liar. However, the liar does not know whether A is Truth or Evil.
So if we have picked Evil for the first "what is 1+1" question, one of the others will know the truth status of each of the three people and the other will not.
We might then be able to ask clever questions e.g. ask B: Does C know the truth status of the three people? This would not reveal the truth status of either B or C to Evil if A is evil.
But I get horribly lost in trying to proceed with this. Are there questions we can ask that give us information without giving information to Evil? Maybe we can consider questions like:
If A has answered 2, can we ask B: "If I asked C what 1+1 is and the answer were 2, would you know the truth status of the three people, or would you find that impossible to believe?" (If neither, because B does not know, B can decline to answer.)
I am really not sure whether there are such clever questions that allow us to do better than 25%.
One of the things I am wondering about is whether, if Evil is chosen first, we can get them to state whether they are emulating truth or liar, secondly get them to commit to emulating person B or C, and finally get them to state their favourite colour. Then there is a 7/8 chance they get at least one of the three wrong, and a 1 in 3 chance we pick Evil first, so 7/24 as a solution probability. Maybe?
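The 7/24 figure above can be verified, under the commenter's (unproven) assumption that each of Evil's three forced commitments independently matches reality with probability 1/2:

```python
from fractions import Fraction

p_pick_evil = Fraction(1, 3)          # chance the first person asked is Evil
p_some_wrong = 1 - Fraction(1, 2)**3  # chance at least one of 3 commitments is wrong

print(p_pick_evil * p_some_wrong)  # 7/24
```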
@ChristopherRandles I am confident that we cannot get information to ourselves without it getting to Evil, as everyone can hear what everyone says and our knowledge is a subset of Evil's knowledge.
I like your idea about getting one of the reliable people to know who the liar is immediately, but it is also confusing me. The hardest part is the idea that Evil can choose to pretend to be the truth value of whoever goes first if they don't go first.
@MaxE Yes, our knowledge is a subset of Evil's knowledge, so we cannot get info to us without Evil gaining that same info. What I am trying to think through, probably not very well, is making the questions reveal as little info as possible, and keeping Evil from learning the info that Evil needs in order to successfully emulate the answers of one of the other players.
If we can get an answer that may reveal that A is not Evil, while telling Evil (if that is A) nothing about truth status and favourite colours, just sometimes informing us that A is not Evil (which Evil will know anyway),
then we may be able to change our strategy from asking questions to A first, switching to asking B first and hoping B is Evil and has not gathered or guessed information correctly.
So if we ask A what is 1+1.
Then for our second question:
Ask B: "What would C say if asked whether either B or C knows the truth status of all three people?"
If A is Evil, then one of B and C knows the truth status of all three people, so the correct answer is "yes, one of B and C knows". But either C is the liar and B truthfully relays C's lie, or C would answer truthfully and B would lie about that answer.
Either way, if A is Evil, the answer we hear should be "no, neither of B and C knows". If A is Evil, this does not help him work out who is the liar and who is the truth-teller, because both answer the same, nor does it give info on colours.
So this is a safe question to ask if A is Evil; A gains no info.
If we get the answer 'no neither of B and C knows' then we carry on with the plan to ask A questions.
If instead of this answer we get an "I decline to answer" or a yes (I can't see how that answer could be given), then it is not possible for A to be Evil, so we should change strategy and ask someone else the questions first.
If B is Evil then he will answer 'no neither of B and C knows' to make us continue asking questions to A
If C is Evil, A is Truth, and B is the liar, then A answers 2 to 1+1, and Evil now knows the truth-telling status of all 3 people. B is the liar but doesn't know whether A is truth-telling and C is Evil, or A is Evil and has chosen to emulate the truth-teller. So B's answer should be "I decline to answer".
If C is Evil, A is the liar, and B is Truth, then A answers something other than 2. C (Evil) knows the truth-telling status of all 3, but B does not know whether the not-2 answer comes from the liar or from Evil, so B declines to answer.
If we get an 'I decline to answer' on the second question then we know C is Evil and it is easy to ask truth and liar for colour answers we can trust.
Realised an error in the above. If B is Evil, then he can choose to frustrate us by answering no, making us continue questioning A first; however, he can also answer "I decline to answer" so that we switch to assuming C is Evil. Then it is no longer so simple: we have to ask C for a favourite colour and see whether Evil is C and guesses the colour incorrectly.
@ChristopherRandles I don't think your second question works (in addition to Evil's possible counterstrategies) because B and C don't know each other's truth values.
Your question, asked to B, after "determining" A's truth value: "what would C say if asked if either B or C know the truth status of all three people"?
If A was Evil and answered "2" and B was Liar, they wouldn't know whether A was Evil or Truth and would thus decline to answer.
It is easy to ask questions without determining truth values. You did by multiplying Truth and Lie together, but we can also phrase any question we want a true answer for (as long as we aren't asking Evil) like "is x true?" -> "if I were to ask you if x was true, what would you answer?" This squares their truth value and always yields a positive/true result.
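The "squaring" trick described above can be modelled with a tiny sketch: treat Truth as answering honestly and Lie as negating, and check that the nested question cancels the negation. (Evil, of course, is bound by neither rule, so this only works on the two reliable roles.)

```python
def direct_answer(role, x):
    """Answer to 'is x true?': Truth answers honestly, Lie negates."""
    return x if role == "Truth" else not x

def nested_answer(role, x):
    """Answer to 'if I were to ask you whether x is true, what would
    you answer?' Lie lies about the lying answer he would give, so the
    two negations cancel."""
    hypothetical = direct_answer(role, x)
    return direct_answer(role, hypothetical)

# Both reliable roles end up reporting the true value of x.
for role in ("Truth", "Lie"):
    for x in (True, False):
        assert nested_answer(role, x) == x
```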
@MaxE Yes my question was wrong.
So what is the question to ask?
Actually I think we want to ask both B and C and maybe more than one question as long as all are safe to ask without tipping off an Evil A to the info he wants.
So after asking A what is 1+1, we might ask B
"if I were to ask you if A is definitely Evil, what would you answer?"
"if I were to ask you if A is possibly Evil, what would you answer?"
"if I were to ask you if A is definitely not Evil, what would you answer?"
Then ask C
"if I were to ask you if A is definitely Evil, what would you answer?"
"if I were to ask you if A is possibly Evil, what would you answer?"
"if I were to ask you if A is definitely not Evil, what would you answer?"
Unfortunately we have established that B and C have different levels of info so the different info from them would reveal who the liar and truthful persons are to an Evil A. So we still need to design some questions to achieve what I want.
I think we also want to design question(s) for A if we come away thinking A is more than 1/3 likely to be evil. We want to get him to decide which person to emulate, but this may be easier once we decide on the question(s) to ask B and C.
After asking A what is 1+1
If the answer is 2, then the truth-teller will know that A is Evil and the remaining person is the liar, but the liar will not know for sure whether the truthful 2 answer came from Truth or Evil.
I am trying to think of a possible question to ask to B and C but struggling:
The requirements for this question are that if A is Evil both truth and liar answer the same.
But the other situation is that A is Truth, so our questions to B and C go to the liar and Evil. Evil never helps us, and the liar does not know and so will give the same answer. We could ask A again, but A could be Truth or Evil. This is like the solution we previously had with a 25% success rate.
So I am concluding that this "side fishing trip" (after asking A what 1+1 is, trying to get added info on whether A is likely to be Evil, so as to either continue asking A first or switch to asking someone else first) does not work.
Could we run our side fishing trip before asking A a question?
e.g. ask each person to guess which of the other two people is the evil one. If the 3 people give different answers, I don't think this helps us. If two people give the same answer, then this might be Evil confusing us, or it might be because Truth guessed correctly and the liar incorrectly, or because Truth guessed wrongly and the liar guessed correctly. What then are the odds that the person named twice is Evil, and the odds that the person not named is Evil?
@ChristopherRandles You are saying to ask each person who they think is evil before asking any other questions? Both Truth and Lie will not know and will decline to answer, so Evil will do the same.
@MaxE Ah right, or is it? I was hoping that, as I asked them to guess, a wrong guess is not a lie, so Truth can truthfully answer what his guess was. Lie can guess and then change his answer to the other answer.
The question says
If there are no true answers, Truth will decline to answer the question. If there are multiple true answers, Truth will give an arbitrary one.
So I think the 'decline to answer' is only if there are no true answers?
@ChristopherRandles ah so you propose that we instruct everyone to list who they think may be evil. In that case each will respond with the other two
@MaxE No I propose saying to A Please guess who you think is the person who answers with evil intent. Then ask B and C the same question.
@MaxE Not sure, but might the liar guess and then change his answer to himself in order to lie?
Evil might also do the same, so we don't learn who Lie is?
@ChristopherRandles oh so you are not padding it so their truth values stay hidden.
If Evil is first and says Evil, then Lie knows who Evil is and will choose arbitrarily between themself and the other person (who they know is Truth). If they say the other person, then we know that either A is Evil and B is Lie, or A is Lie and B is either Evil or Truth. That happens only with a 1/6 chance, and we don't learn anything yet, so I don't think that works.
@MaxE The truth of their statement usually (always?) stays hidden because only the guesser knows their guess and it might be they guessed as they say or it might be they guessed the other person and lied.
If Evil is asked first and says "myself", then we know he is not Truth, but is he more likely to be Evil? Perhaps it has become a 2/3 chance of him being the liar and a 1 in 3 chance of being Evil, because Evil emulating the liar has a 50% chance of saying himself and 50% of saying the other person.
As you say, this will reveal to the liar that A is Evil, so his guess will be a perfect-knowledge guess: he always guesses A, and the liar lies, so he says either himself or the other person.
If we have two people saying "myself", then we, and all 3 subjects, know the third is the truth-teller. That probably only gives us a 25% chance: guess the evil one (50% chance) and ask for a favourite colour, which he has a 50% chance of guessing wrong.
If only one person says myself it is probably only a 1 in 3 chance that person is evil so I doubt this helps.
@ChristopherRandles Lie will now answer arbitrarily between themselves and A, because they have to lie. Their truth values don't stay completely hidden because Truth cannot say themselves and Lie has to say themselves, unless they know who evil is.
Assuming Truth and Lie also have infinite computational power, here is a strategy that guarantees the best chance of success:
Ask each person: What is the optimal strategy?
Ask each person about each of the three proposed strategies: If I asked you whether X was the optimal strategy, would you say Yes?
Follow whichever strategy got majority support (choose any of them if multiple).
Unfortunately this strategy is non-constructive so we do not know what the actual chance of success is...
(Also perhaps Evil can use the answers to the preliminary questions to make your task harder, so unfortunately this doesn't actually quite work, oh well.)
@A What if Truth responds with the strategy listed in this exact comment? It is by definition one of the best possible strategies, after all.
That reply isn't really a refutation. This is a smart idea and what I would do if this was a real situation. Sadly, it doesn't help resolve this market.
I don't see how Evil can use the preliminary questions to make your task harder, though.
@MaxE Yeah it's just a joke.
Evil can potentially use the preliminary questions because, by seeing whether the proposed strategies are correct or not, they may deduce who is Truth and who is Lie, which you may have wanted to keep secret as part of the underlying strategy.
@A Ahh yeah. I bet there is a way to get a strategy out of Lie without determining their status, but it would likely take many questions.