Will Manifold win at hangman?
Feb 28
31% chance

Resolves YES if Manifold users guess all the letters of the word before making 6 mistakes on the following market.

https://manifold.markets/GastonKessler/manifold-plays-hangman

Resolves NO otherwise.

opened a Ṁ1,000 NO at 21% order

@probajoelistic I would normally not move a market this much, and it doesn't match my gut feeling, but every simulation version I've tried agrees. If I'm wrong, it'll be a fun thing to be wrong about

opened a Ṁ250 YES at 28% order

@probajoelistic Unless I make the same mistake in every version as I've already done once 😑

@probajoelistic I do think simulations don't capture the terrain. There is a bit of the Princess Bride scene with the two chalices going on

filled a Ṁ6 YES at 39% order

@JussiVilleHeiskanen I'm curious about what you mean by that. Can you go into more detail?

@Quroe we got a hint from the creator, but was he engaging in amphiboly, was he trying to help us, or neither? We're dealing with a human creator, though, not a word picked stochastically or by a model

@Quroe if we could use fuzzy logic ...

@JussiVilleHeiskanen Ah, I agree then, yes. The more guesses we make, the more we play a game of reading minds rather than purely mathematical odds. I think we might be able to Bayes'-Theorem-but-with-extra-steps this problem out by crossing P(all words being even odds) with P(reading the creator's mind). I'm still trying to devise what this would look like.

I've been trying to figure out how to scrape Google Trends data to probe what the average mental state of a person would be and then compare it to the remaining possible words.
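The "Bayes with extra steps" idea above can be sketched as a weighted average: weight each remaining candidate word by a subjective prior on how likely the creator was to pick it, then combine with per-word solve odds. All numbers below are invented placeholders, not real Trends data or simulation output.

```python
def weighted_win_prob(candidates, prior, win_prob_if_word):
    """P(win) = sum over words of P(word is the secret) * P(we solve it | word)."""
    total = sum(prior(w) for w in candidates)
    return sum(prior(w) / total * win_prob_if_word(w) for w in candidates)

# Toy usage with invented numbers:
cands = ["violin", "bikini", "siding"]
prior = {"violin": 3.0, "bikini": 2.0, "siding": 1.0}.get   # mind-reading weight (made up)
solve = {"violin": 0.9, "bikini": 0.7, "siding": 0.5}.get   # per-word solve odds (made up)
p = weighted_win_prob(cands, prior, solve)  # (3*0.9 + 2*0.7 + 1*0.5) / 6
```

The Google Trends scrape would just replace the `prior` dictionary with real familiarity scores.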

@Quroe one could always "be the fish"

@JussiVilleHeiskanen aside from the possibility of sabotage or further hints, I could weigh probabilities by how likely I think each word is to have been selected given what I know about the market's creation

@probajoelistic

> given what I know about the market's creation

You have my attention. Where are you going with this?

@Quroe No inside info. Just what they said about "not easy, but doable", English level, and a general sense that they're not trying to bamboozle us

opened a Ṁ1,000 NO at 21% order

I love these markets. Manifold is SO BACK. It's the perfect nerd snipe. I've created a spreadsheet and multiple Python scripts for this thing

@probajoelistic Agreed. *high five

bought Ṁ10 NO

Ironically, the correct N guess seems to bring the win rate down, since it increases the likelihood of an "-ing" ending, which means there are only two letters left but still relatively many possibilities.

filled a Ṁ79 YES at 68% order

I've run a Monte Carlo simulation with 10,000 trials, assuming a random 6-letter word is chosen. Given that we have guessed wrong once with "e" and have 5 tries left, we have a 68.13% chance of winning if we select the best-odds letter each time from my algorithm.

However, people can choose to actively sabotage the game and try to make us lose. That makes this... interesting.
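The Monte Carlo setup described above might look something like this: draw a secret word at random, then play a greedy guesser that always picks the most common remaining letter among still-consistent candidates, with 6 allowed mistakes. The word list is a placeholder; the actual scripts and word list were not shared in the thread.

```python
import random
from collections import Counter

WORDS = ["violin", "siding", "living", "riding", "hiding", "timing"]  # placeholder list

def consistent(word, pattern, wrong):
    """True if `word` could still be the secret given the revealed pattern
    and the set of wrongly guessed letters."""
    if len(word) != len(pattern):
        return False
    shown = set(pattern) - {"_"}
    for w, p in zip(word, pattern):
        if p != "_" and w != p:        # revealed position must match
            return False
        if p == "_" and w in shown:    # a revealed letter can't hide in a blank
            return False
    return not any(c in word for c in wrong)

def play(secret, words, max_wrong=6):
    """Play one game with the greedy 'most common remaining letter' strategy."""
    pattern, wrong = "_" * len(secret), set()
    while "_" in pattern and len(wrong) < max_wrong:
        cands = [w for w in words if consistent(w, pattern, wrong)]
        counts = Counter(c for w in cands for c in set(w) if c not in pattern)
        guess = counts.most_common(1)[0][0]
        if guess in secret:
            pattern = "".join(c if c == guess else p for c, p in zip(secret, pattern))
        else:
            wrong.add(guess)
    return "_" not in pattern

def win_rate(words, trials=10_000):
    """Estimate P(win) assuming the secret is drawn uniformly from `words`."""
    return sum(play(random.choice(words), words) for _ in range(trials)) / trials
```

With a single-word list the greedy strategy always wins, which is a handy sanity check before scaling up the dictionary.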

filled a Ṁ80 NO at 31% order

At time of post, 5 traders have taken a YES position and 6 have taken NO.

Let's assume that everybody who is on NO is sabotaging and everybody on YES is cooperating.

5 / (5+6) * 68.13% = 30.97%.

Perhaps those are better calibrated odds?
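The adjustment above is just a scaling of the simulated win rate by the assumed cooperating fraction; as a tiny function:

```python
def adjusted_odds(sim_win_rate, yes_traders, no_traders):
    """Scale the simulated win rate by the fraction of traders assumed to cooperate."""
    return yes_traders / (yes_traders + no_traders) * sim_win_rate

round(adjusted_odds(0.6813, 5, 6) * 100, 2)  # 30.97, matching the figure above
```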

@Quroe Isn't making a profit on the main market an incentive not to sabotage for small NO holders?

@GastonKessler I suppose it depends on liquidity between these 2 markets, but I don't know if anybody is being that sweaty.

You've nerd sniped me with this game.

There's also the chance that somebody ticks a letter's odds up just because they want to. Whoever has the last say on the market is who decides what worldline we play.

@Quroe I've added Ṁ240 of liquidity to the other market to disincentivise sabotaging

@GastonKessler I think I may be wrong, but I think the breakeven point for incentives is when this market is 1/26th the liquidity of the other market, given that the liquidity is ideally spread evenly between 26 letters. (It's not, but let's simplify and assume cows are spherical and exude milk in all directions.)

Given that this market has 407 mana in liquidity at time of post, the other market would need at least 10,582 mana across the board in total liquidity to compensate and reduce the incentive to sabotage to be breakeven at worst.

However, in reality, most of the liquidity on the other market is from me betting Q into the ground, so each letter is weighted differently. This changes the math more than I'd like to think about.
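Under the spherical-cow simplification above, the breakeven condition is just a factor of 26:

```python
def breakeven_total_liquidity(this_market_liquidity, letters=26):
    """Total liquidity the letter market needs (spread evenly over 26 letters)
    so that sabotage is at best breakeven, per the simplification above."""
    return this_market_liquidity * letters

breakeven_total_liquidity(407)  # 10582, matching the figure above
```

(As the next reply points out, 407 may be trade volume rather than liquidity, which would change the input but not the rule.)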

@Quroe Isn't 407 the total trade volume? I believe this market only has a Ṁ100 liquidity pool. Dropping this market to 1% (assuming you could be sure your sabotage would win) would currently give a maximum profit of Ṁ47, and anyone else could make somewhat better profits by betting it back up and trying to find the word.

@GastonKessler I think you're right. Maybe the game is still afoot!

I am still of the opinion that 31% seems calibrated at this stage. "Never attribute to malice..." and all that.

filled a Ṁ38 NO at 29% order

My odds at "______" and failed A, E:
(win rate in 10,000 strategy optimized Monte Carlo simulations) x (YES traders) / (all traders) =
62.57% x 8 / (8+9) = 29.44%
My odds are 29.44% win rate for our current game's worldline.

I learned my program had a bug. I updated my Monte Carlo simulation to correct the bug. This does not seem to change my outlook much.
(win rate in 10,000 strategy updated-optimized Monte Carlo simulations) x (YES traders) / (all traders) =
62.46% x 8 / (8+9) = 29.39%
My odds are 29.39% win rate for our current game's worldline.

I learned my Monte Carlo program had a bug where it wouldn't filter words like "bikini" if we knew "_i_i__". Now it does. I ran this new program at each stage of our game with 10,000 simulations each.

Start: 65.16%
Guessed E wrong: 61.52%
Guessed A wrong: 54.57%
Guessed I right (present): 44.37%

I then ran it with a list of words where I manually filtered words that didn't seem like words the average person would know from the list of remaining words at our present stage in the game.

A, E, and I known, post manual filter: 56.31% win rate

Hilariously, guessing "I" correctly may have made our chances of winning fall! Manually filtering the list may have clawed back some of our odds.

I'm also now convinced that there is a strong enough army of people trying to win this game instead of sabotaging it, making sabotage negligible. I think 56% looks calibrated enough.
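The bug described above (failing to rule out "bikini" given "_i_i__") comes down to one missing clause in the candidate filter: a letter that has been revealed must not also appear in a blank position, because a correct guess reveals every occurrence of that letter. A sketch of the corrected filter, with the known wrong letters A and E:

```python
def fits(word, pattern, wrong_letters):
    """Corrected candidate filter: revealed positions must match, a revealed
    letter may not also hide in a blank, and no wrong letter may appear."""
    if len(word) != len(pattern):
        return False
    shown = set(pattern) - {"_"}
    for w, p in zip(word, pattern):
        if p != "_" and w != p:
            return False
        if p == "_" and w in shown:   # this clause is the bug fix
            return False
    return not any(c in word for c in wrong_letters)

fits("bikini", "_i_i__", {"a", "e"})  # False: the final 'i' hides in a blank
fits("siding", "_i_i__", {"a", "e"})  # True
```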

10,000 Monte Carlo simulation trials

"_i_in_"

2/6 wrong guesses: A and E

Full population, 62 words: 32.28% win rate

Manually culled population, 43 words: 46.10% win rate

@Quroe Why do you need 10000 trials when there are only ~60 words left?

@BoltonBailey Because I don't want to modify my code. 😆

A better approach at this stage might be to test each remaining word one by one and see what percentage of words gets correctly solved with our "choose most likely letter" strategy.

This game and my subsequent life choices have ruined my sleep schedule enough at this point. If somebody can scrape an extra percent off of me by taking this approach, they've earned it.

@Quroe 👍 Never touch code that works

I touched the code. I go word by word now, seeing if "choose most likely letter" solves it.

After manually culling my list for words that don't sound like common words, my odds are 46.15%.

This assumes that each word remaining in the possibility list is equally likely to have been chosen. It's not. Our mileage may vary.
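The exhaustive word-by-word version described above replaces sampling entirely: play the greedy strategy once against every remaining candidate and report the fraction solved. A minimal sketch (word list and starting state are placeholders, not the actual culled list):

```python
from collections import Counter

def greedy_solves(secret, words, pattern=None, wrong=(), max_wrong=6):
    """Play the greedy 'most likely letter' strategy against one known secret;
    return True if the word is solved within the mistake budget."""
    pattern = pattern or "_" * len(secret)
    wrong = set(wrong)
    while "_" in pattern and len(wrong) < max_wrong:
        cands = [w for w in words
                 if len(w) == len(pattern)
                 and all(p in ("_", c) for p, c in zip(pattern, w))
                 and not any(c in w for c in wrong)]
        counts = Counter(c for w in cands for c in set(w) if c not in pattern)
        guess = counts.most_common(1)[0][0]
        if guess in secret:
            pattern = "".join(c if c == guess else p for c, p in zip(secret, pattern))
        else:
            wrong.add(guess)
    return "_" not in pattern

def exact_win_rate(words, **state):
    """Fraction of candidate words the greedy strategy solves, each tried once."""
    return sum(greedy_solves(w, words, **state) for w in words) / len(words)
```

Because every candidate is tested exactly once, this gives the exact win rate under the equal-likelihood assumption, with no Monte Carlo noise.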
