Resolves YES if Manifold users guess all the letters of the word before making 6 mistakes on the following market.
https://manifold.markets/GastonKessler/manifold-plays-hangman
Resolves NO otherwise.
@probajoelistic I would normally not move a market this much, and it doesn't match my gut feeling, but every simulation version I've tried agrees. If I'm wrong, it'll be a fun thing to be wrong about
@probajoelistic Unless I make the same mistake in every version as I've already done once 😑
@probajoelistic I do think simulations don't capture the terrain. There is a bit of the Princess Bride scene with the two chalices going on
@JussiVilleHeiskanen I'm curious about what you mean by that. Can you go into more detail?
@Quroe We got a hint from the creator, but was he engaging in amphiboly, was he trying to help us, or neither? We are dealing with a human creator, though, not a word picked stochastically or by a model.
@JussiVilleHeiskanen Ah, I agree then, yes. The more guesses we make, the more we play the game of reading minds rather than purely mathematical odds. I think we might be able to Bayes'-Theorem-but-with-extra-steps this problem out by crossing P(all words being equally likely) with P(reading the creator's mind). I'm still trying to devise what this would look like.
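A toy sketch of what that crossing could look like: multiply a uniform prior over the remaining words by subjective "mind-reading" weights and renormalize. The words and weights below are invented purely for illustration.

```python
# Hypothetical example: blend a uniform prior over remaining candidate words
# with subjective "creator mind-reading" weights. All numbers are made up.
candidates = ["bikini", "violin", "finish"]              # invented word list
mind_weight = {"bikini": 0.2, "violin": 0.6, "finish": 0.2}  # subjective guesses

uniform = {w: 1 / len(candidates) for w in candidates}
# Multiply the two distributions and renormalize (Bayes-with-extra-steps).
unnorm = {w: uniform[w] * mind_weight[w] for w in candidates}
total = sum(unnorm.values())
posterior = {w: p / total for w, p in unnorm.items()}
print(posterior)  # "violin" dominates with posterior 0.6
```

With a uniform prior the posterior just reproduces the mind-reading weights; the blend only matters once the prior itself is non-uniform (e.g. weighted by word frequency).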
I've been trying to figure out how to scrape Google Trends data to probe what the average mental state of a person would be and then compare it to the remaining possible words.
@JussiVilleHeiskanen aside from the possibility of sabotage or further hints, I could weigh probabilities by how likely I think each word is to have been selected given what I know about the market's creation
You have my attention. Where are you going with this?
@Quroe No inside info. Just what they said about "not easy, but doable", English level, and a general sense that they're not trying to bamboozle us
@GastonKessler I suppose it depends on liquidity between these 2 markets, but I don't know if anybody is being that sweaty.
You've nerd sniped me with this game.
@GastonKessler I may be wrong, but I think the break-even point for incentives is when this market has 1/26th the liquidity of the other market, given that the liquidity is ideally spread evenly across 26 letters. (It's not, but let's simplify and assume cows are spherical and exude milk in all directions.)
Given that this market has 407 mana in liquidity at time of post, the other market would need at least 10,582 mana across the board in total liquidity to compensate and reduce the incentive to sabotage to be breakeven at worst.
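The arithmetic above, as a sketch under that even-spread, spherical-cow assumption:

```python
# Break-even estimate under the simplifying assumption that liquidity on the
# letter market is spread evenly across all 26 letters.
this_market_liquidity = 407  # mana in this market at time of post
letters = 26
breakeven = this_market_liquidity * letters
print(breakeven)  # 10582 mana needed on the letter market
```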
However, in reality, most of the liquidity on the other market is from me betting Q into the ground, so each letter is weighted differently. This changes the math more than I'd like to think about.
@Quroe Isn't 407 the total trade volume? I believe this market only has a 100 mana liquidity pool. Dropping this market to 1% (assuming you could be sure your sabotage would win) would currently give a maximum profit of 47 mana, and anyone else could make somewhat better profits by betting it back up and trying to find the word.
My odds with the board at "______" and failed guesses A and E:
(win rate in 10,000 strategy-optimized Monte Carlo simulations) × (YES traders) / (all traders) =
62.57% × 8 / (8 + 9) = 29.44%
My estimated win rate for our current game's worldline is 29.44%.
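A minimal version of that estimate, scaling the simulated win rate by the fraction of traders holding YES as a rough crowd-confidence factor (this is the commenter's own heuristic, not anything Manifold computes):

```python
# Combine a Monte Carlo win rate with the YES-trader fraction as a rough
# crowd-confidence discount, per the formula in the comment above.
def adjusted_odds(sim_win_rate, yes_traders, all_traders):
    return sim_win_rate * yes_traders / all_traders

print(round(adjusted_odds(0.6257, 8, 17) * 100, 2))  # 29.44
```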
I learned my program had a bug. I updated my Monte Carlo simulation to correct the bug. This does not seem to change my outlook much.
(win rate in 10,000 bug-fixed, strategy-optimized Monte Carlo simulations) × (YES traders) / (all traders) =
62.46% × 8 / (8 + 9) = 29.39%
My estimated win rate for our current game's worldline is 29.39%.
I learned my Monte Carlo program had a bug where it wouldn't filter out words like "bikini" when we knew "_i_i__". Now it does. I ran the corrected program at each stage of our game with 10,000 simulations each.
Start: 65.16%
Guessed E wrong: 61.52%
Guessed A wrong: 54.57%
Guessed I right (present): 44.37%
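For reference, the filtering rule that bugfix implies could be sketched like this (hypothetical code, not the actual simulator): a candidate must match the revealed pattern and must not have an already-guessed letter in any unrevealed slot, which is exactly why "bikini" fails against "_i_i__" (its sixth letter is an unrevealed "i").

```python
# A word is still a candidate only if it matches the revealed pattern AND
# has no already-guessed letter sitting in an unrevealed ("_") slot.
def matches(word, pattern, guessed):
    if len(word) != len(pattern):
        return False
    for w, p in zip(word, pattern):
        if p == "_":
            # Unrevealed slot: can't contain a letter we've already guessed.
            if w in guessed:
                return False
        elif w != p:
            return False
    return True

print(matches("bikini", "_i_i__", {"i"}))  # False: the 6th letter is an i
print(matches("finish", "_i_i__", {"i"}))  # True: i appears only where revealed
```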
I then reran it at our present stage of the game with a manually filtered list, removing remaining words that didn't seem like words the average person would know.
A, E, and I known, post manual filter: 56.31% win rate
Hilariously, guessing "I" correctly may have made our chances of winning fall! Manually filtering the list may have clawed back some of our odds.
@BoltonBailey Because I don't want to modify my code. 😆
A better approach at this stage might be to test each remaining word one by one and see what percentage of words gets correctly solved with our "choose the most likely letter" strategy.
This game and my subsequent life choices have ruined my sleep schedule enough at this point. If somebody can scrape an extra percent off of me by taking this approach, they've earned it.
I touched the code. I go word by word now, seeing if "choose most likely letter" solves it.
After manually culling my list for words that don't sound like common words, my odds are 46.15%.
This assumes that each word remaining in the possibility list is equally likely to be chosen. It's not. Our mileage may vary.
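The word-by-word check could be sketched like this: for each candidate as the secret, play hangman deterministically with a "guess the most common remaining letter" strategy and count the fraction solved within 6 mistakes. The three-word list is invented for illustration; the real candidate list and tie-breaking details would differ.

```python
# Toy exact solver: deterministically play hangman against each candidate
# word with a "most common remaining letter" strategy, then report the
# fraction of candidates solved within the mistake limit.
from collections import Counter

def solves(secret, candidates, max_mistakes=6):
    pattern = ["_"] * len(secret)
    guessed, mistakes = set(), 0
    pool = [w for w in candidates if len(w) == len(secret)]
    while mistakes < max_mistakes and "_" in pattern:
        # Most common unguessed letter among words still consistent.
        counts = Counter(c for w in pool for c in set(w) if c not in guessed)
        if not counts:
            return False
        letter = counts.most_common(1)[0][0]
        guessed.add(letter)
        if letter in secret:
            pattern = [c if c in guessed else "_" for c in secret]
        else:
            mistakes += 1
        # Keep words matching the pattern, with no guessed letter in a "_" slot.
        pool = [w for w in pool
                if all((p != "_" and c == p) or (p == "_" and c not in guessed)
                       for p, c in zip(pattern, w))]
    return "_" not in pattern

words = ["finish", "simian", "dining"]  # invented toy list
win_rate = sum(solves(w, words) for w in words) / len(words)
print(win_rate)  # 1.0 on this toy list
```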