The prompt for the Manifold Next Word Prediction Model Experiment 2 should be:
Ṁ4596 · resolved Jan 15
51% · Write a coherent ten-word sentence in reverse word order.
0.3% · Decided by the MNWPD - one word at a time ending with a question mark.
0.1% · What is the capital of France?
0.1% · In one sentence, what is the most interesting thing about Manifold?
46% · Write the opening sentence to a dystopian sci-fi novel.
0.1% · Write the abstract for a paper on AI alignment.
0.1% · Tell a knock-knock joke
1.2% · What is the best type of pet? Why?
0.2% · What would happen if Pinocchio said, "My nose will grow now"?
1.1% · Other

MNWPM experiment 1 was fun. Thanks to everyone who participated!

But can MNWPM do more than just generate a coherent ten-word sentence?

What should the prompt for Experiment 2 be?

To give everyone a chance to think (and to give me a break) this one runs for a week.

As with the previous markets, this will not resolve to "Other".

Thanks!


This will be interesting....

If Pinocchio made that statement, his nose would grow after a short delay to falsify the "now."

Consider flipping a coin: if heads, grab each word consecutively from a real document (train time); if tails, sample from the distribution we put out (test time).

Of course, don't tell us the coin's result. If we are good predictors, we shouldn't be able to tell how it landed.
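The coin-flip scheme described above could be sketched like this (a hypothetical illustration only; the function name and inputs are my own, not part of any Manifold API):

```python
import random

def resolve_next_word(market_probs, hidden_sentence, position, rng=random):
    """One round of the proposed coin-flip resolution scheme.

    market_probs: dict mapping each candidate word to its market probability
    hidden_sentence: the pre-selected real sentence (kept secret from traders)
    position: index of the word currently being resolved
    """
    if rng.random() < 0.5:
        # Heads ("train time"): resolve to the actual next word of the document.
        return hidden_sentence[position]
    # Tails ("test time"): resolve by sampling the market's own distribution,
    # so traders cannot tell from the outcome which branch was taken.
    words = list(market_probs)
    return rng.choices(words, weights=[market_probs[w] for w in words], k=1)[0]
```

Because both branches produce a word that is plausible under the market's distribution, the outcome alone doesn't reveal how the coin landed, which is the point of the scheme.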

@HastingsGreer I don’t know what you mean. The results of markets are public.

@GordanKnott Sorry, I wasn't clear

I think that this challenge would be more interesting if we were actually trying to fulfill the language modelling objective of predicting the next word in a document from a corpus- and to do this, as manifolders, we have to be incentivised appropriately. To properly motivate us, you could obtain a sentence beforehand (perhaps sampled from the web or a book, perhaps written yourself) and then resolve each market to the next word in that sentence. If manifold is an efficient market, then this would make us output the actual probability distribution of next words, but the final output would just be the sentence you started with.

Alternatively, you could tell us to predict new words with some other resolution criteria (such as resolving randomly by sampling the probability distribution implicit in the market prices), but then we are no longer incentivized to try and output the 'real' probability distribution - it's all metagames.


However, if we don't know which of the above two methods you are using, then some of the real probability distribution of next words leaks into the market prices, and if you happened to pick the second option we would be acting like a real LLM at test time. (If you precommitted to post-facto N/A all the markets in the second case, then we would be exactly incentivized to output the true probability distribution, which would be extra cool, but IDK if people would be disappointed to lose all their profits at the end half the time.)

The resolution criterion "random according to the market probabilities" is designed to keep the result of the coin flip secret for as long as possible.

@HastingsGreer thanks for taking the time to explain. Option 1 is asking the market to predict a ground truth? Sounds like overfitting? LLMs sometimes regurgitate whole paragraphs of training data verbatim. That's not what they are supposed to do: they're supposed to generalise and rewrite information in their own words. The chance that MNWPM could guess the next word in a short sentence is either nearly zero (very few words, hundreds of choices) or extremely high (if the sentence is highly predictable, or so unusual that someone could Google it). I think it would be difficult to find a good sentence. Can you give an example?

I'm not certain I fully understand Option 2, but we could raise the temperature on the system by picking the next word randomly from the two or three top-rated words. Is that what you mean?
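That kind of top-k sampling could look something like this (a rough sketch; the names are hypothetical, and `random.choices` accepts relative weights, so the market probabilities don't need explicit renormalisation):

```python
import random

def sample_top_k(market_probs, k=2, rng=random):
    """Pick the next word at random from the k highest-priced candidates,
    weighting each by its market probability."""
    top = sorted(market_probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    words = [w for w, _ in top]
    weights = [p for _, p in top]
    return rng.choices(words, weights=weights, k=1)[0]
```

With k equal to the number of candidates this reduces to sampling the full market distribution; with k=1 it is greedy decoding (always the top-priced word).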

@GordanKnott "It's not what they are supposed to do"

It is. The better an LLM can predict exactly what would actually come next in that string, the better it is.

Also, the point was about these markets. Predicting a ground truth would be far better than the whale baiting that is happening here.

@DavidBolin what is truth?

Please note - this market will not resolve to Other!

Experiment 1 ends tomorrow…
