
Popular for this market = 10 million+ human users
Personal texts = DMs, private chats, WhatsApp chats, Discord DMs, LessWrong DMs, etc.
🏅 Top traders
| # | Name | Total profit |
|---|---|---|
| 1 | | Ṁ932 |
| 2 | | Ṁ191 |
| 3 | | Ṁ86 |
| 4 | | Ṁ60 |
| 5 | | Ṁ34 |
@Quroe I see some examples of people doing this using major LLMs. I don't know those LLMs' user numbers, but I'm going to assume they're in the ballpark, because it's a silly gotcha if the biggest ones don't count for this. Resolving.
@Stralor A creator comment from a market with an earlier resolution date:
Re: Para 2 -> It has to be fine-tuned on people's personal data, and the application doing that has to be popular - which I don't think has happened
If any user-performed fine-tuning counted, these markets would have been an easy yes!
@chrisjbillington Ah, I see. The question is ambiguous then. It's not "will people fine-tune LLMs to use their own personal language", it's "will LLMs use personal data to fine-tune their algorithms". I'll unresolve; I have no idea how we'd find this evidence without, like, a major lawsuit filing.
@Stralor I think the implication was that you'd deliberately sign up to a service that fine-tuned on your text messages, so that you could more easily chat with an LLM that has a lot of context about you.
Since then, context windows have grown and different mechanisms for long-term memory have been developed instead (they're not very good, but still). When the markets were made, though, fine-tuning did sound like a plausible way forward for personalisation.
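To make that contrast concrete, here's a minimal sketch of the two personalisation approaches being discussed: fine-tuning on your own messages versus just stuffing them into a long context window. Everything here is hypothetical for illustration - the chat export schema and function names are made up, and the JSONL shape follows the common chat-fine-tuning convention rather than any specific service's API.

```python
import json

# Hypothetical personal-message export: (sender, text) pairs in
# chronological order. This schema is an assumption for illustration,
# not any real app's export format.
chat_log = [
    ("friend", "hey, are we still on for friday?"),
    ("me", "yep! 7pm at the usual place"),
    ("friend", "perfect, see you then"),
]

def to_finetune_jsonl(log, path="personal_chat.jsonl"):
    """Fine-tuning approach: turn each (incoming message, my reply) pair
    into one supervised example in the common chat-fine-tuning JSONL
    shape, so a model could learn to reply in the user's own voice."""
    with open(path, "w") as f:
        for (sender, text), (next_sender, reply) in zip(log, log[1:]):
            if sender != "me" and next_sender == "me":
                example = {"messages": [
                    {"role": "user", "content": text},
                    {"role": "assistant", "content": reply},
                ]}
                f.write(json.dumps(example) + "\n")

def to_context_prompt(log, question):
    """Long-context approach: no training at all, just prepend the raw
    history to the prompt and rely on a large context window."""
    history = "\n".join(f"{sender}: {text}" for sender, text in log)
    return f"Here are my recent messages:\n{history}\n\n{question}"

to_finetune_jsonl(chat_log)
print(to_context_prompt(chat_log, "What are my plans this week?"))
```

The thread's point is roughly that the second pattern (plus memory features) is what popular products actually shipped, while the first, as a consumer-facing service trained on your personal texts, is what the market was asking about.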
I agree. The way I imagined it while trading was that it would be an LLM that is popular by the market's benchmark, and then it has to be specifically designed to train on personal data - to the extent that it would be marketed that way.
However, to play devil's advocate against my financial interests, @chrisjbillington was the mod that resolved that reference market NO, and we both have notable stakes on NO on this one. So it's possible we have conflicts of interest at play.