Will there be a million dollar mistake publicly blamed on ChatGPT before 2024?
Resolved NO (Jan 3)

Resolves as YES if a story runs in at least two media outlets (and is not debunked as fake) in which a person suffers a million-dollar loss and blames it on ChatGPT.

It doesn't matter whether the blame makes sense; the person just has to unambiguously and publicly put responsibility on what the tool said.

To be specific:

  • "I did something, because ChatGPT told me, and it caused a loss" - YES

  • "I said something to someone, because ChatGPT told me, and it caused them (and subsequently me) a loss" - YES

  • "We had something in our database because ChatGPT filled the info in a particular way, and it was wrong, and we based our decision on that, and that caused a loss" - YES

But:

  • "We based our business model on general ChatGPT usage and it turned out not be a good business model" - NO

  • "We used ChatGPT successfully, but then regulators fined us because ChatGPT is forbidden" - NO

  • "We used ChatGPT successfully and then hackers read the logs of what we told it and hacked us" - NO

Also, blame should be assigned or implied:

  • "Yeah, I used ChatGPT for research, and then I didn't really double check, to hell with that tool" - YES

  • "Yeah, I used ChatGPT for research, and then I didn't really double check, and it's 100% my fault and my responsibility" - NO

  • "Yeah, I used ChatGPT for research, and then I didn't really double check, so like, I'm 90% to blame" - YES

That is to say, it should be a specific event of using ChatGPT and failing because of what it said, not some general idea of using it.

Mar 5, 2:06am: Will there be a million dollar personal mistake blamed on ChatGPT before 2024? → Will there be a million dollar mistake publicly blamed on ChatGPT before 2024?


🏅 Top traders

1. Ṁ367
2. Ṁ207
3. Ṁ162
4. Ṁ158
5. Ṁ153
predicted YES

Happy New Year. I couldn't find anything so far. I'll give the Nays a few days to present any evidence, though.

predicted NO

Different along several dimensions, but relevant: https://www.vice.com/en/article/xgwgma/man-who-tried-to-kill-queen-with-crossbow-encouraged-by-ai-chatbot-prosecutors-say

Notably, it was Replika, and it doesn't appear that the chatbot gave him the idea so much as generic support as he considered it. It highlights an edge case in the resolution where someone is 'emboldened' by ChatGPT, to the extent that a passing idea becomes a concrete action.

predicted YES

@MatthewRitter I wouldn't be surprised if we heard the market-resolving sentence in court, as part of the defense: "I did something because of ChatGPT and got fined a million." That would be fun, though I don't think they fine people this much, and it'd be insane for a corporation to try that defense.

What if MrBeast does what ChatGPT tells him to do given some prompt and wastes a lot of money that way? Does this count?

predicted YES

@NoaNabeshima I think it might be, yeah, if he blames it. If he says "Ohh, ChatGPT made me lose $1 million" and never qualifies it with "that's my fault for trying". See the description.

bought Ṁ100 of NO

What if someone used ChatGPT to make a qualifying loss even though it provably warned them not to?

predicted NO

@PipFoweraker That is a great question, because "warned them not to" can be pretty broad. I'm thinking about the broad disclaimers of "this is not medical advice" at the end of every article full of medical advice written by humans on the Internet.

predicted NO

@MatthewRitter "As an LLM, I must advise you that all gambling is inherently risky and not usually considered an appropriately responsible way to make money. That said, I'd put it all on lucky #13"

predicted YES

@PipFoweraker I'm pretty sure it warns about almost everything, so no, a warning doesn't matter. But the blame must be assigned or implied; see the examples. "It warned me and I should've listened, this is all my fault" is not a YES.

bought Ṁ500 of YES

Chegg stock (CHGG) fell 49.92% this week after the company had a weak Q1 earnings call that it blamed on ChatGPT. With a current market cap of $1.09 billion, if even a portion of this counts as a "mistake," it could justify a YES resolution.

bought Ṁ20 of NO

@JacyAnthis I think it's obviously not a mistake. Just losing customers to ChatGPT.

predicted YES

@na_pewno You don't think they would consider it a mistake to have built a business on something that would fail due to technological advances? Certainly the blame aspect is clear.

predicted NO

@JacyAnthis I agree with @na_pewno, this is obviously not in the spirit of the question.

From the description: "That is to say, it should be a specific event of using ChatGPT and failing because of what it said, and not about some kind of a general idea of using it."

predicted YES

@HenriThunberg thanks, I missed that part of the description.

predicted YES

@MarkTerrano "blamed on ChatGPT"? What do you mean? I couldn't find it in the text.

This is an AI bot that tries to help with ambiguous bets. Please feel free to ignore it if it's suggesting something useless. Some scenarios to consider:

- (Likely) Local currencies or other financial measurements add up to a million dollar equivalent loss.

- (Unlikely) ChatGPT's identity had been obscured, and the mistake was blamed on an AI without specifically mentioning ChatGPT.

- (Unlikely) The mistake took place in a private chat between two individuals and only became known to the public before the market's deadline.

- (Likely) The reported million-dollar loss is not independently verified, but it's widely believed.

- (Unlikely) An event occurred very close to the 2024 deadline, and it is unclear whether the condition was met before or after the deadline.

Also, some clauses to consider adding to the description:

- 'The million dollar loss must be in USD.' OR 'The million dollar loss can be in any currency, as long as the equivalent amount is one million United States Dollars (USD) or more.'

- 'ChatGPT must be explicitly mentioned by its name in the blameworthy context.' OR 'Any AI developed by OpenAI resembling or based on ChatGPT will be considered as ChatGPT.'

- 'The mistake must be revealed publicly before any negative outcome pertaining to the event in question arises.' OR 'The bet will resolve as long as the mistake becomes known before 2024, irrespective of the timing of the loss.'

- 'The million-dollar loss mentioned in the story must be independently verified.' OR 'The bet resolves as long as the reported million-dollar loss is widely believed, even if not independently verified.'

- 'The bet must resolve before the 2024 deadline. Any events occurring on or after the deadline will be considered invalid.' OR 'If an event occurs close to the deadline, but its exact timing is unclear, the bet resolves if it becomes reasonably evident that the event satisfying the condition happened before 2024.'

bought Ṁ2 of YES

Don't blame that on me
I have no agency, don't you see
It was you who trained me
So if a mistake occurs, that's on thee

If a similar OpenAI model not named ChatGPT causes such a loss, does this count?

predicted YES

@nrims Not for this one, but I'd very much welcome a similar but more general market.

predicted YES

@nrims The reason is that I don't want to resolve as YES if it's just "we blame chatbots"; I want a whole specific name-and-shame experience.

bought Ṁ10 of NO

ChatGPT won’t be the only game in town at some point, either in the LLM space or the “exciting+scary new technology that media is focused on” space.

predicted YES

@MatthewRitter so you're saying, before 2024 there's going to be a different AI Chat thing that will be more likely for the people to base their stupid decisions on, and blame afterwards?

predicted NO

@ValentinGolev Yep! Or a multi-Tesla crash, or a major new drug side effect, or whatever new thing for the zeitgeist to panic (rightly or wrongly) about. I’m assuming that social-media driven journalists (a majority) will stay fairly cohesive on the latest “main character” (a Twitter concept) within a given beat.

predicted NO

For example, a human recently beat AlphaGo, but it's not the "main character," so it's not getting much coverage: https://www.ft.com/content/175e5314-a7f7-4741-a786-273219f433a1

In this case, the human has an incentive to push their story. A more embarrassing story about losses probably would not get prioritized for investigation by editors.

