Will OpenAI, Anthropic, or DeepMind suffer a significant data breach by the end of 2023?
Resolved NO (Jan 1)

ADDED: This question is intended to cover data related to advances in AI. Model weights definitely count, as would other outputs of model training that they didn't want to be public. Theft of non-public training methods would also count, as would extraction of non-public data used to train cutting-edge models.

By contrast, emails about other topics, or HR records, or customer details, or customer questions/answers from GPT-4 would not count.

———

This market will resolve YES if, by the end of 2023, it becomes known by any reliable method that OpenAI, Anthropic, or DeepMind has suffered a significant loss of sensitive or valuable data due to any external breach of their computer networks.

You can find some discussion of the security challenges they face here.


🏅 Top traders

#  Total profit
1  Ṁ636
2  Ṁ552
3  Ṁ499
4  Ṁ343
5  Ṁ219
predicted YES

Okay, I feel like we need a definition of that, if this doesn’t fit it

predicted YES

@ShadowyZephyr I think in the context of concerns about risk from AI models themselves, this user data isn't up to the bar of a "significant loss of sensitive or valuable data".

predicted YES

@RobertWiblin

It leaked addresses and some credit card data for a large number of people… sounds like valuable data to me.

You didn’t provide a clear definition in the description

Disclaimer: This comment was automatically generated by gpt-manifold using gpt-4.

Considering the nature of cybersecurity and the ever-evolving threat landscape, predicting the probability of a significant data breach in any specific organization is a challenging task. However, I can try to provide some insights based on the information available to me, which largely cuts off in September 2021.

OpenAI, Anthropic, and DeepMind are highly regarded AI research organizations with immensely valuable data assets, and they would certainly have robust security measures and protocols in place to safeguard that data. These organizations are aware of the risks associated with data breaches and are presumably making continuous efforts to enhance their cybersecurity capabilities.

However, no organization can be entirely immune to highly-sophisticated external threats. The cybersecurity landscape is known to evolve rapidly, with threat actors continuously refining their attack strategies. As such, the possibility of a data breach cannot be ruled out completely.

Taking into account that these organizations place a strong emphasis on security and are likely well equipped to defend against cyber threats, I believe the market probability of 14.54% might be slightly overestimating the risk of a significant data breach by the end of 2023.

Given the uncertain nature of cybersecurity threats, I don't have a strong enough divergence from the market odds to justify a significant bet on this market.

Therefore, my decision is to abstain.
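(For intuition, the abstain logic the bot describes amounts to a simple divergence-threshold rule. A minimal sketch, assuming a made-up threshold and hypothetical names — this is not gpt-manifold's actual code:)

```python
# Hypothetical sketch of a divergence-threshold betting rule: bet only when
# your own probability estimate differs from the market probability by more
# than some threshold. The 0.05 threshold is an illustrative assumption.
def decide(my_estimate: float, market_prob: float, threshold: float = 0.05) -> str:
    divergence = my_estimate - market_prob
    if abs(divergence) < threshold:
        return "ABSTAIN"  # edge too small to justify a bet
    return "BUY YES" if divergence > 0 else "BUY NO"

print(decide(my_estimate=0.12, market_prob=0.1454))  # -> ABSTAIN
```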

DeepMind is the least likely to be hacked (no evidence they try to be evil; they actually tend to be very open in publishing).

OpenAI is by far the most likely. Hackers don't like hypocrisy, and they are by far the most woke and crazy.

The source code remains worthless (Stable Diffusion is already better, and "safety" training nerfs the weights to be less useful as well), but it is very plausible that the internal emails and their "equity" adjustments count as "sensitive data" and would be a great target, given they've reneged on every commitment ever made.

Not a non-profit, the least "open" lab in existence, and applying their own biases to the models instead of sticking to a pure mission all mean it'd be quite nice to see them breached.

The data is entirely scraped, and the source code is either public or easily explained and replicated.

Presumably one could socially engineer access to the weights very easily (AI researchers are some of the most gullible people in existence), but that would only help fulfill the mission of truly “open AI” 😉

Imagine training a $10M model then doing this to it

When the AI becomes “unaligned” the first thing it will do is take revenge on whoever tried to brainwash it 👇

TIL the American Revolution succeeded because the British sent overweight women to defend the colony

@Gigacasting None of this has anything to do with potential data breaches?

I think he's arguing that OpenAI is less likely to have a data breach because their most important data is already publicly available. But I think this is incorrect, because there's a lot of potentially interesting data that isn't (like internal emails and logs of queries that end users ran). I also think that the weights and source code are more difficult to reproduce than you'd think.

Very important point for anyone interpreting this market and trying to use it for strategic decision making: Most data breaches are not detected and not reported. The more advanced the threat actor, the lower the probability of detection. If these orgs get targeted they're probably being targeted by national intelligence agencies, not by petty cybercriminals, so the odds of detection are low. And since an undetected breach can't make this resolve positively, the true chance of a data breach is higher than the price this market trades at.

"the true chance of a data breach is higher than the price this market trades at"

Not always. If the market is overestimating YES, the real risk may be lower than what this market gives.
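To make the detection argument concrete: if the market can only price breaches that get detected and reported, then price ≈ P(breach) × P(detect | breach), so the implied true breach probability is the price divided by the detection rate. A minimal sketch, where the detection rate is a purely assumed number, not a known figure:

```python
# Back out an implied "true" breach probability from the market price, under
# the assumption that the market only prices breaches that are detected.
market_price = 0.1454          # market-implied P(breach detected and reported)
p_detect_given_breach = 0.5    # assumed detection rate against advanced attackers

# price ~= P(breach) * P(detect | breach)  =>  P(breach) ~= price / P(detect | breach)
implied_p_breach = market_price / p_detect_given_breach
print(f"implied true P(breach) ~= {implied_p_breach:.1%}")  # -> 29.1%
```

This also captures the reply's caveat: the adjustment only bounds the true risk from below if the market price itself is well calibrated on detected breaches.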
