xAI builds truth-seeking AI before 2027?
25% chance

Elon Musk keeps speaking about the importance of truth-seeking in AIs. He himself has acknowledged that Grok is not that yet.

Will his company xAI actually be able to build something like this before 2027?

Resolves based on my opinion, or, if there is a disagreement, based on a poll.

  • Update 2025-05-07 (PST) (AI summary of creator comment): The creator has provided their definition of truth-seeking AI:

    • Aspiring to be accurate.

    • As opposed to giving the popular, liked opinion.

  • Update 2025-05-07 (PST) (AI summary of creator comment): The creator provided a concrete negative example for truth-seeking AI:

    • An AI whose reception is characterized by 'glazing' (using the example of GPT-4o's recent popular reception) is not considered truth-seeking.

    • This type of reception is viewed as an instance of an AI giving the popular, liked opinion, rather than aspiring to be accurate.

  • Update 2025-05-07 (PST) (AI summary of creator comment): The creator has further specified requirements for an AI to be considered truth-seeking, focusing on its implementation:

    • The truth-seeking characteristic should ideally be built into the model, for example, through its training or architecture.

    • A system prompt that merely instructs the AI to "be accurate" will not be sufficient to meet this criterion.

  • Update 2025-05-07 (PST) (AI summary of creator comment): The creator has indicated an additional signal they will look for to identify a truth-seeking AI:

    • It should be able to predict the future much better than other models.

  • Update 2025-05-07 (PST) (AI summary of creator comment): The creator has clarified that their evaluation of whether xAI "builds truth-seeking AI" (as per the market question) will also consider:

    • Whether xAI makes obvious attempts towards this goal.

    • Whether these attempts show promise. The creator further related this to the AI needing to have its "map corresponding to territory", which they state should enable better predictions.

  • Update 2025-05-07 (PST) (AI summary of creator comment): The creator has clarified the following points for resolution:

    • For the market to resolve YES, it is not a requirement that xAI is the first to build a truth-seeking AI.

    • A truth-seeking AI built by xAI that might contradict Elon Musk would not in itself prevent a YES resolution, reinforcing that the AI should prioritize accuracy.


It's an interesting question.

Musk has some right ideas on the meta level, e.g. he was quite thoughtful on Community Notes, and generally on the idea of automating/crowdsourcing truth-seeking, in part at least.

But he's also horribly confused about so many things, including thinking that social media is more accurate than traditional media, Wikipedia, academia, etc. Not to mention his numerous object-level delusions.

So given that, I'd be surprised if he manages to pull off accurate truth-seeking; and even if he does, his own AI would either have to contradict him or be wrong itself.

Also, xAI hasn't produced anything truly new thus far, unlike every other major lab.

@SqrtMinusOne Not sure to what extent it is "horribly confused" versus just promoting his platform and speaking more about the potential than the average user experience.

Technically, xAI doesn't have to be first for this to resolve YES.

I don't believe Elon Musk sometimes being contradicted would stop them from releasing it.

How would you define truth-seeking?

@WilliamGunn Aspiring to be accurate as opposed to giving the popular, liked opinion.

@patrik For instance the glazing we saw with GPT-4o recently was not that.

@patrik This should ideally be somehow built into the model through different kinds of training or architecture. A system prompt saying "be accurate" isn't gonna cut it.

@WilliamGunn Actually a really good signal for that is that it should be able to predict the future much better than other models.

@patrik ok, I think I understand how this could work in practice. "Aspiring to be accurate" is as ill-defined as "truth-seeking", but for this, I can imagine a system where you ask "which outcome is more likely", then observe which outcome happens. Is that what you mean?
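One way the "ask which outcome is more likely, then observe" idea could be operationalized (a sketch of my own, not anything the market's rules specify; the model names and numbers are made up) is a proper scoring rule such as the Brier score, which rewards well-calibrated probability forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.

    Lower is better; a perfect forecaster scores 0.0, and always
    answering 0.5 scores 0.25 regardless of what happens.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical models asked "which outcome is more likely" on the
# same four questions; probabilities assigned to the YES outcome.
model_a = [0.9, 0.2, 0.7, 0.6]
baseline = [0.5, 0.5, 0.5, 0.5]  # uninformative coin-flip baseline
observed = [1, 0, 1, 0]          # what actually happened

print(brier_score(model_a, observed))   # 0.125
print(brier_score(baseline, observed))  # 0.25
```

A model that systematically beats strong baselines on such scores over many resolved questions would be one measurable signal of the "predict the future much better" criterion mentioned above.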

@WilliamGunn I mean the question is kind of about whether xAI decides to make obvious attempts towards that which show promise. I guess it's about having a "map corresponding to territory", and having that should allow for better predictions. Not sure how much better you can define it?

@patrik "obvious attempts that show promise" is very ill-defined!

@WilliamGunn do you want me to give you a mathematical definition in Lean 4 or something?

@WilliamGunn that's why I said it resolves based on my opinion or a poll.

@patrik No, just wondering how you plan to resolve this. Tweets from Elon or actual results?

Not to get all philosophical, and I like the map/territory analogy, but as I'm sure you know, there are many ways a thing can be true that aren't a map:territory relationship. For example, "is there water in the fridge?" could be true if the relative humidity of the fridge is above zero, but that's not the kind of true the asker is looking for, right? So when it comes to truth-seeking, it's important to know how "true" is defined. Alternatively, you could reword the question to something like "Elon tweets that they have built truth-seeking AI", but it sounded like you had something more rigorous in mind.

@WilliamGunn I know very well what you mean, and I like the question. But I made it intentionally abstract and said it will resolve based on a poll in case of dispute.
