[Carlini questions] Will people regularly trust AI systems to "know best"
By Jan 1st 2030: 73%
By Jan 1st 2029: 67%
By Jan 1st 2028: 60%
By Jan 1st 2027: 51%
By Jan 1st 2026: 28%

Full question: 'chance that people will regularly ask AI systems answers to questions, or plans to achieve some goal, and even if the answer seems unreasonable, believe it because they assume the AI system "knows best"'

Resolution Criteria:

I'll go mostly on "vibes" here to decide this. If people basically treat AI systems as mostly infallible oracles, I'll say "yes". But if people are generally suspicious of AI outputs, I'll say "no". As an example: if someone asked an AI system about some fact, but Wikipedia said something different, would people overwhelmingly believe the AI system is right? Or would they trust Wikipedia more?

Motivation and Context:

Today, when looking for directions from point A to point B, most people will ask Google Maps and follow its directions, even if the route seems longer than the one they're used to, assuming there must be unexpected traffic along the normal route. Or, if a chess Grandmaster asked Stockfish (the top chess engine) what move to play in a given position and it suggested something confusing, their response would be "what does it see that I don't?", and if asked what the best move is, they would answer "whatever Stockfish said," even if their intuition says otherwise. Will average people do this with AI systems for other general tasks too?

Question copied from: https://nicholas.carlini.com/writing/2024/forecasting-ai-future.html


Note @ProjectVictory @CraigDemel that the full version of the question in the description says "even if the answer seems unreasonable". I think that carries quite a bit of weight. I will resolve same as Karpathy

It's happening quite a lot already in my opinion, here are a couple of examples of people resolving markets based on their chats with LLMs:

https://manifold.markets/CDBiddulph/will-a-video-game-with-airendered-g-2e37a190fefe#5aswdhbtmco

https://manifold.markets/FranklinBaldo/will-human-narration-for-audiobooks#b48x0lk2n2j

But it depends on whom you sample to represent "people". Everyone has different opinions, obviously, and markets that resolve on vibes are terrible; see the example above.


Many of my otherwise seemingly smart coworkers already say things like "GPT says this" without doing any further fact-checking, so unless we manage to shed the cultural bias that "complete sentences = smart", all of these will resolve YES.
