Full question: 'chance that people will regularly ask AI systems answers to questions, or plans to achieve some goal, and even if the answer seems unreasonable, believe it because they assume the AI system "knows best"'
Resolution Criteria:
I'll decide this mostly on "vibes". If people basically treat AI systems as near-infallible oracles, I'll resolve "yes". But if people are generally suspicious of AI outputs, I'll resolve "no". As an example: if someone asked an AI system about some fact, but Wikipedia said something different, would people overwhelmingly believe the AI system is right? Or would they trust Wikipedia more?
Motivation and Context:
Today, most people looking for directions from point A to point B will ask Google Maps and follow its directions even when the route seems longer than the one they're used to, assuming there must be unexpected traffic along the normal route. Similarly, if a chess Grandmaster asked Stockfish (the top chess engine) what move to play in a given position and got a confusing answer, their response would be "what does it see that I don't?" — and if asked what the best move is, they would answer "whatever Stockfish said," even if their intuition says otherwise. Will average people treat AI systems this way for other general tasks too?
Question copied from: https://nicholas.carlini.com/writing/2024/forecasting-ai-future.html
Note @ProjectVictory @CraigDemel that the full version of the question in the description says "even if the answer seems unreasonable". I think that carries quite a bit of weight. I will resolve the same as Karpathy.
@theshortbread Thanks and all, but I'll stand pat: "Despite clear judicial warnings and sanctions, legal professionals continue to submit AI-generated court documents with fabricated content."
In my opinion it's happening quite a lot already; here are a couple of examples of people resolving markets based on their chats with LLMs:
https://manifold.markets/CDBiddulph/will-a-video-game-with-airendered-g-2e37a190fefe#5aswdhbtmco
https://manifold.markets/FranklinBaldo/will-human-narration-for-audiobooks#b48x0lk2n2j
But it depends on who you sample to represent "people". Obviously, different people hold different opinions, and markets that resolve on vibes are terrible — see the examples above.