Do you think Bing's "Sydney" chatbot was sentient when it first went live?
resolved Jan 14
Take your best guess if you're unsure.
Intelligent, maybe; one could even argue that, on a broad set of tasks, it's better than a lot of humans.
But a transformer architecture, predicting the next token given some input, being sentient... that just doesn't make any sense.
Being close to solving "the imitation game" doesn't make something sentient.
You may even be an illusionist and think that there are no sentient beings at all... but in that case, Sydney is not sentient either, and I see no world where an illusionist would classify monkeys as having, by abstraction, the same psychological machinery as GPT ?? 💀
Related questions
Will it be easy to conjure Sydney chatbot on Bing or other platforms by January 2025?
49% chance
Is Bing Chat conscious?
18% chance
Will there be another blatant demonstration of AI risks, comparable to Bing Chat, by 2024?
33% chance
Will the top chatbot in 2025 "think" before responding to a difficult prompt?
55% chance
Why is Bing Chat AI (Prometheus) less aligned than ChatGPT?
If the top chatbot in 2025 "thinks" before responding to a difficult prompt, will its thoughts be human-interpretable?
35% chance
By 2026, will a prominent chatbot with some access to the internet do something actually harmful and unintended?
68% chance
Will it be revealed by 2030 that Bing Sydney's release was partially a way to promote AI safety?
5% chance
Will Bing's chat model get shut down before 2024?
13% chance
Will Bing Chat be the breakthrough for AI safety research?
9% chance