Will Bing Chat be the breakthrough for AI safety research?

Bing Chat seems to be making even more headlines than ChatGPT did on its release, though not necessarily for the right reasons. Most reports focus on its lack of emotional control and poor alignment.

Will this generate so much public interest and raise so much alarm that AI safety and AI alignment will become mainstream and receive significantly more interest and resources than before?

I will attempt to resolve this based on media and/or industry reports. The resolution date is currently set to the end of 2025, but I will resolve earlier if there is clear evidence one way or the other. A clear uptick in AI safety interest starting shortly after Bing Chat's release would count, even if a causal relationship is hard to prove.
