Will a "Sharp Left Turn" occur (as described by the Alignment Forum, Nate Soares, and Victoria Krakovna et al.)?

The short description found for a "Sharp Left Turn" on the AI Alignment Forum says: "A Sharp Left Turn is a scenario where, as an AI trains, its capabilities generalize across many domains while the alignment properties that held at earlier stages fail to generalize to the new domains."

Nate Soares, Executive Director of the Machine Intelligence Research Institute, flatly summarizes it as follows in this post: "It looks to me like there will at some point be some sort of "sharp left turn", as systems start to work really well in domains really far beyond the environments of their training—domains that allow for significant reshaping of the world, in the way that humans reshape the world and chimps don't. And that's where (according to me) things start to get crazy. In particular, I think that once AI capabilities start to generalize in this particular way, it’s predictably the case that the alignment of the system will fail to generalize with it.[3]"

A well-received long description of this scenario, written by Victoria Krakovna et al., can be found here.

This market resolves "Yes" if, prior to the resolution date of 2050, a Sharp Left Turn occurs that is documented and accurately matches the descriptions above.

Resolving this may require some subjectivity. Prior to resolving, I will publicly post in the comments of this market how I plan to resolve it, and will remain open to comments for a one-week period. Nevertheless, I cannot be omniscient or fully objective; by predicting here, you accept some risk that our interpretations of the above descriptions of a Sharp Left Turn differ.

I will not bet in this market to reduce the risk of a conflict of interest.

Please feel free to pose scenarios in the comments and ask how I would resolve the market if they occurred.
