The Open Philanthropy Worldview Contest awarded six prizes. Now I need to decide: would it be a good use of time to review and respond to some or all of those winners? Thus, six markets. I will use the trading to help determine whether, and how in depth, to examine, review, and respond to the six posts.
If I read the post/article for a substantial amount of time, and in hindsight I judge it to have been a good use of time to have done so, whether or not I then respond at length, this resolves to YES.
If I read the post/article for a substantial amount of time, and in hindsight I judge it NOT to have been a good use of time to have done so, whether or not I then respond at length, this resolves to NO.
If I read the post long enough to give it a shot and then recoil in horror and wish I could unread what I had read, that also resolves this to NO.
If I choose NOT to read the post for a substantial amount of time, then this resolves to my judgment of the fair market price at the time of resolution - by default the market price, but I reserve the right to choose a different price if I believe there has been manipulation, or to resolve N/A if the manipulation situation is impossible to sort out.
If I do trade on this market, that represents a commitment to attempt the review if I have not yet done so, and to resolve to either YES or NO.
Authors of the papers, and others, are encouraged to comment with reasons why I might or might not want to review the posts, or otherwise to make various forms of bids for me to do so (including in $$$ or mana, or in other forms).
These markets are an experimental template. Please do comment with suggestions for improvements to the template.
The post can be found here: https://www.openphilanthropy.org/wp-content/uploads/2023.05.22-AI-Reference-Classes-Zachary-Freitas-Groff.pdf
Long. Probably not worth reviewing, because there are just a lot of reference classes with a lot of varying probabilities, and they're things like 'megafauna extinction rate' and 'government spending as a percent of GDP'. AI is very different from those.
The main overall conclusion of this reference class work is that a wide range of probabilities are consistent with historical reference classes; the playing field for reference class tennis is large. Estimates of existential risk in the single digits or low double digits look perfectly consistent with a reference-class approach.
Less related to how important it is to review: I don't really get why it's important to have this many numerical estimates for reference classes, generally. It feels like something's gone at least a bit wrong culturally, somewhere? Similar to how academic writing puts more weight on serious-academic-work-ness than on 'how much does this actually matter', this seems to put more weight on ... just having a lot of numbers?
Plausibly, the central government is a better analogue to a superintelligent being than government as a whole.
(ofc, serious people wrote this and evaluated it as important, and it's rather more likely that I, who read it for all of seven minutes and laughed nervously, have an impression that is wrong, but it's important that casual negative ideas are surfaced as much as casual positive ones so that the distributed process of reasoning isn't biased, bla bla bla)
Maybe there are a lot of serious people and they're running out of theoretical things to think about, because AI is such a serious problem but there just aren't that many remaining productive angles of attack, and they still want to do something?
@jacksonpolack Yeah. What I want to know is, what about this work did they or others find interesting or convincing? Why did it seem useful? What could usefully be done?
From your description, this work does not sound super useful, even if done right.
Claims that governments are akin to ASIs (or simply SIs) are generally a sign of not understanding what an SI looks like.
The most interesting-seeming bits in the conclusion were, summarized:
Putting together all the reference classes, AGI-caused extinction comes out between 1/10000 and 1/6 (mean of ~0.03%-4%), and AGI-caused 'control over current human resources' between 5% and 60% (mean of ~16-26%). What surprised the author the most was that events like takeover/control of resources were much more common than events like extinction.
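(For quick reference, here's a minimal sketch converting those quoted endpoints into percentages; the labels and dictionary are my own shorthand for the numbers above, and the paper's mean ranges come from its own weighting, which isn't re-derived here.)

```python
# Quick arithmetic on the endpoints quoted above (labels are my own shorthand;
# the paper's stated mean ranges use its own weighting and aren't reproduced here).
ranges = {
    "AGI-caused extinction": (1 / 10_000, 1 / 6),
    "AGI control of current human resources": (0.05, 0.60),
}

for label, (low, high) in ranges.items():
    print(f"{label}: {low:.2%} to {high:.2%}")

# Output:
# AGI-caused extinction: 0.01% to 16.67%
# AGI control of current human resources: 5.00% to 60.00%
```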
To not overstate it: I'm pretty sure they're very aware that ASIs are very different from governments, but governments are nevertheless among the "closest" things for which we can get hard numbers for reference classes. It's just not close enough to be worth it. I picked that quote to emphasize how far afield the number-gathering process took us.