Would it be a good use of time to review 'Dissolving AI Risk - Parameter Uncertainty in AI Future Forecasting'?
Resolved as 9% on Oct 6

The Open Philanthropy Worldview Contest awarded six prizes. Now I need to decide: would it be a good use of time to review and respond to some or all of those winners? Thus, six markets. I will use the trading to help determine whether, and in what depth, to examine, review, and respond to the six posts.

If I read the post/article for a substantial amount of time, and in hindsight I judge that to have been a good use of time, whether or not I then respond at length, this resolves to YES.

If I read the post/article for a substantial amount of time, and in hindsight I judge that NOT to have been a good use of time, whether or not I then respond at length, this resolves to NO.

If I read the post long enough to give it a shot and then recoil in horror and wish I could unread what I had read, that also resolves this to NO.

If I choose NOT to read the post for a substantial amount of time, then this resolves to my judgment of the fair market price at time of resolution - by default the market price, but I reserve the right to choose a different price if I believe there has been manipulation, or to resolve N/A if the manipulation situation is impossible to sort out.

If I do trade on this market, that represents a commitment to attempt the review if I have not yet done so, and to resolve to either YES or NO.

Authors of the papers, and others, are encouraged to comment with reasons why I might or might not want to review the posts, or otherwise to make various forms of bids for me to do so (including in $$$, mana, or other forms).

These markets are an experimental template. Please do comment with suggestions for improvements to the template.

The post can be found here: https://www.openphilanthropy.org/wp-content/uploads/Dissolving-AI-Risk-v4-Alex-Bates.docx.pdf


It seems highly unlikely I will review this, and there is only one trader, so I am resolving this to the final percentage of 9%. It seems like a reasonable estimate!

@jacksonpolack predicted NO

I have no idea what a good use of your time is, or what you consider to be a good use of your time. Also, there's no liquidity here so people who engage are probably 'forum posting' as much as they are 'trading'. Also, I'm skimming and might be very wrong, and probably each post has a rebuttal of my concerns that I've skimmed past (but wouldn't agree with). Also unsure if my input is useful at all.

I don't think the 'estimate probabilities for the steps separately and then multiply them' approach really works here. The probabilities of events like 'AGI will be built', 'it will be misaligned', and 'it will be exposed to high-risk power' depend on a lot of the same cruxes, so you shouldn't just multiply them; doing so gives an artificially low risk. (But that applies to the work the paper builds on too.)

The paper takes a model with those probabilities as parameters and then puts distributions over the parameters. The main claim is that this shows 'in most worlds the risk is less than 3%', with a geometric mean of 1.5%, because in most sampled worlds some of the probabilities are low, so the product is low. But the high risk is concentrated in the worlds where every probability is high, so I don't think the geometric mean, or the 'safe in most worlds' framing, matters more than the mean probability of 0.15 does.

The paper uses this to argue we should learn more about the parameter values, and that doing so could show AI risk to be quite low. Which... yeah, if we knew more we'd know more; everyone wants to know more about what the probabilities are exactly.
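
To make that concrete, here's a minimal sketch of the geometric-mean vs. arithmetic-mean point. It is not the paper's actual model: the choice of three steps and the beta distributions below are made-up assumptions, chosen only to be wide, so treat the specific numbers as illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_worlds = 200_000

# Hypothetical, deliberately wide beliefs over three step probabilities
# (e.g. "AGI is built", "it is misaligned", "it gets high-risk power").
# These betas are assumptions for illustration, not the paper's parameters.
step_samples = rng.beta(0.5, 0.6, size=(3, n_worlds))

# Per-world risk: the product of that world's step probabilities.
risk = step_samples.prod(axis=0)

arithmetic_mean = risk.mean()                 # the expected risk across worlds
geometric_mean = np.exp(np.log(risk).mean())  # the "typical world" risk
frac_below_3pct = (risk < 0.03).mean()        # the "safe in most worlds" statistic

print(f"arithmetic mean of risk:     {arithmetic_mean:.3f}")
print(f"geometric mean of risk:      {geometric_mean:.3f}")
print(f"fraction of worlds below 3%: {frac_below_3pct:.2f}")
```

With wide enough parameter distributions, the geometric mean and the 'most worlds are below 3%' statistic can both come out low even while the arithmetic mean, which is what the expected risk actually is, comes out considerably higher, because the expectation is dominated by the minority of worlds where every step probability is high.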

I don't think this is interesting enough to a general audience to review.

@jacksonpolack Thanks for your thoughts! I agree that the market prices likely shouldn't be taken literally.

'Good use of my time' is inherently and intentionally subjective, but the intention is 'relative to choosing other AI-related stuff to cover.'
