Would it be a good use of time to review 'Evolution provides no evidence for the sharp left turn'?
Resolved YES on Oct 6

The Open Philanthropy AI Worldviews Contest awarded six prizes. Now I need to decide: would it be a good use of time to review and respond to some or all of those winners? Thus, six markets. I will use the trading to help determine whether, and in what depth, to examine, review, and respond to the six posts.

If I read the post/article for a substantial amount of time, and in hindsight I judge it to have been a good use of time to have done so, whether or not I then respond at length, this resolves to YES.

If I read the post/article for a substantial amount of time, and in hindsight I judge it NOT to have been a good use of time to have done so, whether or not I then respond at length, this resolves to NO.

If I read the post long enough to give it a shot and then recoil in horror and wish I could unread what I had read, that also resolves this to NO.

If I choose NOT to read the post for a substantial amount of time, then this resolves to my judgment of the fair market price at time of resolution - by default the market price, but I reserve the right to choose a different price if I believe there has been manipulation, or to resolve N/A if the manipulation situation is impossible to sort out.

If I do trade on this market, that represents a commitment to attempt the review if I have not yet done so, and to resolve to either YES or NO.

Authors of the papers, and others as well, are encouraged to comment with their considerations of why I might or might not want to review the posts, or otherwise to make bids for me to do so (in $$$, in mana, or in other forms).

These markets are an experimental template. Please do comment with suggestions for improvements to the template.

The post can be found here: https://www.openphilanthropy.org/wp-content/uploads/Evolution-provides-no-evidence-for-the-sharp-left-turn-LessWrong-Quintin-Pope.pdf


bought Ṁ2,000 of YES

The review is posted here: https://www.lesswrong.com/posts/Wr7N9ji36EvvvrqJK/response-to-quintin-pope-s-evolution-provides-no-evidence. There's already been enough good discussion and karma that it seems safe to conclude this was time well spent. I am resolving this to YES.

Note to all: I have decided to indeed review this post based on this feedback, so place your bets.

I predict that you would agree in broad strokes with my own critiques of this post[0]. I'd buy such a market up to 80%, and also bet up to 80% or so that you would agree or mostly agree with at least some of the top comments on the LW version[1][2].

Conditional on you NOT agreeing with me and / or those commenters, I think you'd very likely find reviewing the piece worthwhile, and I personally would definitely find it worthwhile to know that (e.g. it would probably cause me to prioritize writing a post elaborating / clarifying my own views).

Conditional on you agreeing with my core claim that the post is not really engaging with the actual SLT argument, I think the worthwhileness judgment is more complicated. I personally would find it valuable to know that and see it signal-boosted, especially if it were accompanied by a write-up similar to the one you did for the interest rates post.

It's harder to say how valuable your blog readers, Open Phil judges, EAF readers, and the original author would find your engagement in this case (because you would more likely be saying things others have already said in different words), and whether you personally would find it worthwhile. But I think if you found your engagement with the interest rates post worthwhile, you would find engaging here similarly worthwhile. I'm buying YES in this market, mostly on the assumption that you did find your interest rates engagement so far to be worthwhile.

[0]: https://forum.effectivealtruism.org/posts/eSZuJcLGd7BacjWGi/announcing-the-winners-of-the-2023-open-philanthropy-ai?commentId=WZJJYcAnjyLTiCAad and https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn?commentId=o6SexPykTL64fzpre

[1]: https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn?commentId=AksuMpkdbdqSvDNoA

[2]: https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn?commentId=7KHfovffmQLkt9Dzt

Announcing the Winners of the 2023 Open Philanthropy AI Worldviews Contest — EA Forum

Comment by Max H: On the first point, my objection is that the human regime is special (because human-level systems are capable of self-reflection, deception, etc.) regardless of which methods ultimately produce systems in that regime, or how "spiky" they are. A small, relatively gradual jump in the human-level regime is plausibly more than enough to enable an AI to outsmart / hide / deceive humans, via e.g. a few key insights gleaned from reading a corpus of neuroscience, psychology, and computer security papers, over the course of a few hours of wall clock time.

The second point is exactly what I'm saying is unsupported, unless you already accept the SLT argument as untrue. You say in the post you don't expect catastrophic interference between current alignment methods, but you don't consider that a human-level AI will be capable of reflecting on those methods (and their actual implementation, which might be buggy). Similarly, elsewhere in the piece you say: [quoted passage omitted] And: [quoted passage omitted]

But again, the actual SLT argument is not about "extreme sharpness" in capability gains. It's an argument which applies to the human-level regime and above, so we can't already be past it no matter what frame you use. The version of the SLT argument you argue against is a strawman, which is what my original LW comment was pointing out. I think readers can see this for themselves if they just re-read the SLT post carefully, particularly footnotes 3-5, and then re-read the parts of your post where you talk about it. [edit: I also responded further on LW here.]
predicted NO

Read this at the time; a bunch of interesting theoretical technical arguments that I still ended up disagreeing with. Plausibly worth reviewing if you think digging deep into that is worth doing; not sure.