
Discussing alignment concerns with people such as Paul Christiano, Richard Ngo, or Buck Shlegeris for more than 10 hours in total would count. I would also count spending around 2x this amount of time reading and/or writing takes about whether or not alignment research is important, how bad/good speeding up capabilities is, or similar topics. I'm only counting targeted efforts here - reading LessWrong for comparable amounts of time without focusing on attaining better views about these questions doesn't count. I'm counting such efforts from the start of 2022 onward.
This will be based on my best judgement.
Resolves to positive whenever I am >95% confident this has occurred, or resolves one month after the end of 2023 to my best guess at the correct answer. (This includes me attaining sufficient confidence that this engagement occurred prior to the market being created; see above about the 2022 start time.)
@Gigacasting I have been writing about this: https://www.lesswrong.com/posts/PaWpTPkbnkGRtDrDs/who-aligns-the-alignment-researchers
@Gigacasting Yeah lol, this is actually an extremely common concern. See e.g. all of the EA Forum. Or people griping about Anthropic leadership. Or people griping about Open Phil.
IMO, EAs/rats are generally high on worrying (particularly rats, but also EAs these days...).
@Gigacasting Nah, there are plenty of mighty worriers concerned with late-stage capitalism. I'd say more "pause billionaires" than "pause AI" people in absolute terms.
And, why not both?
@L John Carmack would have to have vocalized before for that to happen, which has never occurred before. John Carmack is not known to have ever communicated with another being. If he had, then maybe, but as it is, no chance.
@L (In other words, I think it's quite likely that he will, and I'm making an intentionally bad bet so the people who think they can help make it happen can bet yes.)
@L IMO, you should just add a subsidy in this sort of situation. But it's not very important and I definitely appreciate it :)
@Rodeo This is Carmack replying to an AI risk person. Looks like he did so a few days ago also (https://twitter.com/ID_AA_Carmack/status/1623207042541404161?s=20&t=YZ1EIhuvWtJ2KBI-TceU2Q). I'm not saying this qualifies - not even close - but I think it raises the probability of extensive engagement by the end of the year.
@Alana I agree, but I think it raises the probability of him "spending 20 hours reading or writing" about this stuff.
FYI, in case John Carmack sees this - I would recommend spending some time looking at stuff like:
- https://www.alignmentforum.org/posts/KbyRPCAsWv5GtfrbG/the-alignment-problem-from-a-deep-learning-perspective
- https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to
- https://www.lesswrong.com/posts/6ccG9i5cTncebmhsH/frequent-arguments-about-alignment
I think that MIRI-ish takes (similar to the ideas in Superintelligence, though more specific) are poorly argued for, and I'm not currently compelled.
You could also consider emailing some of the people I listed (or myself!); they would plausibly be interested in talking, or could at least refer you to someone who'd be interested in talking about AI risk and what people can do now.
What I found with a low-effort Google search: https://mobile.twitter.com/ID_AA_Carmack/status/1368255825412825089