Will John Carmack seriously engage with the alignment community by the end of 2023?
Resolved NO (Feb 1)

Discussing alignment concerns with people such as

  • Paul Christiano

  • Richard Ngo

  • Buck Shlegeris

for more than 10 hours in total would count. I would also count spending around 2x this amount of time reading and/or writing takes about whether or not alignment research is important, how bad/good speeding up capabilities is, or similar topics. I'm only counting targeted efforts here - reading LessWrong for comparable amounts of time without focusing on attaining better views about these questions doesn't count. I'm counting such efforts from the start of 2022 onward.

This will be based on my best judgement.

Resolves YES whenever I am >95% confident this has occurred, or resolves one month after the end of 2023 to my best guess at the correct answer. (This includes me attaining sufficient confidence that this engagement occurred prior to the market being created; see above about the 2022 start time.)


πŸ… Top traders

#NameTotal profit
1αΉ€1,030
2αΉ€388
3αΉ€172
4αΉ€119
5αΉ€65
predicted NO

resolves

Why do these people always think the world will end exactly when they hit the wall 🤔

"If I make fifteen thousand assumptions and have no knowledge of history, economics, geopolitics, or business, then here's my theory about why AI is bad" - every aligncel

everyone worries about unaligned power-seeking AIs but no one worries about unaligned power-seeking EAs 🤔

predicted NO

@Gigacasting yeah lol this is actually an extremely common concern. See e.g. all of the EA Forum. Or people griping about Anthropic leadership. Or people griping about Open Phil.

IMO, EAs/rats are generally high on worrying (particularly rats, but also EAs these days...).

@Gigacasting Nah, there are plenty of mighty worriers concerned with late-stage capitalism. I'd say more "pause billionaires" than "pause AI" people in absolute terms.

And, why not both?

bought Ṁ100 of NO

nope can't happen

predicted NO

@L john carmack would have to have vocalized before for that to happen, which has never occurred before. john carmack is not known to have ever communicated with another being. if he had, then maybe, but as it is, no chance

predicted NO

@L (In other words, I think it's quite likely that he will, and I'm making an intentionally bad bet so that people who think they can help make it happen can bet YES)

predicted NO

@L IMO, you should just add a subsidy in this sort of situation. But not very important, and I def appreciate it :)

sold Ṁ674 of NO

@RyanGreenblatt Fair enough. I moved my bet to a subsidy.

predicted YES

@Rodeo this is Carmack replying to an AI risk person. Looks like he did a few days ago also (https://twitter.com/ID_AA_Carmack/status/1623207042541404161?s=20&t=YZ1EIhuvWtJ2KBI-TceU2Q). I'm not saying this qualifies - not even close - but I think it raises the probability of extensive engagement by the end of the year

predicted YES

@Rodeo "Bet with tetra" feels very different to me from "chat with Buck"

predicted YES

@Alana I agree, but I think it raises the probability of the "spend 20 hours reading or writing about this stuff" criterion being met

bought αΉ€10 of YES

Seems underpriced conditional on alignment people probably trying a non-zero amount.

Also: s/would/word?

predicted NO

FYI, in case John Carmack sees this - I would recommend spending some time looking at stuff like:
- https://www.alignmentforum.org/posts/KbyRPCAsWv5GtfrbG/the-alignment-problem-from-a-deep-learning-perspective
- https://www.alignmentforum.org/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to
- https://www.lesswrong.com/posts/6ccG9i5cTncebmhsH/frequent-arguments-about-alignment

I think that MIRI-ish takes (similar to the ideas in Superintelligence, though more specific) are poorly argued for, and I'm not currently compelled by them.

You could also consider emailing some of the people I listed (or myself!); they would plausibly be interested in talking, or could at least refer you to someone who'd be interested in talking about AI risk and what people can do now.