The book "If Anyone Builds It, Everyone Dies" by Yudkowsky and Soares argues that the development of sufficiently advanced AI would lead to human extinction. The book will be released in September. This question resolves YES if an elected US Federal lawmaker (President or Vice President, or member of the House of Representatives or the Senate) publicly expresses at least modest personal confidence that AI could cause human extinction in a manner or context that references the book.
I'll loosely operationalize "modest personal confidence" as a statement that appears to imply the speaker's "p(doom)" is above 5%. To define this by example, here are statements that would or would not resolve this market to YES.
Not enough to resolve YES:
"There are experts saying that if anyone builds it, everyone dies."
[Not in a manner or context that references the book] "p(doom)? I'd put it at maybe 10%."
"There is a remote, hypothetical possibility that humanity could lose control of AI completely."
[When asked a question that references the book or its framing] "The superintelligence thing? Yeah, scary stuff, we're looking into it."
Enough to resolve YES:
"There are experts saying, and frankly I can see how they might be right, that if anyone builds artificial superintelligence, it would end humanity."
[When asked a question that references the book or its framing] "AI doom would be an ugly, ugly thing, and we're working hard to make sure it doesn't happen!"
[In any context referencing AI] "If anyone builds it, everyone dies."