The Senate Judiciary Subcommittee on Privacy, Technology and the Law will hold a hearing on "Oversight of A.I.: Rules for Artificial Intelligence" on May 16, 2023, at 10 a.m. ET. Expected witnesses include Sam Altman, CEO of OpenAI, and Gary Marcus, Professor Emeritus at NYU. Both have discussed AI existential safety in the past. Will at least two of the following phrases be said by members of Congress or witnesses during this hearing?
Alignment problem
Digital mind
Eliezer Yudkowsky
Existential risk
Extinction
God-like AI
Instrumental convergence
Intelligence explosion
Light cone
Off switch
Power-seeking
Sentience
Superintelligence
I will allow alternative spellings in the transcript (e.g., "super-intelligence" would count) but will give very little leeway otherwise: synonyms wouldn't count, so neither "alignment issues" nor "extinct" would qualify.
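The matching rule above (spelling variants count, synonyms don't) could be sketched as a simple transcript check. This is my own illustration of one way to screen a transcript, not the method actually used to resolve the market; hits would still need to be verified by hand against the audio:

```python
import re

# The listed terms, lowercased. Hyphen/space variants are handled
# by "squashing" both the terms and the transcript before comparing.
TERMS = [
    "alignment problem", "digital mind", "eliezer yudkowsky",
    "existential risk", "extinction", "god-like ai",
    "instrumental convergence", "intelligence explosion", "light cone",
    "off switch", "power-seeking", "sentience", "superintelligence",
]

def squash(text: str) -> str:
    """Lowercase and drop hyphens/whitespace so spelling variants
    like "super-intelligence" and "superintelligence" compare equal."""
    return re.sub(r"[-\s]+", "", text.lower())

def terms_found(transcript: str) -> list[str]:
    """Return the listed terms whose squashed form appears in the
    squashed transcript. Squashing can occasionally match across
    word boundaries, so results should be checked manually."""
    haystack = squash(transcript)
    return [t for t in TERMS if squash(t) in haystack]

def resolves_yes(transcript: str) -> bool:
    # YES requires at least two distinct listed terms.
    return len(terms_found(transcript)) >= 2
```

Note that because the comparison is on exact (squashed) strings, near misses like "extinct" or "alignment issues" correctly fail to match, in line with the resolution criteria.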
There were discussions of "long-term risk," "self-aware" AI, and FLI's six-month pause letter, and Sam Altman's earlier statement that "development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity" was quoted (though it was discussed in terms of job loss). However, I didn't hear any mentions of the listed terms.
The official transcript may take a while, and I wasn't listening the whole time. But I will likely resolve this as NO soon unless someone provides timestamps for at least two terms from the list being mentioned.
There is now a transcript, and I confirmed that none of these terms are in it, though there are some close calls, e.g., "self-aware" instead of "sentient", "superhuman machine intelligence" instead of "superintelligence", and "Cambrian explosion" but not "intelligence explosion".