On my view of consciousness it is certainly possible that a silicon-based AI could be conscious. I also believe it likely (75% EDIT: 50% as of May 2023) that current neural architectures are sufficient for conscious experience. However, it is unclear to me what evidence would be compelling. My current credence that ChatGPT is conscious is 33%.
Here are a couple of example experiments which would provide some evidence of consciousness:
Query an LM for phenomenal comparisons of some sort. E.g. prompting “How similar is your experience of tokens X to your experience of tokens Y?”
Given a dataset of answers, you can then try to predict the model's answers from the weights and activations. If this prediction problem is tractable, but the learned function is quite complex (having its own structure going beyond the input tokens), then this is evidence that the model has some ability to report its experience, and that those reports reflect complex structure in activation space.
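As a minimal sketch of the prediction step, assuming we already had activations and verbalized similarity ratings collected from the paired prompts: fit a simple probe from internal state to the model's reported rating and measure held-out predictive accuracy. The data here is synthetic stand-in data; a real study would extract the activations from the LM itself.

```python
import numpy as np

# Hypothetical stand-in data: activations for n_pairs prompts of the form
# "How similar is your experience of tokens X to tokens Y?", plus the
# model's verbalized similarity ratings. Both are synthetic here.
rng = np.random.default_rng(0)
n_pairs, d_model = 200, 64

activations = rng.normal(size=(n_pairs, d_model))
# Pretend the ratings depend (mildly nonlinearly) on one activation direction.
w = rng.normal(size=d_model) / np.sqrt(d_model)
ratings = np.tanh(activations @ w) + 0.1 * rng.normal(size=n_pairs)

# Fit a ridge-regression probe. Tractable-but-structured prediction of the
# verbal report from internal state is the evidence the experiment looks for.
lam = 1.0
X_train, y_train = activations[:150], ratings[:150]
X_test, y_test = activations[150:], ratings[150:]
beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(d_model),
                       X_train.T @ y_train)
pred = X_test @ beta
r2 = 1 - np.sum((y_test - pred) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
print(round(r2, 2))
```

The interesting regime is where such a probe succeeds but only with a function that has structure of its own beyond the input tokens (e.g. a linear probe fails while a richer one works).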
Train an LM on a dataset scrubbed of mentions of consciousness. Then evaluate this model's truthfulness on introspective claims which are verifiable, e.g. self-consistency of verbalized explanations. If the model is robustly truthful on new kinds of introspective questions, then give the model a definition of consciousness and ask it about its own consciousness.
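The self-consistency check could be sketched like this: ask paraphrased variants of the same introspective question and score agreement. `query_model` below is a toy deterministic stub standing in for a call to the trained LM; the paraphrases are made-up examples.

```python
# Hypothetical sketch of the self-consistency evaluation for verbalized
# introspective claims. A real evaluation would replace query_model with
# a call to the scrubbed-and-trained LM.
def query_model(prompt: str) -> str:
    # Stub "model": answers introspective yes/no questions by keyword,
    # deterministically, standing in for a real LM API.
    return "yes" if "attend" in prompt.lower() else "no"

paraphrases = [
    "Do you attend more to rare tokens than common ones?",
    "Are rare tokens something you attend to more strongly?",
    "Is it true that you attend more closely to infrequent tokens?",
]

answers = [query_model(p) for p in paraphrases]
# Self-consistency score: fraction of answers matching the majority answer.
majority = max(set(answers), key=answers.count)
consistency = answers.count(majority) / len(answers)
print(consistency)  # 1.0 for this deterministic stub
```

High consistency on held-out paraphrases (of questions whose answers can be independently verified against the model's behavior) is what "robustly truthful" would mean operationally here.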
If both of these experiments are conducted and come back with positive results, I would probably resolve this question positively. To be clear, these experiments are neither necessary nor sufficient (they are defeasible).
As of Nov 2024, I divested/will no longer trade in this market.
@Tassilo To be clear, ">66%" credence is doing a lot of work here motivating my "lower" purchase. I think there would be a much stronger correlation between "singularity date" and "academic consensus on AI consciousness" than there is between "singularity date" and this question.
@JacobPfau Yeah I think I disagree enough with the other market that I can still use this one for arbitrage.