Even in the unlikely event that AlphaGo Zero has qualia, we would have a heck of a time proving it. It's not even trivial to prove that humans other than yourself have qualia, though intuitively they obviously do. One strong argument that other humans have qualia is that humans argue about qualia, and did so even before you were born, so they must have gotten the concept from somewhere.
AIs trained on human data, such as GPT-3, argue about qualia, but it seems likely that they are just copying human patterns. This market resolves YES if we notice AIs arguing about qualia after being trained via interactions with other similar agents, in an environment that contains no evidence that humans exist or have qualia.
OpenAI's emergent tool use environment (https://openai.com/research/emergent-tool-use) is an example of the sort of training environment that would resolve the market YES, if the agents had independently invented and debated p-zombies instead of just hiding ramps and surfing on boxes.
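To make the resolution criterion concrete, here is a minimal sketch of agents inventing a shared code purely from interaction with each other, with no human data anywhere in the loop. It's a toy Lewis signaling game with tabular Q-learners, not the linked OpenAI setup; every name and parameter below is illustrative.

```python
import random
from collections import defaultdict

# Toy Lewis signaling game (illustrative sketch, not OpenAI's environment):
# a speaker and a listener learn to communicate from shared reward alone.
# No human language enters the training signal at any point.

N_STATES = 4      # world states only the speaker observes
N_SIGNALS = 4     # arbitrary tokens with no pre-assigned meaning
EPS, LR = 0.1, 0.1

speaker_q = defaultdict(float)   # (state, signal) -> value
listener_q = defaultdict(float)  # (signal, guess) -> value

def choose(q, context, n_actions):
    """Epsilon-greedy choice over actions given a context key."""
    if random.random() < EPS:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: q[(context, a)])

for _ in range(50_000):
    state = random.randrange(N_STATES)            # speaker's private observation
    signal = choose(speaker_q, state, N_SIGNALS)  # an initially meaningless token
    guess = choose(listener_q, signal, N_STATES)  # listener acts on the token alone
    reward = 1.0 if guess == state else 0.0       # shared reward, no human labels
    speaker_q[(state, signal)] += LR * (reward - speaker_q[(state, signal)])
    listener_q[(signal, guess)] += LR * (reward - listener_q[(signal, guess)])

# After training, the signal -> state mapping is a tiny invented "language".
for s in range(N_STATES):
    sig = max(range(N_SIGNALS), key=lambda a: speaker_q[(s, a)])
    print(f"state {s} -> signal {sig}")
```

The market's question is whether some scaled-up version of this kind of loop ever produces agents that spontaneously argue about qualia, rather than just labels for world states.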
Looking at AlphaGo, AlphaGo Zero, human existence, Nicaraguan Sign Language, and ChatGPT, we have experiments on:

| Trained on others' data | Self-taught from scratch |
| --- | --- |
| Humans learning language from others | Humans inventing language from scratch (Nicaraguan Sign Language) |
| AI learning Go from others' games (AlphaGo) | AI teaching itself Go from scratch (AlphaGo Zero) |
| AI learning language from others (ChatGPT) | (missing experiment) |
I figure that on the way to AGI we'll work out how to run the missing experiment.
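For contrast, here is a toy, runnable illustration of the difference between the table's two columns (all names are illustrative; this is not DeepMind's code): the left column fits a model to a corpus someone else produced, while the right column generates its own corpus by self-play. The missing experiment would be running this right-column loop on language rather than Go, roughly along the lines of the signaling-game sketch above.

```python
import random
from collections import defaultdict

# Toy self-play on tic-tac-toe: the learner's entire corpus is generated by
# playing against itself, AlphaGo Zero-style. The imitation alternative
# (left column) would instead fit to (position, move) pairs made by others.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

values = defaultdict(float)  # board string -> estimated value from X's side

def pick_move(board, player, eps=0.2):
    """Epsilon-greedy over learned state values; no external data used."""
    if random.random() < eps:
        return random.choice(moves(board))
    sign = 1 if player == "X" else -1  # O prefers positions that are bad for X
    return max(moves(board),
               key=lambda m: sign * values[board[:m] + player + board[m+1:]])

def self_play_game():
    board, player, visited = "." * 9, "X", []
    while winner(board) is None and moves(board):
        m = pick_move(board, player)
        board = board[:m] + player + board[m+1:]
        visited.append(board)
        player = "O" if player == "X" else "X"
    w = winner(board)
    outcome = 1.0 if w == "X" else -1.0 if w == "O" else 0.0
    for state in visited:  # nudge every visited state toward the final result
        values[state] += 0.1 * (outcome - values[state])

for _ in range(20_000):  # the agent writes its own training data
    self_play_game()
print(f"learned values for {len(values)} positions with zero human games")
```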
@NikhilVyas It is entirely plausible that we would train a 2030 equivalent of GPT, but with significantly better reasoning capabilities, just to see what blind spots it has and how much it can infer about humans or other possible agents based on its training set alone.
That said, I find the overall question unlikely to resolve YES, and/or likely to be resolved on a 'stretched' interpretation.