
Computation on closed-source chips is only as trustworthy as the chip itself. This lack of transparency may make it harder for anyone to trust anyone else's claims about what software they are (or are not) running. Are open-source chips instrumental for a coordinated AI pause?
This question is inspired by "What does it take to catch a Chinchilla?" https://arxiv.org/abs/2303.11341
It proposes that labs use chips capable of monitoring the computations run on them, so that auditors can verify that labs comply with their commitments.
See https://manifold.markets/Jono3h/will-ai-labs-need-to-switch-to-moni for a version of this question that asks instead about the monitoring chips described in that paper.
For now my resolution will be largely gut-feeling; expect me to flesh out the criteria if/when this question gets more traction. Expect an N/A unless open-source chips reach performance and affordability comparable to closed-source equivalents.
NO if:
Reputable sources credibly claim that open-source chips were not developed _because_ they would not help in this scenario. This could still resolve NO, though I'll err toward N/A.
A successful moratorium is launched and few complain about the lack of open-source chips.
YES if:
A successful moratorium relies on open-source chips.
The closed-source nature of particular chips is blamed for a failed moratorium.
The closed-source nature of particular chips is a commonly cited reason why a moratorium cannot be attempted.
This market resolves once one of the following holds: we get a pause and are able to evaluate its success to some degree; humanity's dependence on AI grows so large that a significant fraction of people would die were AI to take a treacherous turn; or there is expert consensus on this question.