The intent of this question is to get at whether the open-source community, and/or random torrent pirates or darkweb people or whatever, will be able to download and then run a model as generally capable as GPT-4 (assuming they have the right hardware). It doesn't have to be legal; if a hacker steals the model and sells it for $$$$ on the darkweb, that still counts, as long as lots of different hackers on the darkweb are able to get it. (If instead it's a one-off sale to someone else who doesn't resell it, that would not count.)
In case of conflict between the "spirit" and the "letter" of this question, I'll resolve in favor of the spirit.
Glad to see such engagement/volume on this question!
Some more thoughts on why I made it:
--IMO, GPT-4 is close to the level of capability required to be an autonomous agent, but not quite there yet according to ARC's autonomous replication eval. Still, maybe in another year there'll be techniques, datasets, etc. that push models noticeably further in that direction, analogous to how RLHF can 'stretch' the base model in the direction of being a helpful assistant or chatbot. Also, I couldn't ask about GPT-5 or whatever, because that would make the question harder to resolve.
--IMO, GPT-4 is already somewhat useful for bioterror and hacking and various other such things. So if an "uncensored" and unmonitored GPT-4-class model is widely available, we might start seeing interesting effects in those domains.