OpenAI's original definition for AGI is as follows:
"By AGI, we mean highly autonomous systems that outperform humans at most economically valuable work."
This is the definition used to evaluate this prediction, unless OpenAI's board (or another governing body of OpenAI with the authority to modify the charter) proposes a change to it.
By "hint at", I mean that instead of making a direct claim, OpenAI takes actions that were otherwise reserved for the special case of having achieved AGI. Since it is not possible to define something as intuitive as "hint at" a priori, I will judge that part subjectively, and I will not trade in this market to avoid a conflict of interest.
"Hint at" could be understood as a weak claim to AGI by OpenAI's official actions or statements.
Here is a diagram illustrating the governance structure of OpenAI:
The following is a quote from the original post by OpenAI, OpenAI's structure:
Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
An action that would "hint at" OpenAI achieving AGI would be the exclusion of a specific state-of-the-art AI system from IP licenses and other commercial terms with Microsoft, while the partnership with Microsoft otherwise remains structurally more or less the same (although its composition might change).
The same market for a longer time frame is below:
@SirSalty There's a pretty good argument that OpenAI's behaviour already qualifies as "hinting" it has AGI.
@SirSalty I don't think they will do this before completing their migration to a new corporate structure
@SirSalty but that should obviously not count, right? the article itself suggests that "hint at AGI" may be used not because AGI has been achieved but because some execs want to get out of a contract. In the latter case, this market should, imo, clearly not resolve true.
sigh. surprise surprise. the ambiguous clause i didn't like allows this market to resolve true even if the consensus is that OpenAI did not achieve AGI and perhaps only tried to get out of a contract.
does anyone know how to reach arbiters for markets of deleted accounts? for firstuserhere i was pretty confident they were interested in actual AGI, and wouldn't resolve true / update the "hint at" clause. for arbiters i have no clue how literally they'll take the resolution criteria.
OP is not well defined. Selling my whole position because of the bold text. Excluding some piece of IP is most definitely not proof of a weak AGI claim. That would only be if the IP is excluded and OpenAI claims that IP could replace most human economically valuable work.
Can OP clarify whether they care about the hint being specifically for AGI by OpenAI's definition and not by OP's definition? I'd happily create a market to test this claim. Something like "will at least ten companies replace 50% of their workforce (across roles) within a year after OpenAI releases the IP on which OP's market resolves true". We could then construct "will OP's market sit at least 20 points higher than the hinted-AGI-is-real-AGI market". If OP's market sits significantly higher, we can conclude that OP's market is mostly about what OP subjectively considers a hint, instead of about when OpenAI drops a hint for a concrete system which turns out to be actual AGI.
Perhaps I'm simply misunderstanding what this market attempts to be about.
@alextes Yes, I can clarify that the claim for AGI is by OpenAI's definition and not mine.
Excluding some piece of IP is most definitely not proof of a weak AGI claim.
You're misreading. The exclusion of IP is not a generic exclusion, but one made specifically because the system is "AGI" per OpenAI's definition. The bold text is based on OpenAI's prior clarifications that they would exclude IP only in the case of AGI and in no other case. If that ceases to be the case, then we shall change our criteria as well.
Perhaps I'm simply misunderstanding what this market attempts to be about.
The market is asking whether OpenAI will claim to have AGI by end_date. Their claim can be either public or private. In the case of a public claim, there's no problem. In the case of a private claim, we must infer. How can we infer? That's where "hint at" comes into the picture: they trigger a clause that would only be triggered if AGI had been achieved, and not in any other case.
I do have some markets about the number of human employees going down in Forbes 500 companies that you'll be interested in:
@firstuserhere perfect clarification. thank you. OP looks much better now. back in at 1k NO, adding 2k more in limit orders.
hah, cool to see you already have the markets mentioned, although I think as constructed they're too sensitive to a confounding economic-trend variable. a way this could be fixed is by stipulating that the companies have to be from a country in an economic uptrend by some general indicator.
@DavidBolin I thought they were hinting at it for stock value/investment pitches but had to start denying it for legal, strategic reasons; it allegedly triggered a verbal clause that Elon Musk stipulated when OpenAI was founded. I believe the Hard Fork episode covering Musk's case against OpenAI went through this. Here's an article about how this surfaced: https://www.theguardian.com/technology/2024/mar/01/elon-musk-sues-open-ai-profit-power-microsoft-sam-altman
The way I interpret it, any hinting OpenAI did about AGI was for investor pitch decks—standard corporate embellishment and hyperbole.
@MaDPuPPeT I love how most "AGI" markets start by completely making up a totally random definition of AGI lol, like this "weak AGI" thing people keep mentioning, which doesn't exist at all. "weak AI" does exist, but an AGI is by definition a "strong AI" 🤷♂️