
OpenAI's original definition of AGI is as follows:
"By AGI, we mean highly autonomous systems that outperform humans at most economically valuable work."
This is the definition used for evaluating this prediction, unless a change to it is proposed by OpenAI's board (or another governing body of OpenAI with the authority to modify the charter).
By "Hint at", it is meant that instead of a direct claim, OpenAI takes actions that were otherwise reserved for the special case of having achieved AGI. Since it is not possible to define something as intuitive as "hint at" apriori, I will judge that part subjectively, and am not going to trade in this market to avoid a conflict of interest.
"Hint at" could be understood as a weak claim to AGI by OpenAI's official actions or statements.
Here is a diagram illustrating the governance structure of OpenAI:

The following is a quote from OpenAI's original post, "OpenAI's structure":
Fifth, the board determines when we've attained AGI. Again, by AGI we mean a highly autonomous system that outperforms humans at most economically valuable work. Such a system is excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
One action that would "hint at" OpenAI having achieved AGI would be the exclusion of a specific state-of-the-art AI system from IP licenses and other commercial terms with Microsoft, while the partnership with Microsoft remains structurally more or less the same (although its composition might change).
The same market with a longer time frame is below: