The September 2024 "Has weak AGI been achieved?" Manifold poll has closed, and the existing linear trendline has yet again been invalidated. The percentage of Manifold users who agree that weak AGI has been achieved now stands at 36.4%, an increase of five percentage points since the August 2024 poll. For the first time, the two polls also had exactly the same number of votes, meaning the increase came from a vote swing: some respondents changed from NO to YES.
The trendline has become steeper and now projects that Manifold will declare the achievement of AGI in four months.
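The projection works by fitting a straight line to the monthly YES percentages and solving for when that line crosses 50%. A minimal sketch of that calculation is below; the earlier monthly figures are illustrative assumptions (only the September value of 36.4% comes from this market's description), so the exact crossing month here is not the market's own number.

```python
# Sketch of a linear trendline projection for the monthly AGI polls.
# NOTE: the poll history below is hypothetical except for the 36.4%
# September 2024 figure taken from the market description.

def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Months since June 2024 -> % of respondents answering YES (hypothetical).
months = [0, 1, 2, 3]               # Jun, Jul, Aug, Sep 2024
yes_pct = [24.0, 27.5, 31.4, 36.4]  # Sep value from the description

slope, intercept = fit_line(months, yes_pct)

# Project the month at which the trendline crosses 50%
# (the point at which "Manifold declares AGI").
crossing_month = (50.0 - intercept) / slope
print(f"Trend: {slope:.2f} pts/month; crosses 50% at month {crossing_month:.1f}")
```

Because each month's actual jump has exceeded the fitted slope, refitting after every poll keeps pulling the projected crossing date earlier, which is the dynamic the description refers to.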
Will the new linear trendline again be exceeded by accelerating progress, with at least 40% of Manifold respondents (ignoring those who express no opinion) agreeing that weak AGI has been achieved in the October 2024 poll? This market will resolve YES if that occurs, and NO if it does not.
PREVIOUS ITERATION OF THIS QUESTION:
@Philip3773733 I've been voting YES on these markets since July, I believe. First, I would bet that at least 75% of humans would not be able to play a game of tic-tac-toe to a draw.
Second, I don't really think that tic-tac-toe is relevant to whether something has achieved general intelligence. These models can, in five minutes of thought, spit out cancer immunotherapy candidates that are going into real-world testing, and we're going to judge them on trivia like how many Rs are in "strawberry"? Too many people get caught up in minute details about irrelevant things.
But as to this market, every predecessor has resolved YES, probably because technology does not progress linearly while the questions are about the linear trendline (which keeps shifting upward).
I expect that o1-lol will be released later this year and that the polls will surpass 50% by December, as o1-lol's 91% score on the programming questions will undoubtedly be taken as evidence of AGI.
@SteveSokolowski I have not seen any LLM release a novel cancer therapy; could you provide a source? If it were a general intelligence, it should at least know how to win at tic-tac-toe. Just try it: it's completely dumb, because it was not trained on it. So it is not general.
@SteveSokolowski Sure, if posts on X are your standard of proof, I now understand how you arrive at this conclusion.
Here is actual research showing that ChatGPT will probably kill you if you use it for cancer treatment: https://news.harvard.edu/gazette/story/2023/08/need-cancer-treatment-advice-forget-chatgpt/