Is the Reuters story about OpenAI's Q* substantively true?
Resolved NO (Apr 2)

Alex Heath reports it isn't: https://twitter.com/alexeheath/status/1727472179283919032

This resolves to YES if, in my best judgment, after 30 days, the story is substantively true. The rules of bounded distrust apply, so misleading does not mean false. Falsehood must change the substantive implications of the information, again as per my best judgment.

EDIT: I want to clarify and make explicit what I will do if we do not get clarity on this. If after 30 days, taking into account the market price and trading history as a key component, I am >90% confident in the right answer, I will resolve at the deadline, even if not 100% sure. If I am not, I will extend the deadline until such time as I am confident.

EDIT 2: I have extended to April 1, 2024, as the date at which I will resolve this even if no substantial new evidence comes to light and trading remains uncertain. The only exception would be if we gain a new expectation of additional evidence arriving afterwards. I could also resolve this before then if I ever become >90% confident.

"Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers sent the board of directors a letter warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters."

The previously unreported letter and AI algorithm was a catalyst that caused the board to oust Altman, the poster child of generative AI, the two sources said. Before his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman’s firing. Reuters was unable to review a copy of the letter. The researchers who wrote the letter did not immediately respond to requests for comment.

OpenAI declined to comment.

"According to one of the sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star), precipitated the board's actions."

The maker of ChatGPT had made progress on Q*, which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as AI systems that are smarter than humans.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.


🏅 Top traders

| # | Name | Total profit |
|---|------|--------------|
| 1 |      | Ṁ10,129      |
| 2 |      | Ṁ2,032       |
| 3 |      | Ṁ1,025       |
| 4 |      | Ṁ881         |
| 5 |      | Ṁ828         |

The internal review has concluded. No mention in the "summary of the review and findings" of letters about Q* precipitating anything.

https://openai.com/blog/review-completed-altman-brockman-to-continue-to-lead-openai

> Summary of WilmerHale review & findings
>
> On December 8, 2023, the Special Committee retained WilmerHale to conduct a review of the events concerning the November 17, 2023 removal of Sam Altman and Greg Brockman from the OpenAI Board of Directors and Mr. Altman’s termination as CEO. WilmerHale reviewed more than 30,000 documents; conducted dozens of interviews, including of members of OpenAI’s prior Board, OpenAI executives, advisors to the prior Board, and other pertinent witnesses; and evaluated various corporate actions.
>
> The Special Committee provided WilmerHale with the resources and authority necessary to conduct a comprehensive review. Many OpenAI employees, as well as current and former Board members, cooperated with the review process. WilmerHale briefed the Special Committee several times on the progress and conclusions of the review.
>
> WilmerHale evaluated management and governance issues that had been brought to the prior Board’s attention, as well as additional issues that WilmerHale identified in the course of its review. WilmerHale found there was a breakdown in trust between the prior Board and Mr. Altman that precipitated the events of November 17.
>
> WilmerHale reviewed the public post issued by the prior Board on November 17 and concluded that the statement accurately recounted the prior Board’s decision and rationales. WilmerHale found that the prior Board believed at the time that its actions would mitigate internal management challenges and did not anticipate that its actions would destabilize the Company. WilmerHale also found that the prior Board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners. Instead, it was a consequence of a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman. WilmerHale found the prior Board implemented its decision on an abridged timeframe, without advance notice to key stakeholders, and without a full inquiry or an opportunity for Mr. Altman to address the prior Board’s concerns. WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman, but also found that his conduct did not mandate removal.

bought Ṁ800 NO from 11% to 9%
bought Ṁ100 YES

The Musk suit filed today specifically alleges ("on information and belief", i.e. without any insider information) that the infamous Q* letter precipitated the firing of Sam Altman (musk-v-altman-openai-complaint-sf.pdf, courthousenews.com).

This means there is a high likelihood we will have sworn testimony available at some point regarding the veracity of this story, so this market should not have a deadline to resolve.

[edit: this was incorrect]

@JosephBarnes can we define some point in time and what evidence would count? I don't love letting this drag on for a year.

From "Sam Altman explains being fired and rehired by OpenAI" (The Verge):
The reports about the Q* model breakthrough that you all recently made, what’s going on there?

SA: No particular comment on that unfortunate leak. But what we have been saying — two weeks ago, what we are saying today, what we’ve been saying a year ago, what we were saying earlier on — is that we expect progress in this technology to continue to be rapid and also that we expect to continue to work very hard to figure out how to make it safe and beneficial. That’s why we got up every day before. That’s why we will get up every day in the future. I think we have been extraordinarily consistent on that.

Without commenting on any specific thing or project or whatever, we believe that progress is research. You can always hit a wall, but we expect that progress will continue to be significant. And we want to engage with the world about that and figure out how to make this as good as we possibly can.

It's also being reported today that another startup has made a comparable breakthrough: The ‘Magic’ Breakthrough That Got Friedman and Gross to Bet $100 Million on a Coding Startup — The Information

This question should resolve YES if there's no new evidence by April.

I don't think this is informative about this question. Q* might be real, and the leak would then just be evidence that Q* exists; but this question is about the letter to the board, and I believe the most recent reporting on that specifically (discussed below) is that it didn't happen.

What’s the spike about?

@Paul maybe https://openai.com/sora? It doesn't seem to resolve this market YES, but it's plausibly an update towards more surprising big advances in the pipeline.

As discussed above: I am extending the deadline to April 1, as it is clear there is still substantial uncertainty among traders, and I want to give time for more evidence to come out.

I will not postpone again, unless new evidence comes to light that makes waiting for additional evidence more valuable. Barring that, I will resolve to either YES or NO.

bought Ṁ25 of NO
predicted YES

@SteveSokolowski Looks like this is again referring to the 4chan letter, which was posted a day after the Reuters story and was probably based on it. @zedmelody pointed out that the 4chan letter was probably fake in this very thread a few days later. But it doesn't seem to tell us anything about the original Q* leak.

It is really hard to tell what "true" means in this context.

There will always be OpenAI leakers (and other evangelists) claiming that AGI is really just around the corner, so the news agencies' reporting is true in this sense. Whether such beliefs have any connection to actual reality is a much harder question (although very likely one ultimately answered "NO" for at least several decades into the future).

I don’t think I can bet on a market that so intensely depends on one person’s definition of a subjective adverb.

bought Ṁ80 of YES

Catching up late, but from what I see, the core claim made in the Reuters article is this:
> Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters

The rest seems to be extra interpretation and context added by Reuters. Even if that interpretation is incorrect, I don't think that makes the story substantively false, and at worst, it just makes it misleading, which still means a YES resolution.

The following claims are not core to this article, and are not even confidently asserted in it:

1. Q* is causally related to Sam's firing
2. The model actually could threaten humanity (as opposed to just the researchers thinking it could)

Everything we've seen thus far seems to indicate that the core claim is true. Why was this at 18%?

predicted NO

Where was the letter-to-the-board-about-Q* claim confirmed outside the Reuters article? There's The Verge denying it happened, and what else?

bought Ṁ100 of YES

@jacksonpolack The Verge and the OpenAI board have denied that Q* led to Altman being fired, but I don't think this is enough for a NO resolution, as this is just an interpretation on Reuters' part. The language Reuters uses about the causality here is intentionally weak:
- "A catalyst"
- "One factor"
- "precipitated"
I don't think that having this weak causal connection refuted means the article is substantively false.

predicted NO

Separately, a person familiar with the matter told The Verge that the board never received a letter about such a breakthrough and that the company’s research progress didn’t play a role in Altman’s sudden firing.

predicted YES

@jacksonpolack On the other hand, Sam and Mira pretty much confirmed (although with some weaselly language) that Q* exists.

predicted YES

@jacksonpolack Oh okay I must have missed that

predicted YES

That still makes it the word of one person familiar with the matter against two. I don't think that's conclusive.

bought Ṁ10 of NO

@Shump This is the part I doubt:

“Several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity”

a) This is a very strong claim that OpenAI researchers would not make unless there really was a major breakthrough toward AGI, of which there is no indication (Q* likely exists but does not seem to be very capable at this point)

b) Any such letter would likely have leaked by now

c) The letter seems to have been sent after Altman’s firing

predicted YES

@n1psey

a) There are already AI researchers who are worried that AI can threaten humanity. I'm sure that even a spark of planning capability can spook them.

b) Why? From this reporting, it's possible that only ~10 people even know of this letter's existence. Time Magazine managed to avoid leaks for Person of the Year, and there are hundreds of people keeping that secret.

c) The article very clearly says that it was sent before.

If this is a fake, how did they know that Q* exists? Their sources must be real OpenAI employees or people connected to them; otherwise they wouldn't know about that.

predicted NO

@Shump

a) I think being worried about this in general and in regard to a specific technology are very different

b) If the researchers really believed humanity was threatened and their concerns weren’t being taken seriously, they would surely go public. Going to the board is already a last-ditch effort to raise awareness within the company, and if the article is correct, that effort failed spectacularly: the board was fired as a result, and Altman, who is not known for being particularly worried about safety, is basically untouchable.

c) I know what the article says; that's my point, it's wrong. Other articles issued a correction that the letter was sent after the firing.

predicted NO

@n1psey And I think it’s fallacious to say that the leaker got one thing correct, therefore everything is correct. Anyone could have gotten wind of Q* and spun this entire fairy tale around it

predicted YES

@n1psey a) If you're worried about AI, wouldn't you be worried by AI advances?
b) No, going to the board is a much more logical choice, because the board can actually do something about it and will probably not fire you, whereas going public might get you fired. I think what happened is that the letter was written but it just didn't sway any board members' minds.
c) Which other articles? How can you both claim that the letter doesn't exist and that it was sent after?
d) I don't appreciate my arguments being strawmanned and then called fallacious. Here, I'll spell it out for you: since the Reuters article was the first to publicly mention Q*, their source must have been an insider. An insider is more likely to know the actual details than to make shit up. Of course, anyone can make shit up, but these are not just random rumors.

I think it's a bit silly to claim that the story is not consistent, honestly. Maybe it's still wrong, but denying it on consistency grounds is doubtful.

predicted NO

@Shump Why are the researchers just giving up after their letter to the board has the exact opposite of the intended effect? Humanity is at stake!