Will OpenAI release GPT-4 finetuning by Fall 2023?
Resolved NO on Dec 22

Resolves YES if GPT-4 is publicly available to be finetuned by 2023-12-21 19:19:00 PST. NO otherwise.

Context (tweet embed): "…with fine-tuning for GPT-4 coming this fall"

reposted, predicted YES

Time to revise my "never bet against OpenAI" heuristic...

  • From some preliminary results, GPT-4 fine-tuning seems to be much less effective than it was for previous models (possibly very sensitive to hyperparameters). Also, GPT-4 already follows the system prompt and few-shot examples very well (see the sketch after this list).

  • Fine-tuning based jailbreaks are still possible despite the new automated dataset moderation that already takes a very long time to process uploads.

  • "sir the GPUs are melting"

predicted YES

Not public. Primarily an invite-only program, or a researcher-only agreement.

It doesn't count as public just because "Contact Sales" is always an option if you have sufficiently large amounts of money; I'm not going to count that.

predicted NO

Resolves no?

Will it resolve YES if OpenAI releases fine-tuning only for GPT-4 Turbo? Because if it has to be the full GPT-4, my bet would be Ṁ40 on "No".

predicted YES

@31ff Anything in the GPT-4 series will count, including GPT-4 Turbo. I applied for GPT-4 finetuning and they outright deleted my application a week later... didn't even reject me, just deleted it. So I would currently resolve this NO.

Arb trading with https://manifold.markets/getby/will-gpt4-finetuning-be-available-b because I predict that whenever they release this, it will either be before Dec 21 or after Dec 31.

Unlikely if the entire team quits

predicted NO

From the announcement today:

GPT-4 fine tuning experimental access

We’re creating an experimental access program for GPT-4 fine-tuning. Preliminary results indicate that GPT-4 fine-tuning requires more work to achieve meaningful improvements over the base model compared to the substantial gains realized with GPT-3.5 fine-tuning. As quality and safety for GPT-4 fine-tuning improves, developers actively using GPT-3.5 fine-tuning will be presented with an option to apply to the GPT-4 program within their fine-tuning console.
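
For reference, the GPT-3.5 fine-tuning flow that this announcement builds on looks roughly like the sketch below via the OpenAI Python SDK (v1.x). Whether the GPT-4 experimental program exposes the same endpoints, and under what model name, is an assumption here.

```python
# Rough sketch of the existing GPT-3.5 fine-tuning flow (OpenAI Python SDK v1.x).
# Whether GPT-4 fine-tuning would use the same endpoints is an assumption.
from openai import OpenAI

client = OpenAI()

# 1. Upload a JSONL file of chat-formatted training examples.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job against a supported base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # a GPT-4 program would presumably accept e.g. "gpt-4" (hypothetical)
)

# 3. Check the job later; once finished, fine_tuned_model holds the new model id.
job = client.fine_tuning.jobs.retrieve(job.id)
print(job.status, job.fine_tuned_model)
```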

predicted NO

I’d say this is still not “publicly available” since it requires applying for a private program.

predicted YES

@AaronBreckenridge If I get access to GPT-4 finetuning (I'll apply if given the option), or if the public can use finetuned GPT-4 models even if they can't train them, those will definitely count for this market.

A EULA is fine, but if it's strictly B2B (the models can only be used in a non-personal organization) or you need to sign NDAs, that won't count. Has to be available to "non-affiliates" of OpenAI.

If it's partially released to a small group of members of the public, but getting in is a lucky find like a lootbox in a mobile game, that sounds like NO, or I'd resolve it 50%. I'll count it YES if at least the signup form is public, and there are reports of at least some non-affiliates getting access to it.

predicted NO

This description says nothing about using fine tuning.

“Releasing fine-tuning” would necessitate training per the tweet linked in the description: “fine-tuning lets you train…”

You also say it must be “publicly available to fine tune”.

@Mira It looks like you have a conflict of interest

predicted YES

@chrisjbillington just felt like this is probably NO and no one dares to contradict it

Not only would fine-tuning it be INCREDIBLY slow, but GPT-4 also has a LOT more capabilities. By the time they released GPT-3.5 finetuning, open source models had almost caught up to GPT-3.5. Allowing people to finetune GPT-4 would have implications that OpenAI themselves probably don't even fully understand yet. The applications that people could find would be both great and terrifying.

You guys understand that this model is >10x the size of the ChatGPT model, right? Their own in-house engineers can probably barely fine-tune this thing.

@jonsimon Well, the procedures can be the same, just slower.

predicted NO

@gigab0nus Computing gradients is quadratic in the number of parameters, so we're talking 100x slower. That's a lot slower.

predicted YES

@jonsimon You often don't fine-tune the whole network. But yes, it will be much slower. If they're announcing it, they will surely find a way to make the fine-tuning experience acceptable.

@jonsimon Why couldn't they use something like LoRA to massively save compute (and, just as importantly, the memory needed to keep multiple versions of the model loaded for inference)?

predicted NO

@TomPotter They probably could, but in general it's more practically complicated to add a LoRA adapter, especially for a mixture-of-experts style model like GPT-4.

Not saying it's not doable, but I am saying that you shouldn't expect it to be as straightforward as it is for GPT-3.5
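
To make the LoRA point concrete, here is a minimal PyTorch sketch (purely illustrative, nothing about OpenAI's internals): the pretrained weight is frozen and only a low-rank update B·A is trained, which is also why many adapters can share one loaded copy of the base model at inference time.

```python
# Minimal LoRA sketch in PyTorch: wrap a frozen linear layer with a trainable
# low-rank update W + (alpha/r) * B @ A. Illustrative only; real systems use
# libraries like peft and apply this to the attention projections.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the low-rank correction; only A and B receive gradients.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # ~65k trainable vs ~16.8M total
```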

@jonsimon Do you have a source for the quadratic cost of computing gradients? Doesn't backprop have approximately the same time complexity as the forward step?
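
As a rough sanity check on that question (a toy measurement, not a claim about GPT-4's actual training stack): for a dense network, the backward pass costs roughly a small constant multiple of the forward pass, i.e. it scales with parameter count much like the forward pass rather than quadratically.

```python
# Toy timing of forward vs. forward+backward for a stack of linear layers.
# Illustrative only: real transformer training differs, but backprop FLOPs are
# roughly ~2x the forward pass, not quadratic in the number of parameters.
import time
import torch
import torch.nn as nn

model = nn.Sequential(*[nn.Linear(2048, 2048) for _ in range(8)])
x = torch.randn(64, 2048)

def timeit(fn, reps=20):
    fn()                                  # warm-up run
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()                              # gradients accumulate, which is fine for timing
    return (time.perf_counter() - t0) / reps

fwd = timeit(lambda: model(x))
fwd_bwd = timeit(lambda: model(x).sum().backward())
print(f"forward: {fwd*1e3:.1f} ms, forward+backward: {fwd_bwd*1e3:.1f} ms")
```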
