By 2024 will any AI be able to watch a movie and accurately tell you what is going on? (Based on Vincent Luczkow's 2029 market)
Resolved NO on Jan 1

Will Vincent's market resolve YES before January 2024?
If for some reason Vincent's market resolves in a way that is obviously wrong (for example, if he resolves it by accident), or if he is unable to resolve the market, then I'll use my own judgment.

Seems like this resolves NO, unless @vluzko decides that the GPT + vision thing counts as being able to do this.

predicted YES

@VictorLevoso I am not going to resolve the 2029 market based on the GPT + vision thing

@vluzko Ok, thanks. Resolving NO then.

For people asking for clarification about what is needed, here is the original suggestion from the linked New Yorker article. TL;DR: can watch an arbitrary TV program or YouTube video and answer questions about its content. Examples given: war documentaries, Breaking Bad, the Simpsons, Cheers.

I'm not Gary or Vincent, but I'm going to go ahead and posit that this ability won't be acknowledged as achieved by Gary at least, unless it can be done in real-time, with the AI listening to the audio, as a human would, without subtitles. Arbitrary videos don't have subtitles, in any case.

As I have learned from two decades of work as a cognitive scientist, the real value of the Turing Test comes from the sense of competition it sparks amongst programmers and engineers. So, in the hope of channelling that energy towards a project that might bring us closer to true machine intelligence—and in an effort to update a sixty-four-year-old test for the modern era—allow me to propose a Turing Test for the twenty-first century: build a computer program that can watch any arbitrary TV program or YouTube video and answer questions about its content—“Why did Russia invade Crimea?” or “Why did Walter White consider taking a hit out on Jessie?” Chatterbots like Goostman can hold a short conversation about TV, but only by bluffing. (When asked what “Cheers” was about, it responded, “How should I know, I haven’t watched the show.”) But no existing program—not Watson, not Goostman, not Siri—can currently come close to doing what any bright, real teenager can do: watch an episode of “The Simpsons,” and tell us when to laugh.

predicted NO

In order for this to be fair, it should probably be verified with a brand-new film shown at Sundance or similar, where IMDb etc. does not already have a synopsis.

What length? A feature film?

I mean... can't it already do this if you just upload every 10th frame from the film into GPT-vision?

predicted NO

@benshindel Not if sound or dialogue is important to understanding a movie
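
For anyone curious, the frame-sampling approach mentioned above is easy to sketch. This is a minimal illustration only, assuming the OpenAI Python client and OpenCV; the model name ("gpt-4-vision-preview"), the frame stride, and the file name are placeholders, and, as noted, audio and dialogue are simply dropped.

```python
import base64

import cv2  # pip install opencv-python
from openai import OpenAI  # pip install openai

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def sample_frames(path: str, every_n: int = 240, limit: int = 20) -> list[str]:
    """Grab every Nth frame as a base64-encoded JPEG, capped to keep the request small."""
    cap = cv2.VideoCapture(path)
    frames, i = [], 0
    while len(frames) < limit:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            ok_enc, buf = cv2.imencode(".jpg", frame)
            if ok_enc:
                frames.append(base64.b64encode(buf.tobytes()).decode("utf-8"))
        i += 1
    cap.release()
    return frames


def ask_about_movie(path: str, question: str) -> str:
    # Send the question plus the sampled frames in one multimodal request.
    content = [{"type": "text", "text": question}]
    for b64 in sample_frames(path):
        content.append(
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}
        )
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",  # placeholder model name
        messages=[{"role": "user", "content": content}],
        max_tokens=500,
    )
    return resp.choices[0].message.content


# ask_about_movie("movie.mp4", "What is going on in this film?")  # hypothetical file
```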

To be clear, this resolves YES if Vincent resolves YES after 2024 based on an AI that came out before 2024.

predicted NO

I presume for positive resolution a movie whose plot is not in the training set will need to be used?

bought Ṁ10 YES from 29% to 30%
predicted NO

@firstuserhere This is impressive! It seems this market will need some clarifications, like what counts as a “movie” (minimum length, does it need to be released in theaters, does it need to work in multiple genres? For example, maybe a nature clip where sound is not important is easy, but the average person hears “movie” and thinks of a feature film with lots of dialogue that is important to understanding the plot).

@JoshuaHedlund So my criterion for this one is basically "whatever Vincent Luczkow's criterion is, unless it's straightforwardly wrong," so you'll have to ask on the original market.

I was thinking, let's try with "Mulholland Drive", but there will be spoilers in the training set.

Ugh, video+text GPT-N will totally be able to do this, but I can't buy YES in case the market needs my judgment to resolve.

@VictorLevoso Like maybe it takes longer than this (and now that @firstuserhere bought YES up to 50% I don't think I would buy more YES), especially if OpenAI is willing to take longer to publish a video GPT or whatever, but it also seems like something that could come out around December, for example, and I wouldn't be that surprised.
I mean, GPT-4 can already answer questions frame by frame.
Actually, I guess figuring out how to mix new modalities sounds more like something DeepMind does first.
Maybe they release Flamingo 2 with video now.
I guess maybe we get a short-video version first and it takes one more year until we get something like that.
But at the same time, it seems like you should be able to cobble together something that resolves the market using a shorter-context version, roughly as sketched below.
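
The "cobble something together with a shorter-context version" idea would roughly be a map-reduce over the film: caption each short clip independently with whatever short-context model fits, then answer questions from the stitched-together captions. This is purely a hypothetical sketch; caption_segment and answer_from_notes are placeholder functions, not real APIs.

```python
from typing import Callable


def describe_movie(
    segment_paths: list[str],                      # the movie pre-split into short clips
    caption_segment: Callable[[str], str],         # placeholder: short-context video -> text
    answer_from_notes: Callable[[str, str], str],  # placeholder: text-only QA over the notes
    question: str,
) -> str:
    # Map step: caption each clip independently so no single call needs a long context.
    notes = [
        f"Segment {i + 1}: {caption_segment(clip)}" for i, clip in enumerate(segment_paths)
    ]
    # Reduce step: answer the question from the concatenated segment notes.
    return answer_from_notes("\n".join(notes), question)
```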

@VictorLevoso Turns out the video stuff was slower than I thought, I think partly because people can't buy enough GPUs and OpenAI was likely more focused on perfecting images than on video.

I see timelines, I push the up arrow.

@citrinitas I wish I could say that is not a good strategy that's going to get you a ton of M$.
