Part 2: In 2023, will typical use cases that people envision for A.I. fail to come to pass?
Resolved as 15% on Jan 2

On January 4th, 2023, Term Sheet, a well-known financial newsletter from Fortune (typically covering PE/VC), published a series of predictions for the calendar year 2023.

One of these predictions was the following:

“I think a lot of boring philosophy is going to become important next year—like what is the meaning of truth and how do we know things… You have two problems [stemming from generative A.I. and large language models] that I think we’re going to face next year. One is that the amount of lies and bullshit that we are subjected to is going to increase exponentially, just because bad actors will use A.I. to generate all sorts of horrible garbage. But the second problem is that most of the use cases that people are thinking about—what A.I.s are going to solve—aren’t actually going to come to pass until we get a pretty different type of technology that is capable of actually reasoning, rather than just auto-completing words… It’s going to force people to think about epistemology and stuff that investors haven’t thought about. College philosophy majors will become employable.” —Phil Libin, co-founder and CEO, All Turtles and mmhmm

I will not attempt to define all resolution criteria for this market up front; instead, I will handle any nuances, complications, or data-feasibility issues as they arise. If by the end of 2023 I do not believe this market can be confidently resolved in the spirit in which it was intended, I reserve the right to resolve it as "N/A".

Any clarifications to the resolution criteria will be listed below, along with the applicable date:

  • [TBU]



Open to being persuaded here, but my sense has been that, while there's been some homing in on AI use cases, the typical things people thought would be useful from LLMs hadn't changed much by EOY 2023.

bought Ṁ75 of NO

I think there's a strong "it's happening" bias on this site at large (let me know if there's a better term). Most things that seem like they'll be dramatic usually aren't, or at least take more than a year.

bought Ṁ5 of NO

Unconfident no: it might not be safe enough for deployment by then. But medical AI will be an incredible change when it's ready, and it's already showing its worth now.
