AI generated philosophy book worth reading by the end of 2030 (detailed scenario)?

This is an attempt to forecast a realistic scenario in which AI would actually be doing good (or at least interesting) philosophy by the end of 2030. Because of this aim, the market is very specific.

Gaston Bachelard was a French philosopher active in the first half of the 20th century. For the purposes of this market, what matters is that he wrote five books on the psychoanalysis of the elements, i.e. fire, water, earth and air (earth got two books for some reason). These books are a mix of relatively detailed scholarship and psychoanalytical interpretation. His goal throughout is to dissect the psychoanalytical meaning of images associated with the four elements in literary, philosophical and even scientific texts, showing how those images interacted with the development of objective knowledge (mostly by hindering it).

In his Psychoanalysis of Fire he mentions that in principle this work could be extended to substances other than the four elements, such as salt, wine or blood, or even to abstract concepts such as system, unity, etc.

This seems like a task an AI should be able to do, because it boils down to searching the literature for relevant quotes and commenting on them from a psychoanalytical angle. In fact, such a work might even be useful, or at least interesting to read.

In December 2030 I will use whatever AI is considered state of the art among those available to the public for a reasonable price (as GPT-4 is now) and ask it to write books along these lines. To spice things up, I will ask for a book on each element of the Chinese system (wood, fire, earth, metal, water) that is not already covered, so one on wood and one on metal. I will ask for each book to be as long (in word count) as the shortest of Bachelard's five books, and to follow his style.
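(Purely illustrative: a minimal sketch in Python of how the generation step could be set up. The client, model name, target word count and prompt wording are all assumptions; whatever state-of-the-art API exists in 2030 will look different.)

```python
# Hypothetical sketch of the generation step. The client, model name and
# prompt are placeholders; the actual 2030 state-of-the-art API is unknown.
from openai import OpenAI

TARGET_WORDS = 45_000  # assumption: rough length of Bachelard's shortest book

def draft_book(element: str) -> str:
    client = OpenAI()  # assumes an OpenAI-compatible endpoint and API key
    prompt = (
        f"Write a book-length essay (about {TARGET_WORDS} words) on the element "
        f"'{element}' in the style of Gaston Bachelard's psychoanalyses of the "
        "elements: collect images of this element from literary, philosophical "
        "and scientific texts, quote them with accurate references, and "
        "interpret them psychoanalytically."
    )
    response = client.chat.completions.create(
        model="hypothetical-2030-model",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# e.g. draft_book("wood") and draft_book("metal")
```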

I will then read the AI-generated books and check that they are factually correct (references should be to books or other works that actually exist, and quotations should be reported correctly) and subjectively interesting.
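(A sketch of how the existence part of the reference check could be partly automated, assuming the cited works are first extracted from the draft by hand. The Open Library search endpoint is just one possible lookup; a miss there is a flag for manual review, not proof of hallucination, and quotation accuracy still has to be checked by hand.)

```python
# Rough sketch of an existence check for cited works; it only flags
# candidates for manual review.
import requests

def work_seems_to_exist(title: str, author: str) -> bool:
    """Query the Open Library search API for a title/author pair."""
    resp = requests.get(
        "https://openlibrary.org/search.json",
        params={"title": title, "author": author, "limit": 1},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("numFound", 0) > 0

# Example citation list extracted from a draft (illustrative only)
citations = [("L'Eau et les rêves", "Gaston Bachelard")]
for title, author in citations:
    if not work_seems_to_exist(title, author):
        print(f"Possible hallucination: {title!r} by {author}")
```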

If both criteria are met, I will resolve yes.


I think there's potential for good* philosophy work following the recent Anthropic interpretability paper, even something simple such as analyzing concepts or offering surprising connections between them. Imagine focusing on a concept like "deceptiveness in humans". Yeah, I bet it could come up with things that are both true and interesting to read. I'm not sure they'd actually be useful, but that's not the bar here.

@YonatanCale Indeed, the bar for this question is quite low. The AI should never hallucinate anywhere in the book, and what it writes should be meaningful and relatively interesting while adhering to Bachelard's style. I expect this to be doable by 2030, barring a major setback in AI development due to regulation, financial disruption or war.