Will someone refute Angus Fletcher's proof that computers cannot read (or write) literature by 2024?
Resolved N/A (Dec 28)

Link to the full paper:

https://muse.jhu.edu/article/778252

The logical proof in the paper:

1. Literature has a rhetorical function.

2. Literature's full rhetorical function depends on narrative elements.

3. Narrative elements rely on causal reasoning.

4. Causal reasoning cannot be performed by machine-learning algorithms because those algorithms run on the CPU's Arithmetic Logic Unit, which is designed to run symbolic logic, and symbolic logic can only process correlation.

QED: Computers cannot perform the causal reasoning necessary for learning to use literature.

The following seems to be the most relevant part of the paper:

"""[T]here's one feature of human learning that computers are incapable of copying: the power of our synapses to control the direction of our ideas. That control is made possible by the fact that our neurons fire in only one direction, from dendrite to synapse. So when our c synapse creates a connection between neuron A and neuron C, the connection is a one-way A → C route that establishes neuron A as a (past) cause and neuron C as a (future) effect. It's our brain thinking: "A causes C."

This physiological mechanism is the source of our human powers of causal reasoning. And it cannot be mimicked by the computer Arithmetic Logic Unit. That unit (as we saw above) is composed of syllogistic logic gates that run mathematical equations of the form of "A = C." And unlike the A → C connections of our synapses, the A = C connections of the Arithmetic Logic Unit are not one-way routes. They can be reversed without altering their meaning: "A = C" means exactly the same as "C = A," just as "2 + 2 = 4" means exactly the same as "4 = 2 + 2," or "Bob is that man over there" means exactly the same as "That man over there is Bob."

Such reversibility is incompatible with causal reasoning. A → C is not interchangeable with C → A any more than fire causes smoke is interchangeable with smoke causes fire. The first is an established rule of physics; the second, a wizard's recipe. And so it is that, as the Turing Award–winning computer scientist Judea Pearl has shown in The Book of Why, the closest that the A = C brains of computers can get to causal reasoning is "if-then" statements:

If Bob bought this toothpaste, then he will buy that toothbrush.

If this route has a traffic jam, then the other route will be faster.

If this chess move is played, then ninety-five percent of possible outcomes are victory.

If-then statements like these make up the bulk of Artificial Intelligence. And they do a good job of simulating causal reasoning. So good, in fact, that we humans tend to conflate the two in our ordinary speech. When we say, "if you're a smoker, then you're more likely to get lung cancer," we usually mean that smoking causes cancer. We're using "if-then" as a synonym for "cause-and-effect."

But cause-and-effect and if-then are not synonyms. Cause-and-effect encodes the why of causation, while if-then encodes the that-without-why of correlation. To take the example above, Bob buying toothpaste is correlated with him buying a toothbrush. But it doesn't cause him to buy a toothbrush. What causes Bob to buy a toothbrush is a third factor: wanting clean teeth."""
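
A toy sketch of that last point (my own illustration, not from the paper): a hidden common cause, wanting clean teeth, makes the two purchases correlated even though neither purchase causes the other.

```python
# Toy simulation (illustrative only): a hidden common cause ("wants clean
# teeth") correlates toothpaste and toothbrush purchases even though neither
# purchase causes the other.
import random

random.seed(0)

def simulate(n=100_000):
    paste = brush = both = 0
    for _ in range(n):
        wants_clean_teeth = random.random() < 0.5    # hidden confounder
        buys_toothpaste = random.random() < (0.9 if wants_clean_teeth else 0.1)
        buys_toothbrush = random.random() < (0.9 if wants_clean_teeth else 0.1)
        paste += buys_toothpaste
        brush += buys_toothbrush
        both += buys_toothpaste and buys_toothbrush
    print(f"P(toothbrush)              = {brush / n:.2f}")
    print(f"P(toothbrush | toothpaste) = {both / paste:.2f}  <- correlation, no causal arrow")

simulate()
```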

Clarifications:

1. As I understand it, the claim is more precisely about some literature (in practice, almost all of it), not all literature. E.g. conceivably some allegory could simply encode a logical argument, but such a case wouldn't count as a counterexample.

2. The claim is that AI cannot learn to increase its efficiency at generating text that humans recognize as coherent characters and plots insofar as this requires causal thinking irreducible to correlations; sufficiently short or plagiarized texts might not require this, but they don't count as counterexamples.

Resolution criteria: my judgement of whether any argument presented in the comments demonstrates that the argument in the paper is unsound. I might resolve N/A if I decide I'm unable to adequately judge some proposed argument.

Feb 28, 5:45pm: Will someone refute Angus Fletcher's proof that computers cannot read (or write) literature? → Will someone refute Angus Fletcher's proof that computers cannot read (or write) literature by 2024?


There's a lot to unpack to explain the proof and address the comments, and I have my own issues with the argument on top of that. In the end, I think the resolution would most likely not be clear-cut. Resolving N/A.

I don't think Angus Fletcher understands causal inference as well as he pretends to. To refute this, we need to take a deeper look at the difference between causal reasoning and if-thens.

First, nothing about the way CPUs work gives them fundamentally different theoretical (as opposed to practical) limitations, compared to neurons, on what kinds of computation they can perform.

[T]here's one feature of human learning that computers are incapable of copying: the power of our synapses to control the direction of our ideas. That control is made possible by the fact that our neurons fire in only one direction, from dendrite to synapse. So when our synapse creates a connection between neuron A and neuron C, the connection is a one-way A → C route that establishes neuron A as a (past) cause and neuron C as a (future) effect. It's our brain thinking: "A causes C."

This literally describes an if-then logic gate. If A fires, then C will fire. You can do the exact same thing in a computer. Fletcher conflates two things about this synaptic connection. The first is that we can say that "A caused C", which is true. The second is "This connection means that your brain thinks A caused C", which is not. Your brain might be encoding a correlation, or just something completely false. A computer works the same way. If I have a binary bit B, and then I flip another binary bit D based on B's state, I can say that "B caused D", but I can't say "This means that the computer thinks B caused D", because, as Fletcher points out, if-thens and causal statements are not the same. So what is missing for an if-then statement to become causal?
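
To make that concrete, here's a toy sketch (my own, added purely for illustration): the one-way connection Fletcher describes is literally just an if statement, and the if statement by itself says nothing about whether the A–C link is causal, merely correlational, or simply false.

```python
# Toy sketch: Fletcher's one-way A -> C "synapse" as an if statement.
# Nothing about this code says whether the A-C link is causal; it only
# propagates a signal in one direction.
def fire(a: bool) -> bool:
    c = False
    if a:          # "if neuron A fires, then neuron C fires"
        c = True
    return c

print(fire(True), fire(False))   # True False
```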

What's missing is the counterfactual. The standard theory of causal inference defines the effect of A on C as (C | do(A=1)) - (C | do(A=0)): what C would be if A were forced to occur, minus what C would be if A were forced not to occur. Note that this is still something that is easily encodable in a computer. I just tell it to add x to C, or set the value of C to y, whenever A has occurred. Let me try to rephrase that, because this point is important. I can use ifs to encode any kind of causal relationship. That's because ifs, while not necessarily causal, can be causal if they correspond to a counterfactual.
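
Here is a minimal sketch of that encoding (my own toy model; the do() helper is just a hypothetical name for "force A to a value and re-run the mechanism"): the structural equation is an ordinary assignment guarded by an if, and the counterfactual contrast is the model evaluated under two forced values of A.

```python
# Toy structural model (illustrative only): "A causes C" encoded with an if,
# plus an intervention that forces A and re-runs the mechanism.
def model(a: int) -> int:
    # structural equation, written as an if: if A fires, C fires
    c = 1 if a == 1 else 0
    return c

def do(a_value: int) -> int:
    # intervention: set A to a_value regardless of how A would normally arise
    return model(a_value)

effect_of_A_on_C = do(1) - do(0)    # (C | do(A=1)) - (C | do(A=0))
print(effect_of_A_on_C)             # 1: in this model, A causes C
```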

Encoding causal statements is easy. The tricky part in causality is causal inference. Actually reasoning and learning about counterfactuals turns out to be quite hard, because of the fundamental problem of causal inference and all that. However, it is possible, and nowadays humans almost exclusively use computers to do this causal reasoning. Check out this library as an example. It is absolutely possible to do proper causal reasoning on a computer. It does require humans to code some assumptions into the computer, but I don't see how that changes the fact that computers can do this.
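
For concreteness, here is a toy version of the kind of computation such a library automates (my own sketch; it is not the linked library, it just implements the backdoor adjustment directly): simulate confounded observational data, code in the human assumption "adjust for Z", and recover the true effect that the naive correlation overstates.

```python
# Toy causal inference sketch (illustrative only): recover the causal effect
# of A on C from observational data confounded by Z, using the backdoor
# adjustment formula. The causal assumption "Z is the only confounder" is
# supplied by the human; the computer does the rest.
import random

random.seed(1)

TRUE_EFFECT = 0.3  # in this simulated world, A raises P(C) by exactly 0.3

rows = []
for _ in range(200_000):
    z = random.random() < 0.5                    # confounder Z
    a = random.random() < (0.8 if z else 0.2)    # Z influences A
    c = random.random() < (0.5 if z else 0.1) + (TRUE_EFFECT if a else 0.0)
    rows.append((z, a, c))

def p_c(select):
    # P(C=1) among rows matching select(z, a)
    sel = [c for (z, a, c) in rows if select(z, a)]
    return sum(sel) / len(sel)

# Naive correlation: P(C | A=1) - P(C | A=0), biased upward by Z
naive = p_c(lambda z, a: a) - p_c(lambda z, a: not a)

# Backdoor adjustment: sum over z of [P(C | A=1, Z=z) - P(C | A=0, Z=z)] * P(Z=z)
p_z1 = sum(z for (z, a, c) in rows) / len(rows)
adjusted = sum(
    (p_c(lambda z, a, zv=zv: a and z == zv)
     - p_c(lambda z, a, zv=zv: (not a) and z == zv)) * w
    for zv, w in ((True, p_z1), (False, 1.0 - p_z1))
)

print(f"naive correlation estimate : {naive:.2f}")     # roughly 0.54
print(f"backdoor-adjusted estimate : {adjusted:.2f}")  # roughly 0.30 = TRUE_EFFECT
```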

After reading the market description, I really don't understand why

Causal reasoning cannot be performed by machine-learning algorithms because those algorithms run on the CPU's Arithmetic Logic Unit, which is designed to run symbolic logic, and symbolic logic can only process correlation.

would be true. I believe any reasoning I can do can also be done by computers (in theory). All this talk about how neurons process information just sounds totally irrelevant. It's like saying "Our brain doesn't contain cakes, therefore I cannot reason about cakes". It makes no sense.

It's also a little confusing how you first write

QED: Computers cannot perform the causal reasoning necessary for learning to use literature.

As if it's something impossible to do, but then write

The claim is that AI cannot learn to increase its efficiency at generating text that humans recognize as coherent characters and plots insofar as this requires causal thinking irreducible to correlations

Why would increasing its efficiency be relevant? I thought the question was about whether it can reason about causality at all? Humans can't increase the efficiency of their own brain much either, but I think they can do causal thinking.


How will you evaluate a proposed refutation for the purposes of resolution? To me this seems like a pretty amateurish argument that hinges on a distinction without a difference (causality IS persistent correlation), but I expect a refutation to be something more than this.

@mariopasquato Also, will you take empirical proof (e.g. a sonnet written by chatGPT) as a refutation?

@mariopasquato Sorry, I didn't see the comments before.

The description answers your question about empirical proof (no).

Causality is not persistent correlation. There are two equivalent definitions of causality:

  1. Counterfactual dependence: A variable A is said to cause another variable B if changing A, while holding all other factors constant, would lead to a change in B.

  2. Intervention: A causes B if an intervention that changes A while holding everything else constant results in a change in B.
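
In standard notation (a formalization I'm adding for clarity, not a quote; A is the candidate cause, B the outcome, u an individual unit):

```latex
% Counterfactual dependence vs. intervention, and how both differ from
% bare conditioning (correlation):
\[
  \text{counterfactual:}\quad B_{A=1}(u) \neq B_{A=0}(u)
  \qquad
  \text{intervention:}\quad \mathbb{E}[B \mid \mathrm{do}(A=1)] \neq \mathbb{E}[B \mid \mathrm{do}(A=0)]
\]
\[
  \text{correlation only:}\quad \mathbb{E}[B \mid A=1] \neq \mathbb{E}[B \mid A=0]
\]
```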

@na_pewno I guess we have a philosophical disagreement about causality here :-). I am not sure we want to get into discussing that as it may not be relevant to the question. But I can’t resist so… counterfactuals are always imagined, by definition they are not factual, they don’t really happen - we experience reality exactly once; similarly “intervention” is just an arbitrary name we attach to certain events that leads us to imagine a counterfactual (because if I intervene out of “free will” I might have chosen not to intervene instead). Metaphysics isn’t bad per se, so it’s fine to do this; but we must recognize that causality is additional structure we impose onto the data, there is no causality in the data, only persistent correlations - for all we know the Sun may rise in the West tomorrow. At any rate if you and I can make the jump from experiential data to the abstraction of causality (by e.g. dreaming up counterfactuals) I don’t see why an automated system shouldn’t. There is lots of literature in machine learning on imposing causal constraints and on causality in general, and certainly there is no consensus that this should be impossible.

@na_pewno Do you think his argument implies that a conventional computer cannot simulate biological neurons? I would say so based on a first reading. Is this suggested to be a fundamental limitation (e.g. physics being non-computable)?
