EuanMcLean
Yes, perfect causal closure is technically impossible, so it comes in degrees. My argument is that the degree of causal closure of possible abstractions in the brain is less than one might naively expect.
Are there any measures of approximate simulation that you think are useful here?
I have yet to read this, but I expect it will be very relevant! https://arxiv.org/abs/2402.09090
Especially if it’s something as non-committal as “this mechanism could maybe matter”. Does that really invalidate the neuron doctrine?
I agree that each of the “mechanisms that maybe matter” is tenuous by itself, but the argument I’m trying to make here is hits-based: there are so many mechanisms that might matter that the chance of at least one of them mattering in a relevant way is quite high.
Sorry for the delay. As both you and TheMcDouglas have mentioned: yea, this relies on $H(C|X) = 0$. The way I worded it above is somewhere between misleading and wrong; I’ve modified it. Thanks for pointing this out!
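For reference, the standard definition (with $C$ and $X$ as in the post): $H(C|X) = -\sum_x p(x) \sum_c p(c|x) \log p(c|x)$, which equals zero exactly when, for every $x$ with $p(x) > 0$, the distribution $p(c|x)$ puts all its mass on a single value of $c$, i.e. when $C$ is a deterministic function of $X$.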
Yea, I think you’re hitting on a weird duality between setting and erasing here. I think I agree that setting is more fundamental than erasing. I suppose that when we’re talking about the energy expenditure of computation, each set bit must be erased in the long run, so the two are interchangeable in that sense.
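For concreteness, the textbook form of Landauer’s bound: erasing one bit at temperature $T$ dissipates at least $k_B T \ln 2$ of energy, which at room temperature is $\approx 1.38 \times 10^{-23}\,\mathrm{J/K} \times 300\,\mathrm{K} \times 0.693 \approx 2.9 \times 10^{-21}\,\mathrm{J}$; so if every set bit must eventually be erased, the same minimum cost attaches to each set-and-erase cycle.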
I’m not saying anything about MCMC. I’m saying that random noise is not what I care about; the MCMC example doesn’t capture what I’m trying to get at when I talk about causal closure.
I don’t disagree with anything you’ve said in this comment, and I’m quite confused about how we’re able to talk past each other to this degree.
The most obvious examples are sensory inputs—vision, sounds, etc. I’m not sure why you don’t mention those.
Obviously algorithms are allowed to have inputs, and I agree that the fact that the brain takes in sensory input (and all other kinds of input) is not evidence against practical CF. The way I’m defining causal closure, the algorithm is allowed to take in some narrow band of inputs (narrow relative to, say, the dynamics of all the atoms in the atmosphere around the neurons). My bad for not making this more explicit; I’ve gone back and edited the post to make it clearer.
Computer chips have a clear sense in which they exhibit causal closure (even though they are allowed to take in inputs through narrow channels). There is a useful level of abstraction of the chip: the charges in the transistors. We can fully describe all the computations executed by the chip at that level of abstraction plus inputs, because that level of abstraction is causally closed from lower-level details like the trajectories of individual charges. If it weren’t, that level of abstraction would not be helpful for understanding the behavior of the computer: executions would branch conditional on specific charge trajectories, and it would be a rubbish computer.
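A toy illustration of this kind of closure (my own sketch in Python; the gate and thresholds are made up for illustration): the bit-level behavior of a NAND gate is fully determined by the bit-level inputs, no matter how the underlying charges jitter.

```python
import random

def charge_to_bit(charge):
    """The abstraction map: any charge above the threshold reads as 1."""
    return 1 if charge > 0.5 else 0

def nand_gate(charge_a, charge_b):
    """Stand-in for the microphysics: the output charge carries
    low-level jitter, but its bit-value depends only on the input bits."""
    out_bit = 1 - (charge_to_bit(charge_a) & charge_to_bit(charge_b))
    low, high = (0.8, 1.0) if out_bit == 1 else (0.0, 0.2)
    return random.uniform(low, high)

# Causal closure at the bit level: whatever the exact input charges,
# the bit-level behavior is fixed by the bit-level inputs alone.
for _ in range(1000):
    a = random.uniform(0.6, 1.0)  # any charge that reads as 1
    b = random.uniform(0.6, 1.0)
    assert charge_to_bit(nand_gate(a, b)) == 0  # NAND(1, 1) == 0, always
```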
random noise enters in
I think this is a big source of the confusion, and another case where I haven’t been clear enough. I agree that algorithms are allowed to receive random noise. What I am worried about is the case where the signals entering the abstraction from smaller length scales are systematic rather than random.
If the information leaking into the abstraction can be safely averaged out (say, we just define a uniform temperature throughout the brain as an input to the algorithm), then we can just consider this a part of the abstraction: a temperature parameter you define as an input or whatever. Such an abstraction might be able to create consciousness on a practical classical computer.
But imagine instead that (for the sake of argument) it turned out that high-resolution details of temperature fluctuations throughout the brain had a causal effect on the execution of the algorithm, such that the algorithm doesn’t do what it’s meant to do if you just take the average of those fluctuations. In that case, the algorithm is not fully specified at that level of abstraction, and whatever dynamics are important for phenomenal consciousness might be encoded in the details of the temperature fluctuations and not captured by your abstraction.
If you believe there exists “a map between the third-person properties of a physical system and whether or not it has phenomenal consciousness” you believe you can define consciousness with a computation.
I’m not arguing against the claim that you could “define consciousness with a computation”. I am arguing against the claim that “consciousness is computation”. These are distinct claims.
So, most people who take the materialist perspective believe the material world comes from a sort of “computational universe”, e.g. Tegmark IV.
Massive claim, nothing to back it up.
This person’s thinking is very loosey-goosey and someone needed to point it out.
when you define the terms properly (i.e. KL-divergence from the firings that would have happened)
I think I have a sense of what’s happening here. You don’t consider an argument precise enough unless I define things in more mathematical terms. I’ve been reading a lot more philosophy recently so I’m a lot more of a wordcell than I used to be. You are only comfortable with grounding everything in maths and computation, which is chill. But my view is that maths and computation are not the only symbols upon which constructive discussion can be built.
If you actually want to know the answer: when you define the terms properly (i.e. KL-divergence from the firings that would have happened), the entire paradox goes away.
I’d be excited to actually see this counterargument. Is it written down anywhere that you can link to?
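(For reference, the KL-divergence being invoked is the standard one: with $P$ the distribution over the firings that would have happened and $Q$ the distribution over firings under the abstraction, $D_{\mathrm{KL}}(P \| Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)}$, which is zero iff the abstraction reproduces the counterfactual firing statistics exactly.)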
Could you recommend any good (up-to-date) reading defending the neuron doctrine?
Is this the passage you’re referring to that means I’m “fundamentally misunderstanding computation”?
suppose we actually wanted to use a waterfall to help us calculate chess moves. [...] I conjecture that, given any chess-playing algorithm A that accesses a “waterfall oracle” W, there is an equally good chess-playing algorithm A′, with similar time and space requirements, that does not access W. If this conjecture holds, then it gives us a perfectly observer-independent way to formalize our intuition that the “semantics” of waterfalls have nothing to do with chess.
This boils down to the Chalmers response. He isn’t arguing that the waterfall couldn’t implement a single run-through of a chess game, but that it couldn’t robustly play many different chess games. I discuss the Chalmers response in the appendix and why I think it doesn’t fix the issue.
Thanks for clarifying :)
Yes.
Yes, the seed has a causal effect on the execution of the algorithm by my definition. As was talked about in the comments of the original post, causal closure comes in degrees, and in this case the MCMC algorithm is somewhat causally closed from the seed. An abstract description of the MCMC system that excludes the value of the seed is still a useful abstract description of that system—you can reason about what the algorithm is doing, predict the output within the error bars, etc.
In contrast, the algorithm is not very causally closed to, say, some function f() that is called a bunch of times on each iteration of the MCMC. If we leave f() out of our abstract description of the MCMC system, we don’t have a very good description of that system; we can’t work out much about what the output would be given an input.
If the ‘mental software’ I talk about is as causally closed to some biophysics as the MCMC is causally closed to the seed, then my argument in that post is weak. If, however, it’s only as causally closed to biophysics as our program is to f(), then it’s not very causally closed, and my argument in that post is stronger.
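To make the contrast concrete, here is a toy Metropolis sampler (my own illustration, not anything from the original post): the output barely depends on the seed, but is determined by the function f() the chain calls on every iteration.

```python
import math
import random

def f(x):
    """The log-density the chain evaluates every iteration. Leave this
    out of your description and you can't say much about the output."""
    return -x * x / 2.0  # standard normal target

def mcmc_mean(seed, n=20000):
    """Metropolis sampler with uniform proposals; returns the chain's mean."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n):
        proposal = x + rng.uniform(-1.0, 1.0)
        if rng.random() < math.exp(min(0.0, f(proposal) - f(x))):
            x = proposal
        total += x
    return total / n

# Near-closure with respect to the seed: every seed lands near the true
# mean of 0, within Monte Carlo error, so a description that omits the
# seed still predicts the output up to error bars. Omitting f() wouldn't.
print([round(mcmc_mean(s), 2) for s in (1, 2, 3)])
```

Swap the seed and the printed means agree within Monte Carlo error; delete f() from your description and you can no longer say what the chain computes.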
Hmm, yea this is a good counterexample to my limited “just take the average of those fluctuations” claim.
If my algorithm needs a pseudorandom float between 0 and 1, and I don’t have access to the particular PRNG the algorithm calls, I can replace it with a different PRNG in my abstract description of the MCMC. It won’t behave identically, but it will still run MCMC and give a correct answer.
To connect it to the brain stuff: say I have a candidate abstraction of the brain that I hope explains the mind, and say temperatures fluctuate in the brain between 38°C and 39°C. Here are three possibilities for how this might affect the abstraction (a toy sketch follows the list):
Maybe in the simulation we can just set the temperature to 38.5°C, and the simulation still correctly predicts the important features of the output. In this case, I consider the abstraction causally closed to the details of the temperature fluctuations.
Or maybe temperature is an important source of randomness for the mind algorithm. In the simulation, we need to set the temperature to 38+x°C where, in the simulation, I just generate x as a PRN between 0 and 1. In this case, I still consider the abstraction causally closed to the details of the temperature fluctuations.
Or maybe even doing the 38+x°C replacement makes the simulation totally wrong, such that it just doesn’t do the functions it’s meant to do. The mind algorithm doesn’t just need randomness; it needs systematic patterns that are encoded in the temperature fluctuations. In that case, to simulate the mind, we need to constantly call a function temp() which simulates the finer details of the currents of heat etc. throughout the brain. In my parlance, I’d say the abstraction is not causally closed to the temperature fluctuations.
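A schematic of the three cases in Python (entirely my own toy construction; the update rules and thresholds are hypothetical):

```python
import random

def case1_step(state):
    # Case 1: only a coarse feature of temperature matters, so we can
    # hard-code the average and the simulation stays correct.
    temp = 38.5
    return state + (1 if temp > 37.0 else -1)

def case2_step(state, rng):
    # Case 2: temperature is just a noise source, so a PRN x in [0, 1)
    # substitutes for the real fluctuation in 38 + x.
    temp = 38.0 + rng.random()
    return state + (1 if temp > 38.5 else -1)

def case3_step(state, temp_history):
    # Case 3: the dynamics read systematic patterns in the fluctuations
    # (here, a trend across time); neither an average nor generic noise
    # reproduces that, so the simulation must call temp() itself.
    return state + (1 if temp_history[-1] > temp_history[0] else -1)

rng = random.Random(0)
print(case1_step(0), case2_step(0, rng), case3_step(0, [38.2, 38.4, 38.7]))
```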