The causal structure of a Turing machine simulating a human brain is very different from that of an actual human brain.
This statement contravenes universal computability, and is therefore false. A universal computer can instantiate any other causal structure. Remember: the causal structure at the substrate level is irrelevant due to the universality of computation. Causal structures can be embedded within other causal structures (multiple realizability).
My statement does not contravene universal computability since I’m assuming a Turing machine can simulate a human brain. Let me try another approach: Look at the space-time diagram of a Turing machine adding two numbers and compare with the space-time diagram of a neuron performing a similar summation. The causal structures in the space-time diagrams are very different. Yes, you can simulate a causal structure, but this is not the same thing as the causal structure of the underlying physical substrate performing the simulation.
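To make the contrast concrete, here is a toy sketch of my own construction (the machine and its rules are illustrative, not anyone's actual model): a minimal Turing-style unary adder whose space-time diagram is a long chain of serial, head-local events, quite unlike the single massively parallel analog summation at a dendritic tree.

```python
def tm_add_unary(tape):
    """Run a toy unary adder (e.g. "11+111" -> "11111"); return one tape
    snapshot per step, with the head's cell bracketed -- a crude
    space-time diagram."""
    tape, pos = list(tape), 0
    snap = lambda: "".join(f"[{c}]" if i == pos else c
                           for i, c in enumerate(tape))
    history = [snap()]
    # Phase 1: scan right to the '+' and overwrite it with a '1'.
    while tape[pos] != "+":
        pos += 1
        history.append(snap())
    tape[pos] = "1"
    history.append(snap())
    # Phase 2: scan right to the last cell and erase its '1'.
    while pos < len(tape) - 1:
        pos += 1
        history.append(snap())
    tape[pos] = "_"
    history.append(snap())
    return history

diagram = tm_add_unary("11+111")
for row in diagram:
    print(row)
# The final tape encodes 2 + 3 = 5, but only after many serial,
# head-local steps; every row depends causally only on the cell
# under the head in the previous row.
```

The point of the diagram is visual: the causal structure is a thin serial thread through space-time, whatever function is being computed.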
It can be simulated because anything can be simulated!
Anything can be simulated imperfectly. Take the weather or the C. elegans nervous system.
Are there any empirical predictions where your viewpoint disagrees with functionalism?
I’m just exhibiting skepticism over claims from machine functionalism relating to Turing (and related) machine consciousness. I’m not promoting a specific viewpoint.
I predict that within a decade or two, computers running at about 10^14 ops/sec will run human mind simulations, and these sims will pass any and all objective tests for human intelligence, self-awareness, consciousness, etc.
There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.
People will just accept that sims are conscious/self-aware for the exact same reasons that we reject solipsism.
Have we rejected solipsism? Certainly panpsychism is consistent with it and this appears untouched in consciousness research.
My statement does not contravene universal computability since I’m assuming a Turing machine can simulate a human brain.
Well, if you assume that, then you are already most of the way to functionalism, but I suspect we may be talking about different types of simulations.
Let me try another approach: Look at the space-time diagram of a Turing machine adding two numbers and compare with the space-time diagram of a neuron performing a similar summation.
Neurons perform analog summation, so the space-time diagram or causal structure is stochastic/statistical rather than deterministic (addition over real-number distributions rather than digital addition). My use of the term ‘simulation’ encompasses probabilistic simulation, which entails matching the statistical distribution over state transitions rather than deterministic simulation.
Anything can be simulated imperfectly. Take the weather or the C. elegans nervous system.
Neural analog computational systems can be simulated perfectly in a probabilistic sense when you can recreate the exact conditional probability distributions that govern spike events. You can’t necessarily predict the exact actions the brain will output (due to noise effects), but you can—in theory—predict actions from the exact correct distribution. At the limits of simulation we can predict exact samples from our multiverse distribution, rather than predict the exact future of our particular (unknowable) branch.
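A minimal sketch of what I mean (the neuron model and all numbers are mine, purely illustrative): two independently seeded simulators of the same stochastic neuron. Neither reproduces the other's exact spike train, yet both sample from the identical conditional spike distribution, which is the sense in which the simulation is "perfect".

```python
import math
import random

def spike_prob(v):
    # Conditional spike probability given membrane drive v (a sigmoid);
    # this stands in for the "exact conditional distribution" above.
    return 1.0 / (1.0 + math.exp(-v))

def run(seed, drive=0.5, steps=100_000):
    rng = random.Random(seed)
    return [rng.random() < spike_prob(drive) for _ in range(steps)]

brain = run(seed=1)
sim = run(seed=2)
# The exact spike trains differ (different noise realizations)...
print(brain != sim)
# ...but the empirical firing rates agree closely, because both are
# samples drawn from the same distribution.
rate_brain = sum(brain) / len(brain)
rate_sim = sum(sim) / len(sim)
print(abs(rate_brain - rate_sim))  # shrinks toward 0 as steps grows
```

The different seeds play the role of the unknowable noise in our particular branch: prediction of the exact trajectory fails, prediction of the distribution succeeds.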
Simulation of intelligent minds is fundamentally different from weather simulation—for the weather we are interested in the exact outcome in our specific universe. That would be comparable to simulating the exact thoughts of a particular human mind in some situation—which in general is computationally intractable (and unimportant for AI).
There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.
Science is concerned with objective reality. A definition of consciousness which precludes objective testing is outside the realm of scientific inquiry at best, and pseudo-science at worst.
In common usage the term consciousness refers to objective reality. Sentences of the form “I was conscious of X”, or “Y rendered Bob unconscious”, or “Perhaps at a subconscious level” all suggest a common meaning involving objectively verifiable computations.
We know that consciousness is the particular mental state arising from various computations coordinated across some hundreds of major brain regions. We know that certain drugs can cause loss of consciousness even while neural activity persists. Consciousness depends on precise synchronized coordination between major brain circuits—a straightforward result of the brain being a hybrid digital/analog computer.
We aren’t so far away from being able to objectively detect consciousness via brain scanning and some form of statistical inference—see this interesting work for example (using a clever compressibility or Kolmogorov-complexity perturbation measure).
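The referenced work isn't reproduced here, but measures of this family typically reduce to a Lempel-Ziv-style compressibility score computed over binarized brain responses: stereotyped, highly synchronized activity compresses well (low complexity), while differentiated, integrated activity does not. A minimal sketch (the binary "responses" below are toy stand-ins, not real data):

```python
import random

def lz_complexity(s):
    """Number of phrases in a Lempel-Ziv (1976-style) parsing of s:
    each phrase is the shortest prefix not already seen earlier in
    the string. Low counts mean highly compressible input."""
    n, i, c = len(s), 0, 0
    while i < n:
        l = 1
        # Grow the phrase while s[i:i+l] already occurs in the prefix.
        while i + l < n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

# A stereotyped (periodic) binarized response vs. a differentiated one.
stereotyped = "01" * 32
rng = random.Random(0)
differentiated = "".join(rng.choice("01") for _ in range(64))
print(lz_complexity(stereotyped))     # small: the pattern compresses
print(lz_complexity(differentiated))  # larger: little structure to exploit
```

A perturbation-based measure of this kind would stimulate the brain, binarize the evoked spatiotemporal response, and score it this way, the idea being that conscious states yield responses that are both widespread and incompressible.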
Neurons perform analog summation, so the space-time diagram or causal structure is stochastic/statistical rather than deterministic
Surely you realize that quibbling over the use of analog vs digital neural summation in my toy example does not address my main argument.
Neural analog computational systems can be simulated perfectly in a probabilistic sense
Anything can be simulated perfectly (and trivially) in a probabilistic sense.
There are no objective tests for consciousness. Of course you can re-define it in terms of self-awareness but this is not the same.
A definition of consciousness which precludes objective testing is outside the realm of scientific inquiry at best, and pseudo-science at worst.
If we knew the basis for consciousness, we would have objective tests. It’s possible that studying the brain’s structural and connectional organization in detail will provide the clues we need to develop better informed opinions about the basis of consciousness.
This is my final post and I would like to thank everyone for the discussion. If anyone is interested in developing autotracing and autosegmentation programs for connectomics and neural circuit reconstruction in whole-brain volume electron microscopy datasets, please email me at brainmaps at gmail dot com or visit http://connectomes.org for more information. Thanks again.