Sure, “everything is a cluster” or “everything is a list” is as right as “everything is emergent”. But what’s the actual justification for pruning that neuron? You can prune everything like that.
> Great! This text by Yudkowsky has convinced me that the Philosophical Zombie thought experiment leads only to epiphenomenalism and must be avoided at all costs.
Do you mean that the original argument that uses zombies leads only to epiphenomenalism, or that if zombies were real that would mean consciousness is epiphenomenal, or what?
> Sure, “everything is a cluster” or “everything is a list” is as right as “everything is emergent”. But what’s the actual justification for pruning that neuron? You can prune everything like that.
The justification for pruning this neuron seems to me to be that if you can explain basically everything without using a dualistic view, it is so much simpler. The two hypotheses are possible, but you want to go with the simpler hypothesis, and a world with only (physical properties) is simpler than a world with (physical properties + mental properties).
I would be curious to know what you think about my box trying to solve the meta-problem.
> Do you mean that the original argument that uses zombies leads only to epiphenomenalism, or that if zombies were real that would mean consciousness is epiphenomenal, or what?

Both.
> The justification for pruning this neuron seems to me to be that if you can explain basically everything without using a dualistic view, it is so much simpler. The two hypotheses are possible, but you want to go with the simpler hypothesis, and a world with only (physical properties) is simpler than a world with (physical properties + mental properties).
Argument needed! You cannot go from “H1 asserts the existence of more stuff than H2” to “H1 is more complex than H2”. Complexity is measured as the length of the program that implements a hypothesis, not as the # of objects created by the hypothesis.
The argument goes through for Epiphenomenalism specifically (because you can just get rid of the code that creates mental properties) but not in general.
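To make the program-length point concrete, here is a minimal toy sketch (the two world-programs and their rule are invented for illustration, not taken from the post): a short program can create as many objects as you like, and the epiphenomenalist extra is exactly the code you can delete without changing any prediction.

```python
# Toy model: a hypothesis is a program; its complexity is the program's length,
# not the number of objects the program creates when it runs.

def physics_only_world(steps):
    """H2: a few lines of rule, yet it creates as many objects as you ask for."""
    world = [1]
    for _ in range(steps):
        world.append(sum(world) % 7)  # one simple physical rule
    return world

def epiphenomenal_world(steps):
    """H1: the same physics, plus extra code that attaches 'mental properties'
    which never feed back into the physics."""
    world = [1]
    qualia = []                        # extra ontology, causally inert
    for _ in range(steps):
        world.append(sum(world) % 7)                 # identical physical rule
        qualia.append(("experience-of", world[-1]))  # never used anywhere below
    return world

# Identical observable output, but the epiphenomenal program is strictly longer:
# deleting the `qualia` lines changes nothing, which is why the pruning argument
# bites against epiphenomenalism specifically, not against dualism in general.
assert physics_only_world(1000) == epiphenomenal_world(1000)
```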
> I would be curious to know what you think about my box trying to solve the meta-problem.
Sounds unethical. At least don’t kill them afterwards.
Any conclusions would raise the usual questions about how much the AI’s reasoning is about real things and how much it is about extrapolating human discourse. The actual implementation of this reasoning in an AI could be interesting, especially given that the AI would have different assumptions about its situation. But it wouldn’t necessarily be the same as in a human brain.
Philosophically, I mostly don’t see how that is different from introspecting your sensations and thoughts and writing an isomorphic Python program (a toy version is sketched below). I guess Chalmers may agree that we have as much evidence of AIs’ consciousness as of other humans’, but he would still ask why the thing that implements this reasoning is not a zombie.
But the most fun cases to think about are those where it wouldn’t obviously solve the problem: for example, if the reasoning was definitely generated by a simple function over relevant words, but you still couldn’t find where it differs from human reasoning. Or maybe the actual implementation would be so complex that humans couldn’t comprehend it at any lower level than what we have now.
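As a concrete version of the “isomorphic Python program” analogy above (everything here is an invented toy, not anyone’s proposed implementation): a program whose only access to itself is reading off its own internal state, and which then emits human-style consciousness talk — exactly the kind of reporter Chalmers could still ask the zombie question about.

```python
# Toy sketch (all names invented): a system that introspects on a summary of
# its own internal state and produces reports about ineffable experience.

class Reporter:
    def __init__(self):
        # internal state the system can introspect on
        self.state = {"stimulus": "red patch", "attention": "high"}

    def introspect(self):
        # introspection here is just reading off a copy of the internal state
        return dict(self.state)

    def report(self):
        inner = self.introspect()
        return (f"I am attending to a {inner['stimulus']}, and there is "
                f"something it is like to see it that I cannot fully put into words.")

print(Reporter().report())
```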
> The justification for pruning this neuron seems to me to be that if you can explain basically everything without using a dualistic view, it is so much simpler.
Yeah, but can you? Your story ended with stating the meta-problem, so until it’s actually solved, you can’t explain everything. So how did you actually check that you would be able to explain everything once it’s solved? Just stating the meta-problem of consciousness is like stating the meta-problem of why people talk about light and calling the idea of light “a virus”.
Explaining everything involves explaining phenomenal consciousness, so it’s literally solving the Hard Problem, as opposed to dissolving it.
Let’s put aside ethics for a minute.
Yes, this wouldn’t be the same as the human brain; it would be like the Swiss cheese pyramid that I described in the post.
Take a look at my answer to Kaj Sotala and tell me what you think.