We have no evidence to support the assumption that there are “souls” which generate consciousness. However, “souls” would at least be an explanation for consciousness.
I stick to the view that giving a phenomenon a name is not an explanation. It may be useful to have a name, but it doesn’t tell you anything about the phenomenon. If you are looking at an unfamiliar bird, and I tell you that it is a European shadwell, I have told you nothing about the bird. At the most, I have given you a pointer with which you can look up what other people know about it, but in the case of “souls”, nobody knows anything. (1)
But even for a model-builder that is substantially, though not wholly, unconstrained, it seems sensible to assume that fewer intermediate layers of abstraction would be needed as resources grow.
I would expect more abstractions to be used, not fewer. As a practical example of this, look at the history of programming environments. Programmers have piled on more and more layers of abstraction as more computational resources have become available to implement them, because it’s more efficient to work that way. Efficiency is always a concern, however much your computational resources grow. Wishing that problem away is beyond the limit of what I consider a useful thought-experiment.
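To make the programming-environments example concrete, here is a toy sketch (my own illustration, not part of the discussion above) of one small computation written at three levels of abstraction. Each higher layer costs machine resources to implement, but once those resources exist, working at the higher layer is cheaper for the programmer:

```python
from statistics import mean

data = [3, 1, 4, 1, 5, 9, 2, 6]

# Low level: explicit index arithmetic, as in early environments.
total = 0
i = 0
while i < len(data):
    total += data[i]
    i += 1

# Higher level: the iteration itself is abstracted away.
total2 = sum(data)

# Higher still: the whole statistic is a single named operation.
avg = mean(data)

assert total == total2 == 31
assert avg == 3.875
```

Nobody reverts to the lowest layer when resources grow; the extra resources are spent on more layers, because human effort, not machine effort, is the scarce thing.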
Extending the reality-based fantasy in the direction of Solomonoff induction, if you find “chair” showing up in some Solomonoff-like induction method, what does it mean to say chairs don’t exist? Or hydrogen? If these are concepts that a fundamental method of thinking produces, whoever executes it, well, the distinction between “computational hack” and “really exists” becomes obscure. What work is it doing?
There is a sort of naive realism which holds that a chair exists because it partakes of a really existing essence of chairness, but however seriously that was taken in ancient Greece, I don’t think it’s worth air-time today. Naive unrealism, which says that nothing exists except fundamental particles, I take no more seriously. Working things out from these supposed fundamentals is not possible, regardless of the supply of reality-based fantasy resources. We can’t see quarks. We can only barely see atoms, and only a tiny number of them. What we actually get from processing whatever signals reach us is ideas of macroscopic things, not quarks. There is no real computation that these perceptions are approximations to, one that we could carry out if only we were better at seeing and computing. As we have discovered lower and lower levels, they have explained in general terms what is going on at the higher levels, but they aren’t actually much help in specific computations.
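The question of what work a concept like “chair” is doing admits a crude, concrete reading in compression terms: in any description-length-based inducer, a recurring pattern earns a short code, and that is the sense in which the pattern “exists” for the inducer. A toy sketch, with zlib standing in (very loosely, and purely as my own illustration) for an ideal Solomonoff-style compressor:

```python
import random
import zlib

# A "world description" in which one motif recurs many times,
# versus a stream of the same length with no recurring motif.
structured = ("seat+legs+back;" * 50).encode()

random.seed(0)  # deterministic pseudo-noise, for reproducibility
noise = bytes(random.randrange(256) for _ in range(len(structured)))

# The recurring motif is rewarded with a much shorter description;
# the patternless stream is essentially incompressible.
short = len(zlib.compress(structured))
long_ = len(zlib.compress(noise))
assert short < long_
```

A concept that lets a model shorten its description of the world in this way is doing real work, whether or not one chooses to call it “really existing”.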
This is quite an old chestnut in the philosophy of science: the more fundamental the entity, the more remote it is from perception.
Maybe the Alpha-Centaurians—floating sentient gas bags, as opposed to blood bags—never sat down (before being exterminated), so their models don’t contain anything easily corresponding to a chair. Would that make their model of physics less powerful, or less accurate?
The possibilities of the universe are too vast for the human concept of “chair” to have ever been raised to the attention of the Centauran AI. Not having the concept will not have impaired it in any way, because it has no use for it. (Those who believe that zero is not a number, feel free to replace the implied zeroes there by epsilon.) When the Human AI communicates to it something of our history, then it will have that concept.
(1) Neither do they know anything about European shadwells, which is a name I just made up.