I don’t get it: Tier 1, 2, and 3 are all computable, so by Turing equivalence they can emulate each other with perfect fidelity. Does this approach say that if a Tier 1 system emulates a conscious Tier 3 system, it just makes a p-zombie?
Think about the PageRank step. As you increase the size of the SCC that falls within the topological container where the holistic step is applied, a Tier 1 emulator will take longer and longer to compute the next step. To generate this step “all at once” you will need additional accounting mechanisms: stopping the advance of the network everywhere except within the topological container, allocating additional memory slots to store partially computed steps, and dealing with an ever larger number of steps until convergence. Is it possible to do this? In one sense, yes: you can carefully, intelligently, and deliberately design a Tier 1 system to do this. But this incurs large, unforeseen computational costs at the point of execution, in addition to requiring intelligent design or simply enormous luck to somehow hit on the precise system that happens to do this (shades of the Boltzmann brain show through). This alone distinguishes Tier 3 qualitatively and makes it a much better candidate to explain our unified experience (phenomenal binding specifically). There are additional issues:
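To make the extra accounting concrete, here is a minimal sketch (my own illustration, not from the source) contrasting a “holistic” PageRank step, applied to every node of a small SCC simultaneously, with a Tier-1-style emulation that advances one node at a time and therefore needs a frozen snapshot of the network plus a scratch buffer for partial results:

```python
# Illustrative sketch: a holistic PageRank update vs. a sequential,
# Tier-1-style emulation of the same step. Function names and the toy
# graph are assumptions for the example, not part of the source.

def holistic_step(adj, ranks, d=0.85):
    """One PageRank step applied to all nodes 'at once'."""
    n = len(ranks)
    out_deg = [sum(row) for row in adj]
    new = []
    for j in range(n):
        inflow = sum(ranks[i] * adj[i][j] / out_deg[i]
                     for i in range(n) if adj[i][j])
        new.append((1 - d) / n + d * inflow)
    return new

def tier1_emulated_step(adj, ranks, d=0.85):
    """Same step, computed one node at a time.

    To match the holistic result, the emulator needs (a) a frozen
    snapshot of the whole network while it works, and (b) a separate
    scratch buffer for partially computed values -- exactly the extra
    accounting mechanisms described above.
    """
    snapshot = list(ranks)          # freeze the rest of the network
    scratch = [None] * len(ranks)   # extra memory for partial steps
    out_deg = [sum(row) for row in adj]
    for j in range(len(ranks)):     # many small local steps, not one big one
        inflow = sum(snapshot[i] * adj[i][j] / out_deg[i]
                     for i in range(len(ranks)) if adj[i][j])
        scratch[j] = (1 - d) / len(ranks) + d * inflow
    return scratch

# A 3-node SCC (a directed cycle 0 -> 1 -> 2 -> 0).
adj = [[0, 1, 0],
       [0, 0, 1],
       [1, 0, 0]]
ranks = [1 / 3] * 3
```

The two functions produce identical values, but only because the emulator was deliberately designed with the snapshot and scratch buffer; as the SCC grows, that bookkeeping (and the number of iterations to convergence in the full algorithm) grows with it.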
In Tier 1 systems, the patterns never really come together with real, objective, causally significant boundaries. Not even the simulated boundaries programmed to emulate a Tier 3 system would have that property.
In Tier 1 systems, the boundaries would have no reason to be selected for by an evolutionary process taking place in that system. If simulating the boundaries requires complex accounting processes, like stopping the simulation elsewhere to make them look like Tier 3, waiting until convergence, and so on, why would an evolved organism bother to use these bounded states for anything, much less for information processing? Our experience is in many ways clearly isomorphic to some key aspect of the computation our brain recruits it for.
And perhaps most important of all: by whom or what (and how) would a pattern in a Tier 1 system ever be witnessed if it lacks any real boundary around it? You can say it is “witnessed by other patterns in Tier 1.” But those patterns also lack a real boundary. In reality, the “boundary” of any pattern in such systems is tiny: the size of a single bucket (or at most a bucket plus its neighborhood plus the ruleset applied to it, still far smaller than any experience we have).
In a strict sense, even a simulated Tier 3 within a Tier 1 system wouldn’t quite reach the status of a p-zombie, because the functional organization and causal structure remain those of the Tier 1 system, and the apparent causal structure that emulates the Tier 3 system is something a real phenomenal observer would have to interpret by knowing how to read the patterns in the Tier 1 system in just the right ways (“skip all of these updates and treat them as one big update; ignore what happens here; take a snapshot at this point; etc.”). This interpretation is, from the Tier 1 system’s “point of view” (if we can call it that), arbitrary.
A somewhat similar case, though not exact, that we can use as an intuition pump: “Is a lookup table with the same input-output function as your brain, within such-and-such sensory parameters, a p-zombie?” In some restricted sense, yes. But not really, because it only looks like one within certain parameters and to a specific observer. It doesn’t look like one from the point of view of the processes in the brain that would normally give rise to such outputs: those processes are nonexistent in the case of the lookup table. The other considerations apply similarly: why would such a lookup table ever evolve in this world, given that it needs to luck out to be just right, or have an intelligent designer, besides being wildly inefficient? The “simulated” Tier 3 system within a Tier 1 system has very similar issues.
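A toy version of the lookup-table point (my own illustration; the stand-in function is an assumption for the example) makes the “only within certain parameters, only to a specific observer” caveat concrete:

```python
# Toy illustration: a lookup table that reproduces a process's
# input-output behaviour, but only over the inputs it was built from,
# and with none of the internal process that generates the outputs.

def brainlike_process(x):
    """Stands in for a system that actually computes its outputs."""
    return x * x + 1

# Build the table by sampling the process over a restricted range of
# "sensory parameters" (here, the integers 0..9).
table = {x: brainlike_process(x) for x in range(10)}

def lookup(x):
    # No computation, just retrieval; raises KeyError outside the range.
    return table[x]

# Within the sampled parameters, an observer comparing outputs cannot
# tell the two apart...
assert all(lookup(x) == brainlike_process(x) for x in range(10))
# ...but lookup(100) raises KeyError: outside those parameters, the
# equivalence the observer reads into the table simply isn't there.
```

The equivalence holds only relative to the sampled range and to an observer comparing outputs; the table itself contains none of the generating process.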