Imagine sufficiently strange aliens peeking into our low-dimensional slice of totality. They’d see matter/energy states that change, matter/energy states that stay the same, and change at different rates. They wouldn’t prima facie find “bipeds walking around” any more special-consideration-worthy than “bubbles in a pond”; nothing would trigger a “sentience alarm” (maybe their intuition rests on a nano scale).
Suppose they were searching for something interesting, perhaps approaching whatever life-analogues they had defined, systematically zooming through different processes at different scales. Now, data in itself is nothing without the interpretation that allows you to see the information the data represents.
Consider such a strange alien looking at a computer: only a minuscule fraction of the total processes going on—well below Bremermann’s limit—corresponds even to the physical layer of the stack. The relevant processes (not knowing which are relevant, or whether anything is “relevant” at all) have to be isolated just to obtain the specific data for which an interpretation can be found (low “voltage” at this “gate” = “0” in a binary system, or whatever). All just to unlock a preliminary step towards eventually, maybe, understanding that a game of Minesweeper is running on that computer. Would an alien linger long enough to reach such a conclusion, or would it prematurely conclude that these processes are on the same order of importance as the computation inherent in rain splashing on the ground?
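That preliminary interpretation step can be sketched as a toy program. This is purely illustrative, not a claim about how any alien (or engineer) actually proceeds: the threshold, the sample voltages, and the convention that “low” reads as 0 are all made-up assumptions.

```python
# Toy sketch of one chosen interpretation: thresholding isolated
# "voltage" measurements into bits. All numbers are hypothetical.

THRESHOLD = 1.5  # volts; below this we *decide* to read "0", above it "1"

def interpret(voltages):
    """Turn raw measurements into data under one arbitrary convention."""
    return [0 if v < THRESHOLD else 1 for v in voltages]

gate_readings = [0.2, 3.1, 2.9, 0.4, 0.1, 3.3]  # made-up samples
print(interpret(gate_readings))  # [0, 1, 1, 0, 0, 1]
```

The point of the toy is that the bits are not “in” the voltages; they appear only once a threshold and a convention have been imposed from outside.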
Imagine two such aliens in a contest, a race: one trying to find meaning (an interpretation under which information can be gleaned that indicates something interesting—agent-y, replicator-y) in a pile of matter/energy we’d call a desktop PC; the other trying to find such meaning in what happens in the sun, with its constant, fast reactions (cue “the suns are sentient” sci-fi novels).
Think of the amount of “computation” happening when an avalanche crashes down a hillside. Scenarios such as Douglas Adams’ Earth as a supercomputer built to solve a problem may not even be that far-fetched—all we need is an interpretation under which the computation going on all around us anyway can be read as something useful.
We can certainly surmise that there seems to be no computation going on that we’d associate with sentience as we understand it; the strictly biological criteria of life aren’t met. But isn’t that like a civilization of bacteria ruling its host body not to be “sentient”, since its individual cells—specialized in their function—have trouble thriving on their own? We see complex computations all around us; how certain should we be that there is no interpretation we lack under which we’d find the sun actually computing interesting things—or having thoughts and concepts, for sufficiently strange definitions of thoughts and concepts?
Maybe, similarly to how “information-theoretic death” sensibly extends the old, narrow category of “brain activity stopped, time to bury the body”, we should define something along the lines of a “candidate process for having an interesting interpretation”, with criteria such as capacity to store information, delta of state change, amplitude of state change, and so on.
My recollection of a standard exposition (found the source, see edit) goes like this: a beautiful waterfall is a complicated dynamic system, containing many more atoms than a human brain, all in motion. Were one clever enough, one could map the motion of water in part of the waterfall onto the motion of atoms and charge in a human brain. Then the waterfall is a person, thinking thoughts as it burbles. Except there is a problem: each waterfall admits many possible mappings, and thus spans the whole range of brains!
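The trick behind “many possible mappings” can be made concrete with a toy sketch. Everything here is hypothetical (the “waterfall snapshots” are just distinct integers standing in for physical states): the point is that as long as the states in a trace don’t repeat inconsistently, we can always construct a mapping that decodes any target computation we like from them—the mapping, not the waterfall, does all the work.

```python
# Toy illustration of the waterfall argument: with distinct physical
# states, a mapping can be built to "decode" any chosen output sequence.

def build_mapping(physical_states, target_outputs):
    """Assign each physical state the output we want it to 'compute'.

    Returns None if the same state would have to map to two different
    outputs (i.e. this trace genuinely cannot encode the target).
    """
    mapping = {}
    for state, out in zip(physical_states, target_outputs):
        if state in mapping and mapping[state] != out:
            return None
        mapping[state] = out
    return mapping

# Ten arbitrary "waterfall snapshots" (stand-in values) ...
waterfall = [17, 3, 42, 8, 99, 23, 5, 61, 77, 12]
# ... can be read as any ten-step bit sequence we please:
thoughts = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

m = build_mapping(waterfall, thoughts)
decoded = [m[s] for s in waterfall]
assert decoded == thoughts  # the waterfall "computes" whatever we chose
```

Since the same snapshots support a different mapping for every possible bit sequence, the waterfall trivially “runs” all of them at once—which is exactly why the mapping carries the computational content, not the water.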
If one then asks “so why aren’t you a waterfall?”, this is a sort of epistemological analogue of the Boltzmann brain hypothesis.
I seem to recall the original argument going in a different direction: “are waterfalls on average blissful or suffering, and by how much do billions of waterfalls encoding all possible minds outweigh our petty human concerns?”
EDIT: Ah, found the source (ctrl+f “waterfall”), which references Putnam and Searle, and is worth a read in its entirety. A little discussion of ethical implications on LW can be found here.