There’s nothing special about humans, save for their capability to cause a technological singularity, which is what is being selected for here. This does rule out the overwhelming majority of non-human life forms, though: turtles, for instance, are clearly not going to construct an artificial superintelligence any time soon. They can’t, because they have neither the neurological means to construct and transmit the encyclopedias of knowledge required to build one, nor the physiological means to achieve the industrial capacity required to manufacture the programmable computational substrates it would need. They aren’t capable of technological evolution.
(The human form in particular is especially suitable for technological evolution—we are social animals with a fully general communications channel in the form of speech and brains gigantic enough to make use of this generality, which allows us to form, encode, and transmit abstract ideas across societies and generations; we stand upright and have remarkable visual acuity, which makes it easy to construct and use tools; we have very long lifespans and few natural predators, which allows individual humans to undergo profound and prolonged intellectual development; and so on. So conditioning on the presence of a singularity dramatically amplifies human experiences in particular. It could well be that there are other possible histories in which turtles evolved into weird reptoids with similar features, histories that were simply not sampled.)
And, well, maybe you are only important insofar as your experience is required to compute something that’ll have a causal effect on a more important distant descendant of yours. If the computing Thing is not so aggressive in its compression, these are the kinds of things that would be worth following backwards in order to figure out their generative mechanisms and general statistics and so on. If it’s more aggressive, it might not consider this effect to be worth trying to generate in detail, in which case it just asserts (as in Fn. 3) that the specific details of its history are such that the outcome on the distant descendant ends up being xyz.
i mean, i plainly disagree. it seems a failure of imagination to suppose that octopuses, algae mats, raptors, ant colonies, bees, elephants, etc. couldn’t, with a little teleological oomph, build a universe-colonizing technology. so this narrative does not help me locate myself as a human, rather than as any of those.
(“The human form in particular is especially suitable for technological evolution”—i feel that, were i, for example, an octopus, i could easily make a similar argument about why any high technology would be contingent on intelligent creatures with tentacles. so again, this does not help me locate myself as human.)
overall, it seems to me that if the teleological mushrooms are most interested in simulating powerful artificial minds, and can spookily determine certain events, they could easily find a faster route to “the good stuff”! so i’m still left wondering: why is there something?
And, well, maybe you are only important insofar as your experience is required to compute something that’ll have a causal effect on a more important distant descendant of yours.
right, but my point is that, for all i know, we are not yet close to a singularity. small details of subjective experience many hundreds or thousands of years prior could be “remembered” in the sense that the simulated instant depends on them. so this metaphysics does not help me locate myself temporally near the singularity, either.
it just asserts (as in Fn. 3) that the specific details of its history are such that the outcome on the distant descendant ends up being xyz.
perhaps this point is critical to our disagreement. i don’t expect that there’s a meaningful difference (from the perspective of one of the simulation’s denizens) between reifying a moment for which my current subjective experience is a logical necessity, and reifying the moments in which the subjective experience is more traditionally thought to be taking place.
in other words, the glider experiences all the time between its start and end, even if the metamind moves ahead in leaps and bounds.
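(A minimal sketch of this point, assuming the glider here is the familiar Game of Life glider: whether the simulator computes every intervening generation one step at a time, or simply jumps the glider ahead by its known four-generation displacement, the resulting configuration is identical, so nothing in the final state records which route was taken.)

```python
from collections import Counter

def step(cells):
    """One Game of Life generation over a set of live (x, y) cells."""
    # Count, for every position, how many live cells border it.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation if it has 3 live neighbours,
    # or 2 live neighbours and is already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# The standard glider, with y increasing downward.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

# "Experiencing all the time in between": compute each of the 4 generations.
state = glider
for _ in range(4):
    state = step(state)

# "Leaps and bounds": use the known fact that this glider repeats its shape
# every 4 generations, displaced one cell right and one cell down.
leapt = {(x + 1, y + 1) for (x, y) in glider}

assert state == leapt  # same outcome, however it was computed
```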