they’re perfectly compatible; they don’t even say anything about each other [edit: invalidated]. anthropics is just a question of what systems are likely. illusionism is a claim about whether systems have an ethereal self that they expose themselves to by acting; I am viciously agnostic about anything epiphenomenal like that. I would instead assert that all epiphenomenal confusions seem to me to be the confusion “why does [universe-aka-self] exist”, and then there’s a separate, additional question: the surprise any highly efficient chemical processing system has at having information entering it, a rare thing made rarer still by the level of specificity and coherence we meat-piloting, skin-encased neural systems called humans seem to find occurring in our brains.
there’s no need to assert that we are separately existenceful and selfful compared to the walls, or the chair, or the energy burning in the screen displaying this—they are also physical objects. their physical shapes just don’t encode as much fact about the world around them; our senses are, at present, much better integrators of knowledge. and it is that knowledge, which defines our agency as systems, that encodes our moral worth. none of this requires a separate privileged existence different from the environment around us; it is our access consciousness that makes us special, not our phenomenal (“hard-problem”) consciousness.
Try this for practice, reasoning purely objectively and physically, can you recreate the anthropic paradoxes such as the Sleeping Beauty Problem?
That means without resorting to any particular first-person perspective, nor using words such as “I” “now” “here”, or putting them in a unique logical position.
That sounds like a plausible theory. But if we reject that there is a separate first-person perspective, doesn’t that entail that we should be Halfers in the SBP? Not saying it’s wrong. But it does seem to me like illusionism/eliminativism has anthropic consequences.
hmm. it seems to me that the sleeping mechanism problem is missing a perspective—there are more types of question you could ask the sleeping mechanism that are of interest. I’d say the measure increased by waking cannot be used to make predictions about which universe it is in; but that, given waking, the mechanism should estimate the average of the two universes’ wake counts, and assume it has 1.5 wakings’ worth of causal impact on the environment around the awoken mechanism. in other words, it seems to me that the decision-relevant anthropic question is how many places a symmetric process exists. when inferring the properties of the universe around you, it is invalid to update about likely causal processes based on the fact that you exist; but on finding out you exist, you can update about where your actions are likely to have impact, a different measure, one that does not allow making inferences about, e.g., universal constants.
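to make that split concrete, here’s a small sketch (my own construction, not anything from the problem statement): score a fixed credence in “tails” with a quadratic (brier-style) penalty, charged either once per coin flip or once per waking. the per-waking charge is exactly the “action weight” idea—tails-worlds bill you twice.

```python
import random

def mean_penalty(credence_tails, per_waking, n_trials=100_000, seed=1):
    """Average brier-style penalty for a fixed credence in 'tails'.

    If per_waking is True the penalty is charged at every waking
    (twice in tails-trials); otherwise once per trial.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        tails = rng.random() < 0.5
        penalty = (credence_tails - (1.0 if tails else 0.0)) ** 2
        weight = (2 if tails else 1) if per_waking else 1
        total += weight * penalty
    return total / n_trials

# charged per trial, credence 1/2 beats 2/3 (it predicts the flip);
# charged per waking, credence 2/3 beats 1/2 (it tracks action weight).
print(mean_penalty(1/2, per_waking=False), mean_penalty(2/3, per_waking=False))
print(mean_penalty(2/3, per_waking=True), mean_penalty(1/2, per_waking=True))
```

so the halfer and thirder numbers are both correct answers—to two different questions, distinguished only by which events get billed.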
if, for example, the sleeping beauty problem is run ten times, and each waking is written to a log, then after the experiment there will be on average 1.5x as many log entries as there are samples. but the agent should still predict 50%, because the predictive accuracy score is a question of whether the bet the agent makes can be beaten by other knowledge. when the mechanism wakes, it should know it has more action weight in one world than the other, but that doesn’t allow it to update about which bet most accurately predicts the most recent sample. two thirds of the mechanism’s actions occur in one world, one third in the other, but the mechanism can’t use that knowledge to infer about the past.
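the log-counting claim is easy to check directly; a quick simulation sketch (the function name and trial count are mine, just for illustration):

```python
import random

def run_sleeping_beauty(n_trials, seed=0):
    """Flip a fair coin each trial; heads -> one waking, tails -> two.

    Every waking appends that trial's coin result to the log.
    """
    rng = random.Random(seed)
    log = []
    for _ in range(n_trials):
        coin = rng.choice(["heads", "tails"])
        log += [coin] * (1 if coin == "heads" else 2)
    return log

n = 100_000
log = run_sleeping_beauty(n)
print(len(log) / n)                   # ~1.5 log entries per sample
print(log.count("tails") / len(log))  # ~2/3 of wakings sit in tails-worlds
```

both numbers come out of the same run: the log really is ~1.5x the sample count and two thirds of wakings land in the tails-world, while the underlying coin is still 50/50—which is the whole point.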
I get the sense that I might be missing something here. the thirder position makes intuitive sense on some level. but my intuition is that it’s conflating things. I’ve encountered the sleeping beauty problem before and something about it unsettles me—it feels like a confused question, and I might be wrong about this attempted deconfusion.
but this explanation matches my intuition that simulating a billion more copies of myself would be great, but not make me more likely to have existed.