How serious 0-10, and what’s a decision theoretic zombie?
A being that has so little decision theoretic measure across the multiverse as to be nearly non-existent, due to a proportionally infinitesimal amount of observer-moment-like-things. However, the being may have very high information theoretic measure to compensate. (I currently have an idea, which Steve thinks is incorrect, arguing that information theoretic measure correlates roughly with the reciprocal of decision theoretic measure, which is itself well-correlated with Eliezer’s idea of optimization power. This is all probably stupid and wrong, but it’s interesting to play with the implications (like literally intelligent rocks, me [Will] being ontologically fundamental, et cetera).)
I’m going to say I’m 8 serious out of 10 that things will turn out to really not add up to ‘normality’, whatever your average rationalist thinks ‘normality’ is. Some of the implications of decision theory really are legitimately weird.
What do you mean by decision theoretic and information theoretic measure? You don’t come across as ontologically fundamental IRL.
Hm, I was hoping to magically hit the same concepts you had cached, but it seems I failed. (Agent) computations that have lower Kolmogorov complexity have greater information theoretic measure in my twisted model of multiverse existence. Decision theoretic measure is something like the notion of significance you told me to talk to Steve Rayhawk about: the idea that one shouldn’t care about events one has no control over, combined with the (my own?) idea that being cared about by a lot of agent-computations, and thus made more salient to more decisions, is another completely viable way of increasing one’s measure. Throw in a judicious mix of anthropic reasoning, optimization power, ontology of agency, infinite computing power in finite time, ‘probability as preference’, and a bunch of other mumbo jumbo, and you start getting some interesting ideas in decision theory. Is this not enough to hint at the conceptspace I’m trying to convey?
“You don’t come across as ontologically fundamental IRL.” Ha, I was kind of trolling there, but something along the lines of ‘I find myself as me because I am part of the computation that has the greatest proportional measure across the multiverse’. It’s one of many possible explanations I toy with for why I exist. Decision theory really does give one the tools to blow one’s philosophical foot off. I don’t take any of my ideas too seriously, but collectively, I feel they’re representative of a confusion that isn’t mine alone.