More here, where I ask about the “solution” put forth by Robin Hanson.
Since then the only attempted “solution” I know of is outlined in this post by Holden Karnofsky from GiveWell.
Also see this comment by Eliezer and the post he mentions. I didn’t know about the Lifespan Dilemma before reading that comment; it seems to be much more worrying than Pascal’s Mugging. I haven’t thought about it much, but simply refusing some step arbitrarily seems to be the best heuristic so far.
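The step structure that makes the Lifespan Dilemma so troubling can be sketched numerically. The numbers below are made up for illustration (not the ones from the original post): each offer multiplies your lifespan by a large factor while multiplying your survival probability by a factor slightly below one, so every individual step raises expected lifespan, yet accepting all of them drives the chance of surviving at all toward zero.

```python
# Toy sketch of the Lifespan Dilemma's step structure.
# Illustrative numbers only, not the ones from the original post.

PROB_FACTOR = 0.99    # survival probability shrinks 1% per accepted offer
LIFE_FACTOR = 1000.0  # lifespan grows 1000x per accepted offer (if you survive)

def take_step(p_survive, lifespan):
    """Accept one offer: probability shrinks slightly, lifespan grows hugely."""
    return p_survive * PROB_FACTOR, lifespan * LIFE_FACTOR

p, life = 1.0, 100.0  # start: a certain 100-year lifespan
for _ in range(5):
    new_p, new_life = take_step(p, life)
    # Each individual step strictly increases expected lifespan...
    assert new_p * new_life > p * life
    p, life = new_p, new_life

# ...but iterate the "good deal" a thousand times and survival is
# all but ruled out.
p_after_1000 = PROB_FACTOR ** 1000
print(f"{p_after_1000:.6f}")  # a vanishingly small survival probability
```

This is what makes "arbitrarily refuse some step" feel unprincipled: there is no single step at which the expected-value reasoning turns bad.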
There should be a limit to utility based on the pattern theory of identity, a finite number of sentient patterns, and identical patterns counting as one.
I phrased this as confidently as I did in the hope that it would provoke downvotes with attached explanations of why it is wrong. I am surprised to see it without downvotes and, granting that, even more surprised to see it without upvotes.
In truth I am not so certain of some of the above, and would appreciate comments. I’m asking nicely this time! Is identity about being in a pattern? Is there a limit to the number of sentient patterns? Do identical patterns count as one for moral purposes?
Finally: is it truly impossible to infinitely care about a finite thing?
Finally: is it truly impossible to infinitely care about a finite thing?
In a finite universe, I’d say it’s impossible, at least from a functional standpoint, assuming an agent with a utility function. The agent can prefer a world where every bit of matter and energy other than the cared-about thing is in its maximally dispreferred configuration and the cared-about thing is in a minimally satisfactory configuration, over a world where everything else is in its maximally preferred configuration and the cared-about thing is in a nearly-but-not-quite minimally satisfactory configuration; but that is still a finite degree of difference. (It’s a bit like how it’s impossible to have ‘infinite money’ in practice: once you own all the things, money is pointless.)
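The bounded-difference point can be made concrete with a toy model (my own illustration, with made-up numbers, not anything from the comment above): in a finite universe there are finitely many world configurations, so however heavily the agent weights the cared-about thing, the gap between its best and worst worlds is some finite number.

```python
# Toy model: a miniature universe consisting of the cared-about thing
# plus 3 other bits of matter, each either in a good (1) or bad (0)
# configuration. Illustrative only.

from itertools import product

N_OTHER_BITS = 3

def utility(cared_about_ok, other_bits):
    # Weight the cared-about thing heavily relative to everything else.
    CARE_WEIGHT = 1_000_000  # large, but necessarily finite
    return CARE_WEIGHT * cared_about_ok + sum(other_bits)

worlds = [(c, rest) for c in (0, 1)
          for rest in product((0, 1), repeat=N_OTHER_BITS)]
utils = [utility(c, rest) for c, rest in worlds]

# The agent prefers "thing barely OK, everything else awful" to
# "thing ruined, everything else perfect"...
assert utility(1, (0, 0, 0)) > utility(0, (1, 1, 1))

# ...but the total spread of caring is still a finite number.
print(max(utils) - min(utils))
```

No matter how large `CARE_WEIGHT` is made, the spread stays finite, which is the sense in which the caring cannot be infinite.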
Finally: is it truly impossible to infinitely care about a finite thing?
Yes. Or, equivalently, you can care about it to degree 1 and care about everything else to degree 0. Either way, the question isn’t a deep philosophical one; it’s just what certain kinds of trivial utility function imply.
Unfortunately that solution works only for human reasoners, not for AIs.