g: killing 3^^^^^3 puppies doesn’t feel much worse than killing 3^^^^3 puppies ...
I hereby award G the All-Time Grand Bull Moose Prize for Non-Extensional Reasoning and Scope Insensitivity.
Clough: On the contrary, I think it is not only that weak but actually far weaker. If you are willing to consider the existence of things like 3^^^3 units of disutility without considering the existence of chances like 1/4^^^4, then I believe that is the problem that is causing you so much trouble.
I’m certainly willing to consider the existence of chances like that, but to arrive at such a calculation, I can’t be using Solomonoff induction.
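Sketched roughly, with the Solomonoff prior idealized as assigning weight about $2^{-\ell(h)}$ to a hypothesis of description length $\ell(h)$, and writing $h$ for the mugger's hypothesis:

$$
P(h) \;\approx\; 2^{-\ell(h)}, \qquad |U(h)| \;\approx\; 3\uparrow\uparrow\uparrow\uparrow 3, \qquad
\mathbb{E}[\text{disutility}] \;\approx\; 2^{-\ell(h)} \cdot 3\uparrow\uparrow\uparrow\uparrow 3 .
$$

Since $\ell(h)$ grows only with the length of the string "3^^^^3", not with the magnitude of the number it names, the product is astronomical. Keeping it bounded would require $P(h) \lesssim 1/3\uparrow\uparrow\uparrow\uparrow 3$, a penalty no description-length prior will hand you for so short a hypothesis.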
Consider the plight of the first nuclear physicists, trying to calculate whether an atomic bomb could ignite the atmosphere. Yes, they had to do this calculation! Should they have not even bothered, because it would have killed so many people that the prior probability must be very low? The essential problem is that the universe doesn’t care one way or the other, and so events do not in fact have probabilities that diminish as their disutility increases.
Likewise, physics does not contain a clause prohibiting comparatively small events from having large effects. Consider the first replicator in the seas of ancient Earth.
Tiiba: You don’t want an AI to think like this because you don’t want it to kill you. Meanwhile, to a true altruist, it would make perfect sense.
So you’re biting the bullet and saying that, faced with a Pascal’s Mugger, you should give him the five dollars?
Would any commenters care to mug Tiiba? I can’t quite bring myself to do it, but it needs doing.
Krishnaswami: Utility functions have to be bounded basically because genuine martingales screw up decision theory—see the St. Petersburg Paradox for an example.
One deals with the St. Petersburg Paradox by observing that the resources of the casino are finite; it is not necessary to bound the utility function itself when you can bound the game within your world-model.
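Concretely, take one common formulation of the game: a fair coin is flipped until it first comes up heads, and heads on the $k$-th flip pays $2^k$ dollars. If the casino can pay out at most $B$ dollars, the divergent sum collapses:

$$
\mathbb{E}[\text{payout}]
\;=\; \sum_{k=1}^{\infty} 2^{-k}\,\min\!\bigl(2^{k},\,B\bigr)
\;=\; \underbrace{\lfloor \log_2 B \rfloor}_{\text{payouts the casino can cover}}
\;+\; \underbrace{B \cdot 2^{-\lfloor \log_2 B \rfloor}}_{<\,2}
\;\le\; \log_2 B + 2 .
$$

Even a casino backed by something like the world economy, $B \approx \$10^{14} \approx 2^{47}$, yields an expected payout on the order of fifty dollars; the game is tamed inside the world-model, with no bound on the utility function required.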