Doesn’t Eliezer say we shouldn’t discount?
That was about time discounting, not diminishing returns.
Correct. In fact, I probably confused things by using the word “discount” for what I am suggesting. Let me try to summarize the situation with regard to “discounting”.
Time discounting means counting distant future utility as less important than near future utility. EY, in the cited posting, argues against time discounting. (I disagree with EY, for what it is worth.)
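To make that concrete, here is a minimal sketch of what time discounting does; the function, the utility stream, and the discount factor are all mine, invented purely for illustration:

```python
# Toy exponential time discounting: a stream of per-period utilities is
# collapsed into one number, with gamma < 1 shrinking distant periods.

def discounted_utility(utilities, gamma):
    """Sum utilities over time, weighting period t by gamma**t."""
    return sum(u * gamma**t for t, u in enumerate(utilities))

stream = [10, 10, 10, 10]                # the same utility in each period
print(discounted_utility(stream, 1.0))   # 40.0 -- no time discounting
print(discounted_utility(stream, 0.9))   # ~34.4 -- the future counts for less
```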
“Space discounting” is a locally well-understood idea that utility accruing to people distant from the focal agent is less important than utility accruing to the focal agent’s friends, family, and neighbors. EY presumably disapproves of space discounting. (My position is a bit complicated. Distance in space is not the relevant parameter, but I do approve of discounting using a similar ‘remoteness’ parameter.)
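Space discounting can be sketched the same way. I use literal distance as the weight here only to keep the toy example simple; as I said, distance in space is not the parameter I would actually use, and the ‘remoteness’ parameter is left abstract:

```python
# Toy space discounting: weight the utility accruing to each person by
# how remote they are from the focal agent (here, crudely, by distance).

def space_discounted_utility(utilities_by_distance, delta):
    """Sum utilities over people, weighting someone at remoteness d by delta**d."""
    return sum(u * delta**d for d, u in utilities_by_distance)

# (remoteness-from-agent, utility) pairs: self, a neighbor, a distant stranger
people = [(0, 10.0), (1, 10.0), (5, 10.0)]
print(space_discounted_utility(people, 1.0))   # 30.0 -- no space discounting
print(space_discounted_utility(people, 0.5))   # ~15.3 -- remote people count for less
```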
The kind of ‘discounting’ of large utilities that I recommended in the great-grandparent probably shouldn’t be called ‘discounting’. I would sloganize it as “utilities are not additive.” The parent used the phrase ‘diminishing returns’. That is not right either, though it is probably better than ‘discounting’. Another phrase that approximates what I was suggesting is ‘bounded utility’. (I’m pretty sure I disagree with EY on this one too.)
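One way of cashing out “utilities are not additive” / ‘bounded utility’ as a formula, and it is only my sketch, not something anyone in this thread has committed to, is a saturating function:

```python
import math

# Toy bounded utility: raw quantity x is mapped through a concave,
# saturating function, so it approaches `bound` but never exceeds it.

def bounded_utility(x, bound=100.0, scale=10.0):
    """Saturating utility: grows toward `bound` as x grows, never exceeds it."""
    return bound * (1.0 - math.exp(-x / scale))

print(bounded_utility(1))     # ~9.5
print(bounded_utility(10))    # ~63.2  -- far less than 10x the utility of 1
print(bounded_utility(1000))  # ~100.0 -- saturated; more x adds almost nothing
```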
The fact that I disagree with EY on discounting says absolutely nothing about whether I agree with EY on AI risk, reductionism, exercise, and who writes the best SciFi. That shouldn’t need to be said, but sometimes it seems to be necessary in your (XiXiDu’s) case.
How about: “Large utilities are not additive for humans”.
I see, thanks. But I am still unclear about the important difference between time discounting and diminishing returns. If I can save people spread out over time or space, why is it wrong to discount people over time but rational to apply diminishing returns to the number of people I save at a given time? I mean, you would value having 5 sports cars for your own use but wouldn’t care to have thousands. Likewise, you would care to have 5 sports cars in a year but wouldn’t care to have them in a hundred years. You discount the number of cars for your own use because you can’t drive a thousand cars. You discount the time until you get to drive the cars because you don’t expect to value cars in a hundred years.
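To spell out the parallel I am drawing, here is a toy version with made-up numbers: both moves weight marginal units by an index, the n-th car you own or the t-th year you wait, and both weights fall off:

```python
def marginal_car_value(n, scale=3.0):
    """Diminishing returns over quantity: the n-th car you own is worth less."""
    return 2.0 ** (-(n - 1) / scale)

def delayed_car_value(t, gamma=0.95):
    """Time discounting: a car delivered in year t is worth less now."""
    return gamma ** t

print(sum(marginal_car_value(n) for n in range(1, 6)))     # ~3.3 for 5 cars now
print(sum(marginal_car_value(n) for n in range(1, 1001)))  # ~4.8 for 1000 cars -- barely more
print(delayed_car_value(1))    # ~0.95  -- a car next year is nearly as good
print(delayed_car_value(100))  # ~0.006 -- a car in a hundred years is worth almost nothing
```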
The main argument put forth in the article I linked to is that you shouldn’t discount because of the risk of being inconsistent. But the same can be said about diminishing returns, i.e. space discounting, because the only reason you don’t care about even more of the same is that you are only able to value so much at once. You don’t care about another friend if you already have a thousand friends, only because friend number 1001 doesn’t fit into your memory space. That is, if you knew friend 1001 you wouldn’t want to lose him. But you can’t imagine having yet another friend, just as you can’t imagine driving sports cars in a hundred years.

If you change the context, though, either by getting to know friend 1001 or by traveling into the future and learning that vintage sports cars are highly valued, you also change your preferences. That is, time discounting and diminishing returns lead to the same kind of preference reversal if you view yourself, stretched over time or space, as one and the same agent. Such preference reversals seem to be context-dependent rather than tied to either the number of items or the number of years.
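For reference, here is the standard inconsistency argument in miniature, with toy numbers of my own: hyperbolic time discounting reverses a preference as the choice draws near, while exponential discounting does not. My claim is that context changes like learning about friend 1001 produce the same kind of reversal under diminishing returns:

```python
def hyperbolic(t, k=1.0):
    """Hyperbolic discount weight for a reward at delay t."""
    return 1.0 / (1.0 + k * t)

def exponential(t, gamma=0.8):
    """Exponential discount weight for a reward at delay t."""
    return gamma ** t

# Choice: a small reward at delay t versus a larger one at delay t + 1.
small, large = 10.0, 15.0
for discount in (hyperbolic, exponential):
    for t in (0, 10):
        pick = "small" if small * discount(t) > large * discount(t + 1) else "large"
        print(f"{discount.__name__:>11}, delay {t:>2}: prefer the {pick} reward")
# hyperbolic flips from "small" at delay 0 to "large" at delay 10;
# exponential says "large" both times -- no reversal.
```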
So that explains my confusion: it is why I believed the arguments put forth in the article apply not just to time discounting but to diminishing returns as well.