So, I’m very late to this game, and not yet through all the sequences (where the answers might already be given), but I’m still very interested in your positions (probably nobody will answer, but who knows):
1. Is there a natural number N for which you’d rather kill one person than give each of N people a single dust speck? (I assume this depends on whether one expects an everlasting universe.)
2. Do you “integrate” utility over time (or over “experience-moments”, in the timeless sense), or is it better to just maximize utility at the “final” point, however one got there?
3. Does breaking the utility function up into several categories really allow Dutch-booking, as one of the comments indicates? (I hope it’s clear what I mean by categories: there is a strict total order over them, no two being equal; elements within a category “add up”, but not even an infinite number of “bad” things in one category can add up to a single one in the next higher category. A small sketch of this ordering follows after question 4.)
4. If the answer to 3 is “no”: for a (current) human we only have neurons, and a real break-point probably cannot be determined; but a re-engineered person could implement such a thing. Would that then be preferable?
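To make the category idea in question 3 concrete, here is a minimal sketch assuming a simple lexicographic comparison; the category names and the numbers are made up purely for illustration, not anyone’s actual values:

```python
# Minimal sketch (illustrative only) of the "categories" idea in question 3:
# outcomes are scored per category, categories are strictly ordered, amounts
# within a category add up, and comparison is lexicographic, so no amount of a
# lower-category bad ever adds up to a single unit of a higher-category bad.
from typing import Tuple

# Higher-priority categories come first, e.g. (deaths, dust_specks).
Utility = Tuple[int, int]

def worse(a: Utility, b: Utility) -> bool:
    """True if outcome a is worse than outcome b (lexicographic comparison)."""
    return a > b  # Python compares tuples lexicographically

kill_one = (1, 0)          # one death, no dust specks
speck_many = (0, 3**100)   # astronomically many dust specks, no deaths

print(worse(kill_one, speck_many))  # True: the specks never reach the death category
```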
I expect “yes” for 1, and I have to expect “yes” for 3 (I don’t see it myself, but I’m bad at math and have to trust the comments anyway). If the answer to 3 is “no”, I still expect “no” for 4, per the simplicity argument retold many times.
I’m especially curious about the answer to question 2. Eliezer once quoted “the end does not justify the means”, but that sentence is open to so many reinterpretations that it’s worthless (even if he said otherwise). And with respect to updating: why should the order in which information is revealed change the final result?
If the answers to these questions are somewhere in the sequences, just ignore this; I’ll get to them sooner or later.
Is there a natural number N for which you’d rather kill one person than give each of N people a single dust speck? (I assume this depends on whether one expects an everlasting universe.)
I don’t think this question (or the one discussed in the OP) admits a meaningful answer. It seems a pity to just ‘pour cold water’ over them, but I don’t know what else to say: whatever ‘moral truths’ there are in the world simply don’t reach as far as such absurd scenarios.
Do you “integrate” utility over time (or over “experience-moments”, in the timeless sense), or is it better to just maximize utility at the “final” point, however one got there?
Depends what game you’re playing, surely. If you’re playing ‘Invest For Retirement’ and the utility function measures the size of your retirement fund, then naturally the ‘final’ point is what matters.
On the other hand, if you’re playing ‘Enjoy Your Retirement’ and the utility function measures how much money you have to spend on a monthly basis, then what’s important is the “integrated” utility.
Two points of interest here:
(1) Final utility in ‘Invest For Retirement’ equals integrated utility in ‘Enjoy Your Retirement’ (modulo some faffing around with discount rates).
(2) The game of ‘Enjoy Your Retirement’ is notable insofar as it’s a game with a guaranteed final utility of zero (or -infinity if you prefer).
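As a toy illustration of point (1), here is a minimal sketch assuming a zero discount rate; the numbers are made up and carry no significance:

```python
# Toy illustration of point (1): with a zero discount rate, the "final" utility
# of Invest For Retirement (the fund you end up with) is the same quantity as
# the "integrated" utility of Enjoy Your Retirement (monthly spending summed up).
# All numbers are made up for illustration.

months = 240                               # 20 years of retirement
monthly_spending = [2000.0] * months       # what the fund pays out each month

# Enjoy Your Retirement: utility accrues month by month, so we integrate (sum) it.
integrated_utility = sum(monthly_spending)

# Invest For Retirement: utility is the fund size at the final point,
# i.e. everything that will later be paid out.
final_fund_utility = sum(monthly_spending)

print(integrated_utility == final_fund_utility)  # True, modulo discount rates
```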