That's an interesting way of thinking about it. My take is the opposite. If an accurate copy of me were made after my death, I'm pretty sure the copy wouldn't care whether it was me, just as I don't care whether I turned out the way my past self wished me to be. If the copy was convinced it was me, there would be no problem. If it was convinced it wasn't, then it wouldn't think of my death as any more important than the deaths of everyone else throughout history.
Well, there's valuing money at higher utility per dollar when you have less of it and lower utility per dollar when you have more, which makes perfect sense (diminishing marginal utility). But that's not the same as egalitarianism being a term in the utility function itself.
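A toy version of the distinction, with an assumed logarithmic money-to-utility curve (the curve and the numbers are mine, purely illustrative):

```python
import math

def utility(dollars):
    # Assumed concave curve (diminishing marginal utility); log is one choice.
    return math.log(dollars)

# Moving a dollar from someone rich to someone poor raises total utility,
# even though nothing in the function mentions equality.
rich, poor = 100_000, 1_000
before = utility(rich) + utility(poor)
after = utility(rich - 1) + utility(poor + 1)
print(after > before)  # True: concavity alone favors the transfer
```

An explicitly egalitarian utility function would need an extra term penalizing inequality itself; that's the part I'm saying doesn't follow from diminishing marginal utility.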
Holden Karnofsky thinks superintelligences with utility functions are made out of programs that rank options without making any sort of value judgement (basically, that answer a question), and then pick the one with the most utility.
Eliezer Yudkowsky thinks that a superintelligence that would answer a question would have to have a question-answering utility function making it decide to answer the question, that is, to pick paths that lead to getting the answer to the question and answering it.
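A minimal sketch of the two pictures as I understand them (all names and numbers here are hypothetical, not from either of them):

```python
# Hypothetical sketch, names are mine. Both pictures share the ranking step;
# they differ in whether the system itself acts on the result.

def tool_ai(options, score):
    """Holden's picture: rank the options and report them; no action taken."""
    return sorted(options, key=score, reverse=True)

def agent_ai(options, score, execute):
    """Eliezer's picture: the same ranking, but the system acts on the top option."""
    best = max(options, key=score)
    execute(best)  # the step the tool picture leaves to the human
    return best

options = ["plan_a", "plan_b", "plan_c"]
score = {"plan_a": 3, "plan_b": 7, "plan_c": 5}.get
print(tool_ai(options, score))          # just the ranked list
print(agent_ai(options, score, print))  # also "acts" (here: prints) on the best
```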
Says Allison: All digital logic is made of NOR gates!
Says Bruce: Nonsense, it’s all made of NAND gates!
Allison: Look, A NAND B is really just ((A NOR A) NOR (B NOR B)) NOR ((A NOR A) NOR (B NOR B))
Bruce: Look, A NOR B is really just ((A NAND A) NAND (B NAND B)) NAND ((A NAND A) NAND (B NAND B))
(Edited because my lines of text got run together)
Edited again: I'm not trying to say either is a workable path to AI-completeness, just that showing you can build any device of category X (classified by ultimate function, ignoring internal workings) out of devices of category Y doesn't mean that Xs have to be made out of Ys.
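Both identities above check out mechanically; here's a quick truth-table verification (my own sketch):

```python
def nand(a, b):
    return not (a and b)

def nor(a, b):
    return not (a or b)

def nand_from_nors(a, b):
    # A NAND B == ((A NOR A) NOR (B NOR B)) NOR ((A NOR A) NOR (B NOR B))
    x = nor(nor(a, a), nor(b, b))  # x = A AND B
    return nor(x, x)               # NOT x = A NAND B

def nor_from_nands(a, b):
    # A NOR B == ((A NAND A) NAND (B NAND B)) NAND ((A NAND A) NAND (B NAND B))
    x = nand(nand(a, a), nand(b, b))  # x = A OR B
    return nand(x, x)                 # NOT x = A NOR B

for a in (False, True):
    for b in (False, True):
        assert nand_from_nors(a, b) == nand(a, b)
        assert nor_from_nands(a, b) == nor(a, b)
print("both identities hold on all inputs")
```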
I must be falling to the dark side because I read this and thought “so this is how I can convince people of things: give them a dollar to say they agree with me.”
You could just split the money among a whole bunch of different charities. That way no one in particular is shamed by the news stories that result.
If you're still eating a pseudo-vegetarian diet and are motivated by the cruelty argument, you should probably check out: http://www.utilitarian-essays.com/suffering-per-kg.html
and
It seems eating eggs might be much worse than eating most kinds of meat (except fish).
Can I just smash the AI? If I am in the box, then "smash the AI" is the output of my algorithm, and the real me outside will do the same. I'd take the death of several million copies of me over a thousand subjective years of torture each, and also over letting that AI have its way with its light cone.
Can't altruistic rationalists, who want to be right as individuals but want the group to be right even more than they want to be known to be right, avoid information cascades? All they have to do is form their own private opinions, and then, even if they are swayed toward the majority opinion by the evidence of the other rationalists' opinions, pretend to be contrarians, because of the consequences that will have for the rest of the group (providing it more information). Or at least say something like, "I accept the majority opinion because the majority accepts it and they are unlikely to be wrong, but here are some arguments against that opinion I came up with anyway."
It's not CDT-rational to bid one dollar for 20 dollars if there is a high probability that others will be bidding as well, because you are unlikely to actually make that $19 profit; you are likely to get $0 for your $1. And if you know in advance that you would decide to pour more money in when you are being outbid, then the expected utility of bidding $1 is even lower, because you will be paying even more for nothing.
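A back-of-the-envelope version of that, with made-up probabilities just to show the shape of the calculation:

```python
# Illustrative assumptions, not from the post: suppose the chance that at
# least one other person bids is 0.95.
p_outbid = 0.95
prize = 20.0

# One-shot view: bid $1, win $20 if nobody else bids, lose the $1 otherwise.
ev_single = (1 - p_outbid) * (prize - 1.0) + p_outbid * (-1.0)
print(ev_single)  # 0.05 * 19 - 0.95 * 1 ~ 0.0

# If you also know you'd escalate once outbid (say, an expected $5 of sunk
# bids in a bidding war), the expected value of entering drops further:
expected_sunk = 5.0
ev_escalating = (1 - p_outbid) * (prize - 1.0) + p_outbid * (-expected_sunk)
print(ev_escalating)  # 0.05 * 19 - 0.95 * 5 ~ -3.8
```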
The Doubling Box
If your utility function is bounded and you discount the future, then pick an amount of time from now, epsilon, such that the discounting between now and then is negligible. Then imagine that the box disappears if you don't open it by then. At t = now + epsilon*(1 - 2^-1), the utilons double; at t = now + epsilon*(1 - 2^-2), they double again; and so on, so that infinitely many doublings fit in before the deadline.
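Concretely, the schedule looks like this (a sketch; epsilon, the discount rate, and the starting utilons are arbitrary illustrative values):

```python
# Sketch of the schedule: the box vanishes at now + epsilon, and the utilons
# double at t_n = now + epsilon * (1 - 2**-n) for n = 1, 2, 3, ...
epsilon = 1.0
discount_rate = 1e-6  # assumed negligible over [now, now + epsilon]
base_utilons = 1.0

for n in range(1, 8):
    t_n = epsilon * (1 - 2.0 ** -n)
    discounted = base_utilons * 2.0 ** n * (1 - discount_rate) ** t_n
    print(f"open at t_{n} = {t_n:.4f}: discounted utilons ~ {discounted:.4f}")

# Each later opening time is strictly better, yet waiting until the deadline
# itself gets you nothing -- so there is no optimal time to open the box.
```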
But if your discounting is so great that you do not care about the future at all, I guess you’ve got me.
This isn’t the St. Petersburg paradox (though I almost mentioned it) because in that, you make your decision once at the beginning.
If it’s impossible to win, because your opponent always picks second, then every choice is optimal.
If you pick simultaneously, picking the highest number you can describe is optimal, so that’s another situation where there is no optimal solution for an infinite mind, but for a finite mind, there is an optimal solution.
P(t) = 0.
Yeah, that's what I meant. Also, instead of doubling, make the utilons approach the bound exponentially, with the gap to the bound halving each time.
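Something like this (sketch; the bound B and starting utilons are arbitrary):

```python
# Bounded variant: instead of doubling, each step halves the remaining gap
# to the bound B, i.e. u_n = B - (B - u0) * 2**-n.
B = 100.0
u0 = 1.0

for n in range(8):
    u_n = B - (B - u0) * 2.0 ** -n
    print(f"after {n} steps: {u_n:.4f} utilons (bound {B})")

# The sequence is strictly increasing and bounded by B, so "wait one more
# step" is always better, and the paradox survives a bounded utility function.
```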
What’s wrong with not having any more reason to live after you get the utilons?
But I do not choose my utility function as a means to get something. My utility function describes what I choose means to get. And I'm pretty sure it's unbounded.
Better yet: every day, count one more integer toward the highest number you can think of; when you reach it, flip the coins. If they don't all come up heads, start over again.
So you can have infinite expected utility, but be guaranteed to have finite utility? That is weird.
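(For concreteness, it's the standard St. Petersburg bookkeeping. With illustrative numbers: if the payoff after n rounds is 2^n utilons and that payoff happens with probability 2^-n, then E[U] = sum over n of 2^-n * 2^n = 1 + 1 + 1 + ... = infinity, yet every individual outcome 2^n is finite.)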
I’m not sufficiently familiar with my own internal criteria of interesting-ness to explain to you why I find it interesting. Sorry you don’t as well.
I think you went wrong when you said:
because Omega doesn’t reward people for their choice to pick box B, he rewards them for being implementations of any of the many algorithms that would pick box B.
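A toy version of that point (hypothetical sketch; obviously the real thought experiment isn't a Python function):

```python
# Hypothetical sketch: Omega rewards *algorithms*, not choices. It decides
# the box contents by simulating the agent's decision procedure in advance.

def omega_fill_boxes(agent):
    # Simulate the agent to predict its choice; reward predicted one-boxers.
    predicted = agent(box_a_visible=1_000, box_b_opaque=True)
    return 1_000_000 if predicted == "one-box" else 0

def one_boxer(box_a_visible, box_b_opaque):
    return "one-box"

def two_boxer(box_a_visible, box_b_opaque):
    return "two-box"

for agent in (one_boxer, two_boxer):
    b_contents = omega_fill_boxes(agent)
    choice = agent(box_a_visible=1_000, box_b_opaque=True)
    payoff = b_contents if choice == "one-box" else b_contents + 1_000
    print(agent.__name__, payoff)

# one_boxer ends up with 1,000,000 and two_boxer with 1,000: the payoff
# difference comes from which algorithm you are, not from the choice event.
```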
I think that the causal decision theory algorithm is the winning way for problems where your mind is not read (when you take into account that causal decision theory can be swayed to make choices so as to deceive others about your real algorithm). Problems where your mind is read do not usually show up in real life. I think there is no winning way for conceivable universes in general, so I want to be an implementation of the winning algorithm for this universe, which seems to be causal decision theory.