In my estimation, the expected utility of the Singularity Institute’s budget grows much faster than linearly with cash. But I would be most disappointed if the institute sank all its income into triple-rollover lottery tickets.
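(A minimal sketch of that claim, with invented numbers: if utility grows faster than linearly with cash, i.e. is convex, then by Jensen’s inequality even an actuarially unfair gamble such as a lottery ticket can come out ahead in expected utility. The budget, jackpot, and win probability below are all made up for illustration.)

```python
# Toy illustration (invented numbers, not SIAI's): with a utility function
# that grows faster than linearly in money, even an actuarially *unfair*
# lottery ticket can have positive expected utility.

def u(dollars):
    """Hypothetical super-linear utility: doubling the budget more than
    doubles the good done."""
    return dollars ** 2

budget = 1_000_000        # current budget (made up)
ticket = 1                # ticket price
jackpot = 100_000_000     # triple-rollover-sized jackpot (made up)
p_win = 5e-9              # expected payout $0.50 per $1 ticket: a bad bet in dollars

eu_hold = u(budget)
eu_buy = p_win * u(budget - ticket + jackpot) + (1 - p_win) * u(budget - ticket)

print(eu_buy > eu_hold)   # True: convex utility recommends the unfair gamble
```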
...Now I’m stuck wondering why they don’t do that. Eliezer tries to follow expected utility, AFAIK.
Obvious guess: Eli^H^H^H Michael Vassar doesn’t think SIAI’s budget shows increasing marginal returns. (Nor, for what it’s worth, can I imagine why it would.)
That one’s easy: successfully saving the world requires more money than they have now, and if they don’t reach that goal, it makes little difference how much money they raise. Eliezer believes most non-winning outcomes are pretty much equivalent:
“Mostly, the meddling dabblers won’t trap you in With Folded Hands or The Metamorphosis of Prime Intellect. Mostly, they’re just gonna kill ya.” (from here)
But cf. also:
And I probably should defer to their judgement on this, as they certainly know more than I do about the SIAI’s work and what it could do with more money.
I was simply saying that in my estimation, expected utility would recommend that they splurge on triple-rollover lottery tickets—but I’m still happy that they don’t.
(Just in case my estimation is relevant: I feel the SIAI has a decent chance of moving the world towards an AI that is non-deadly, useful, and doesn’t constrain humanity too much. With a lot more money, I think they could implement an AI that makes the world a fun heaven on earth. The expected utility of gambling for that is positive, but the increased risk of us all dying horribly doesn’t make it worthwhile to me.)
Maybe he thinks they’d get fewer donations in the long term if he did something like that.
Presumably they think there’s another approach that gives a higher probability of raising enough funds. Lotteries usually don’t pay out, after all.
That type of reasoning is not expected utility; it’s going by the most likely outcome, which is very different.
No, if utility is a step function of money, Pavitra’s reasoning agrees with expected utility.
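(A minimal sketch of that point, with invented numbers: when utility is a step function of money, expected utility collapses to the probability of clearing the threshold, so picking the fundraising approach most likely to raise enough *is* expected-utility maximization. The threshold and the two hypothetical fundraising strategies below are assumptions, not anyone’s real figures.)

```python
# If utility is a step function of money (enough = win, not enough = fail),
# then expected utility equals the probability of clearing the threshold,
# so "pick the approach most likely to raise enough" is EU maximization.

THRESHOLD = 10_000_000    # hypothetical "enough money to win" line

def u(dollars):
    return 1 if dollars >= THRESHOLD else 0

def expected_utility(outcomes):
    """outcomes: list of (probability, dollars_raised) pairs."""
    return sum(p * u(dollars) for p, dollars in outcomes)

lottery = [(1e-7, 50_000_000), (1 - 1e-7, 0)]          # rarely huge, usually nothing
conventional = [(0.01, 12_000_000), (0.99, 500_000)]   # modest odds of clearing the bar

# Each EU equals P(raised >= THRESHOLD); the conventional approach wins here.
print(expected_utility(lottery), expected_utility(conventional))
```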
Does the SIAI really have an approach to fundraising that’s better than lotteries? What is it then?
Fraud has the right payoff structure. When done at the level that SIAI could probably manage, it gives significant returns, and the risk is concentrated heavily in the low-probability ‘get caught and have your entire life completely destroyed’ region. If not raising enough money is an automatic fail, then this kind of option is favoured by the mathematics (albeit not by ethics).
The point of the article you linked to behind the word ethics is that upholding ethics is rational.
Precisely the reason I included it.
(Note that the lack of emphasis on ‘is’ is mine. I also do not link ‘rational’ to ‘shut up and multiply’ in the context of decision-making with ethical injunctions. It is more like ‘shut up, multiply, and then do a sanity check on the result’.)
You’re still treating the ethics as a separate step from the math. I’m arguing that the probability of making a mistake in your reasoning should be part of the multiplication: you should be able to assign an exact numerical confidence to your own sanity, and evaluate the expected utility of various courses of action, including but not limited to asking a friend whether you’re crazy.
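(A minimal sketch of the calculation being proposed here, with invented probabilities and utilities: treat ‘my reasoning may be broken’ as one more term in the multiplication, and price ‘ask a friend first’ as just another action whose expected utility can be compared.)

```python
# Fold your chance of being mistaken into the expected-utility sum itself,
# and evaluate "ask a friend whether you're crazy" as one more action.
# All numbers below are invented for illustration.

p_sane = 0.95             # your confidence in your own reasoning (made up)

# Utilities of acting on the clever-but-alarming plan:
u_act_if_right = 100      # the plan really was good
u_act_if_crazy = -10_000  # the plan was a symptom, and you acted on it
u_abstain = 0

eu_act = p_sane * u_act_if_right + (1 - p_sane) * u_act_if_crazy
eu_abstain = u_abstain

# Asking a friend first: assume their verdict is right 90% of the time,
# and you only act on a thumbs-up.
p_friend_correct = 0.90
eu_ask = (p_sane * p_friend_correct * u_act_if_right
          + (1 - p_sane) * (1 - p_friend_correct) * u_act_if_crazy)

print(eu_act, eu_abstain, eu_ask)   # acting blind loses; asking first wins
```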
Yes, more or less. I do not rule out returning to math once the injunction is triggered either to reassess the injunction or to consider an exception. That is the point. This is not the same principle as ‘allow for the chance that I am crazy’.
If I could do this reliably then I would not need to construct ethical injunctions to protect me from myself. I do not believe you are correctly applying the referenced concepts.
That’s like saying “If I could build a house reliably then I would not need to protect myself from the weather.” Reliably including the probability of error in your multiplication constitutes following ethical injunctions to protect you from yourself. Ethics does not stop being ethics just because you found out that it can be described mathematically.
I do not agree. We are each using the phrase ‘ethical injunction’ to describe a different concept.
And if there were only two steps...
Right. The assumption is that the final outcome is pass/fail—either you get enough money and the Singularity is Friendly, or you don’t and we all die (hopefully).
The outcome is uncertain, so the expected utility of money is certainly not a step function.
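(A minimal sketch of why, under an assumed logistic success model, purely my invention: even if realized utility is a pass/fail step, the probability of passing rises smoothly with funding, so expected utility as a function of money has no jump in it.)

```python
# Even if realized utility is a step (win/lose), the *probability* of
# winning plausibly rises smoothly with funding, so expected utility as a
# function of money is a smooth curve, not a step. The logistic model and
# its parameters are invented for illustration.

import math

def p_success(dollars, midpoint=10_000_000, scale=2_000_000):
    """Hypothetical logistic model: more money, smoothly better odds."""
    return 1 / (1 + math.exp(-(dollars - midpoint) / scale))

def expected_utility(dollars):
    # Step utility over outcomes: 1 if the project wins, 0 if it fails.
    return p_success(dollars) * 1 + (1 - p_success(dollars)) * 0

for m in (1_000_000, 5_000_000, 10_000_000, 20_000_000):
    print(m, round(expected_utility(m), 3))  # gradual rise, no jump
```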