Along similar lines, should we consider Sam Altman, Dario Amodei, etc. to be more evil than Hitler, in terms of the expected number of people they will murder?
Murder is about intent. I think Dario believes that his actions reduce the chance of human extinction from AI, because Anthropic is doing a better job than its competitors.
When it comes to Sam Altman, I don't think he believes that OpenAI is likely to kill humanity.
Facebook, on the other hand, is intentionally and knowingly facilitating fraud: it believes the government is unlikely to punish it, and it tries to make as much money as it thinks it can get away with.
Do we actually have proof that it is intentional?
The expected number depends strongly on one's model of the world. It might well be negative, depending on one's "P(doom)" (itself a hand-wave standing in for a more careful analysis) and on the chance of addressing the 100% mortality rate we currently observe for humans.
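The claim that the sign flips with one's worldview can be made concrete with a toy calculation. All numbers below (the probabilities and the population figure) are invented placeholders for illustration, not anyone's actual estimates:

```python
# Toy sketch: the sign of the "expected number of deaths" flips
# depending on one's assumed P(doom) and assumed chance that AI
# helps address ordinary human mortality. Illustrative numbers only.

WORLD_POPULATION = 8e9  # rough figure for people alive today


def expected_net_deaths(p_doom: float, p_mortality_solved: float) -> float:
    """Expected deaths from AI catastrophe, minus expected lives saved
    if AI ends up addressing ordinary human mortality.
    Both probabilities are assumptions, not estimates."""
    expected_deaths = p_doom * WORLD_POPULATION
    expected_lives_saved = p_mortality_solved * WORLD_POPULATION
    return expected_deaths - expected_lives_saved


# A pessimist's placeholder numbers: the expectation is hugely positive.
print(expected_net_deaths(0.20, 0.05))  # ~ +1.2e9

# An optimist's placeholder numbers: the expectation is hugely negative.
print(expected_net_deaths(0.05, 0.50))  # ~ -3.6e9
```

The point is not either set of numbers but that reasonable-sounding inputs on both sides produce expectations of opposite sign, which is exactly why the questions below matter.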
The real questions for that situation are:
How does one handle high-variance situations with very high risks and very high rewards, regardless of expectation values (which we are not certain about)?
How does this depend on the degree of centralization of the decision-making (especially when disagreements are sharp and there is no trend toward broad consensus)?
No, it’s not possible for it to be negative. You’re not allowed to murder people even if you save an equal or greater number. If you invented a machine that had a 49% chance of killing me and a 51% chance of making me immortal, and you pointed it at me without permission, you would be committing a heinous crime and I’d be perfectly justified in self-defense. AI CEOs are doing the same thing at a much larger scale.
Well, observe that vaccinations have non-zero mortality and are often given to people who can't meaningfully consent. (This applies to many childhood medical interventions, and society does not differentiate between the right to life of children and of adults.)
Many other decisions involve environmental and other safety trade-offs with mortality implications, and they are made without unanimous consent.
So, while yours is a possible position to take, the current practices of human societies do not agree with it; they are more nuanced.
PS. Since you referenced WWII: obviously the Allies did not take the position that they were obligated to fully refrain from inflicting civilian deaths either, to say the least.