Name 3 things in the middle.
(examples chosen for being at different points in the spectrum between the two options, not for being likely)
Moral Universalism could be true in some sense, but not automatically compelling, and the AI would need to be programmed to find and/or follow it.
There could be a uniquely specified human morality that fulfills much of the same purpose Moral Universalism does for humans.
It might be possible to specify what we want in a more dynamic way than freezing in current customs.
My original post included this possibility: you make an AI that develops much of the morality itself (which it would really have to). edit: note that the AI in question may be just a theorem prover that tries to find some universal moral axioms, but is not itself moral or compelled to implement anything in the real world.
What about in 10 million years? 100 million? A straitjacket for intelligent life.
We would still want some limits derived from our values right now, e.g. so that society wouldn’t somehow steer itself toward suicide. Even rules like “it is good if 99% of people agree with it” can steer us into some really nasty futures over time. Another issue is the possible de-evolution of human intelligence. We would not want to lock in all of today’s customs, but some of today’s values would get frozen in.
The latter two examples would clearly not be required for the purpose of rejecting a dichotomy.
Name 1 then.
edit: and it’s not even a dichotomy. There are the hypothetical AIs which implement some moral absolute that is good for all cultures, all possible cultures, and everyone: one which we would invent, aliens would invent, whatever we evolve into would invent, etc. If those do not exist, then what exists that isn’t to some extent culturally specific to H. sapiens circa today?
The Unobtrusive Guardian. An FAI that concludes that humanity’s aversion to being ‘straitjacketed’ is such that it is never OK for it to interfere with what humans do themselves. It proceeds to navigate itself out of the way and waits until it spots an external threat, like a comet or hostile aliens. It then destroys those threats.
(The above is not a recommended FAI design. It is a refutation by example of an absolute claim that would exclude the above.)
Didn’t I myself describe it, and outline how this one also limits opportunities normally available to evolution, for instance? It’s a straitjacket on life only to a very small extent, as it does very little.