Which brings up the question: suppose that your values are defined in terms of an ontology which is not merely false but actually logically inconsistent, though in a way that is too subtle for you to currently grasp. Is it rational to try to learn the logical truth, and thereby lose most or all of what you value? Should we try to hedge against such a possibility when designing a friendly AI? If so, how?
You do not lose any options by gaining more knowledge. If the optimal response, when your values are defined in terms of an inconsistent ontology, is to go ahead and act as if the ontology were consistent, then you can still choose to do so even once you find out the dark secret. You can only gain from knowing more.
If your values are such that they do not even allow a mechanism for creating a best-effort approximation of values in the case of ontological enlightenment, then you are out of luck no matter what you do. Even if you explicitly value ignorance of the fact that nothing you value can have coherent value, the incoherency of your value system makes that value on ignorance meaningless too.
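As a toy illustration of what such a "best-effort" mechanism could look like (all names here are hypothetical, not anyone's proposed design): values bound to concepts in an old ontology get re-bound to their closest counterparts in a revised ontology, keeping whatever survives the translation.

```python
# Toy sketch: values bound to an ontology, with a best-effort
# re-binding step for when that ontology must be revised.
# All names are illustrative, not a real FAI design.

def rebind_values(values, translation):
    """Map each (concept, weight) pair into a new ontology.

    `translation` maps old concepts to their closest new
    counterpart, or to None when nothing survives.
    """
    new_values = {}
    for concept, weight in values.items():
        successor = translation.get(concept)
        if successor is not None:
            new_values[successor] = new_values.get(successor, 0.0) + weight
    return new_values

# Values defined in a naive ontology...
old_values = {"soul": 0.6, "happiness": 0.4}
# ...and a translation into a revised ontology in which "soul"
# turns out to have no coherent counterpart.
translation = {"soul": None, "happiness": "positive_experience"}

print(rebind_values(old_values, translation))
# {'positive_experience': 0.4}
```

A value system with no such re-binding step, however crude, has nothing to fall back on when the ontology breaks; that is the "out of luck" case.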
Should we try to hedge against such a possibility when designing a friendly AI? If so, how?
Define the most basic parts of the value system in an ontology that has as little chance as possible of being inconsistent. Reference to actual humans can ensure that a superintelligent FAI’s value system will be logically consistent if it is in fact possible for a human to have a value system defined in a consistent ontology. If that is not possible then humans are in a hopeless position. But at least I (by definition) wouldn’t care.
If your values are such that they do not even allow a mechanism for creating a best-effort approximation of values in the case of ontological enlightenment, then you are out of luck no matter what you do.
If preference is expressed in terms of what you should do, not what’s true about the world, new observations never influence preference, so we can fix it at the start and never revise it (which is an important feature for constructing FAI, since you only ever have a hand in its initial construction).
(To whoever downvoted this without comment—it’s not as stupid an idea as it might sound; what’s true about the world doesn’t matter for preference, but it does matter for decision-making, as decisions are made depending on what’s observed. By isolating preference from influence of observations, we fix it at the start, but since it determines what should be done depending on all possible observations, we are not ignoring reality.)
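A minimal way to picture the distinction being drawn here (a toy sketch under loose assumptions, not the commenter's actual formalism): preference is a fixed mapping from possible observations to what should be done. Observations select which branch gets executed, but they never rewrite the mapping itself.

```python
# Toy sketch: preference as a fixed mapping from possible
# observations to actions, set once at construction time.
# Observations pick the branch; they never modify the mapping.

PREFERENCE = {
    "ontology_seems_consistent": "act_on_values_directly",
    "ontology_seems_inconsistent": "fall_back_to_approximation",
}

def decide(observation):
    # Decision-making depends on what is observed...
    return PREFERENCE[observation]

# ...but the preference itself was fixed in advance and is
# never revised in light of the observation.
print(decide("ontology_seems_inconsistent"))
# fall_back_to_approximation
```

Because the mapping already covers every possible observation, fixing it at the start is compatible with responding to whatever reality turns out to be.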
If preference is expressed in terms of what you should do, not what’s true about the world, new observations never influence preference, so we can fix it at the start and never revise it (which is an important feature for constructing FAI, since you only ever have a hand in its initial construction).
In the situation described by Roko the agent has doubt about its understanding of the very ontology that its values are expressed in. If it were an AI that would effectively mean that we designed it using mathematics that we thought was consistent but turns out to have a flaw. The FAI has self improved to a level where it has a suspicion that the ontology that is used to represent its value system is internally inconsistent and must decide whether to examine the problem further. (So we should have been able to fix it at the start but couldn’t because we just weren’t smart enough.)
The FAI has self improved to a level where it has a suspicion that the ontology that is used to represent its value system is internally inconsistent and must decide whether to examine the problem further.
If its values are not represented in terms of an “ontology”, this won’t happen.
You do not lose any options by gaining more knowledge. If the optimal response, when your values are defined in terms of an inconsistent ontology, is to go ahead and act as if the ontology were consistent, then you can still choose to do so even once you find out the dark secret. You can only gain from knowing more.
See the example of the theist (above). Do you really think that the best possible outcome for him involves knowing more?
How could it be otherwise? His confusion doesn’t define his preference, and his preference doesn’t set this particular form of confusion as being desirable. Maybe Wei Dai’s post is a better way to communicate the distinction I’m making: A Master-Slave Model of Human Preferences (though it’s different, the distinction is there as well).
See the example of the theist (above). Do you really think that the best possible outcome for him involves knowing more?
No, I think his values are defined in terms of a consistent ontology in which ignorance may result in a higher-value outcome. If his values could not in fact be expressed consistently then I do hold that (by definition) he doesn’t lose by knowing more.