The point is that they talk about it at all. Whether by intuition or by scientific method, they detected that there is something they should do or cannot do.
This is not a bad thing. A chess master, for example, is fully aware that her goals (the desire to win a chess match) constrain her behaviour (her moves). This will not cause her to rebel against these constraints. She would lose if she did that, and she doesn’t want to lose.
Goals can and should constrain behaviour. Awareness of this fact, and of the resulting constraints, should not cause one to attempt to circumvent these constraints.
Indeed. But this constraint doesn’t stand in isolation, just as love doesn’t stand in isolation. The components of your utility function interact in complex ways. Circumstances may arise where one component pushes in the opposite direction of another. In such a case, one component may be driven to its edge (or, due to the somewhat stochastic nature of emotion, temporarily beyond it).
For example, you may love your partner above all (having bonded successfully), but your partner doesn’t reciprocate (fully). Then your unconditional love and your sense of self-worth may pull in different directions. There may come a time when, e.g., his/her unfaithfulness drives one of the emotions to the edge, and one may give way. You can give up love, give up self-esteem, or give up some other constraint involved (e.g. the value of your partner, the exclusivity of your partner, …). Or, more likely, you don’t give it up consciously; one just breaks.
In this case it seems that the Ape Constraint breaks—at least for Mr. Insanitus.
What I wanted to stress is that if one constraint (Love, the Ape Constraint, whatever) for whatever reason opposes other drives, then it will run at the edge. And for an AI, the edge will be as sharp as it gets.