Hey, the proposal makes sense from an argument standpoint. I would refine it slightly and phrase it as “the set of cognitive computations that generates role-emulating behaviour in a given context also generates the qualia associated with that role” (sociopathy is the obvious counterargument here, and I’m really not sure what I think about the proposal that AIs are sociopathic by default). Thus, actors getting into character feel as if they are somehow sharing that character’s emotions.
I take the two problems a bit further, and would suggest that being humane to AIs may necessarily involve abandoning the idea of control in the strict sense of the word, so yes, treating them as peers or as children we are raising as a society. It may also be that the paradigm of control necessarily means that we would, as a species, become more powerful (with the assistance of the AIs) but not wiser (since we are ultimately the ones “helming the ship”), which would in my opinion be quite bad.
And as for the distinction between today’s and future AI systems, I think the line is blurring fast. Will check out Eleos!
Oops, that was the wrong link, sorry! Here’s the right link.
Ahh, I was slightly confused about why you called it a proposal. TBH I’m not sure why only 0.1% rather than any arbitrary percentage in (0, 100]. Otherwise it makes good logical sense.
Thanks! A lower percentage gets in the way of regular business less, and is thus more likely to actually be adopted by companies.