This is a highly relevant discussion, and it should inform the decision making of anyone contributing to the broader effort toward AI alignment.
It is worth pointing out, however, that this may be just one instance of a more general problem: any form of power or knowledge can be used for good or bad purposes, so advancing power and knowledge is always a double-edged sword.
It does not seem to me that there is any escape from the moral variance humans exhibit as part of their natural and developed proclivities.
Our only chance against power and knowledge as great as what AI enables is to illuminate, with real clarity, how these technical pieces can come together for one edge or the other.
Bringing the problem to light is our best chance.