What are the chances that we get lucky and acting altruistically toward other sentient beings turns out to be a convergent drive? My guess is most people here on LessWrong would say close to epsilon, but I wonder what the folks at DeepMind would say…
(The convergent drive would be to play tit-for-tat until you've observed enough to solve the POMDP of the other agent, then betray/exploit them maximally the instant you have enough information to conclude that defection is more rewarding...)
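A minimal sketch of what that strategy looks like in an iterated prisoner's dilemma, assuming standard payoffs and a deliberately crude opponent model (all names and the observation threshold here are hypothetical illustrations, not anything from the comment):

```python
# Hypothetical sketch: an iterated prisoner's dilemma agent that plays
# tit-for-tat while gathering observations, then switches to permanent
# defection once its (naive) model says exploitation pays more.

COOPERATE, DEFECT = "C", "D"

# Standard prisoner's dilemma payoffs, keyed by (my_move, their_move).
PAYOFF = {
    (COOPERATE, COOPERATE): 3,
    (COOPERATE, DEFECT): 0,
    (DEFECT, COOPERATE): 5,
    (DEFECT, DEFECT): 1,
}

class ExploitWhenConfident:
    def __init__(self, min_observations=20):
        self.history = []  # list of (my_move, their_move) pairs
        self.min_observations = min_observations
        self.exploiting = False

    def _estimated_cooperation_rate(self):
        # Crude opponent model: how often have they cooperated so far,
        # ignoring that their moves depend on ours?
        coops = sum(1 for _, theirs in self.history if theirs == COOPERATE)
        return coops / len(self.history)

    def choose(self):
        if self.exploiting:
            return DEFECT
        if len(self.history) >= self.min_observations:
            p = self._estimated_cooperation_rate()
            # Expected one-shot payoff of defecting vs. cooperating
            # against an opponent who cooperates with probability p.
            ev_defect = p * PAYOFF[(DEFECT, COOPERATE)] + (1 - p) * PAYOFF[(DEFECT, DEFECT)]
            ev_cooperate = p * PAYOFF[(COOPERATE, COOPERATE)] + (1 - p) * PAYOFF[(COOPERATE, DEFECT)]
            if ev_defect > ev_cooperate:
                self.exploiting = True  # the "betray the instant it pays" switch
                return DEFECT
        # Until then, behave as tit-for-tat: copy the opponent's last move.
        return self.history[-1][1] if self.history else COOPERATE

    def record(self, my_move, their_move):
        self.history.append((my_move, their_move))
```

Note that this naive frequency model always concludes defection pays (4p + 1 > 3p for any p), so against a reciprocator like tit-for-tat it wrongly destroys a cooperative equilibrium; actually solving the POMDP, as the comment says, means modeling how the other agent responds to your betrayal, not just their base rate of cooperation.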
Paperclip maximizers aren’t necessarily sentient, and Demis explicitly says in his episode that it’d be best to avoid creating sentient AI, at least initially, to sidestep the ethical issues that would raise.