Well, they seem to suffer from many of the same safety problems, but in even more severe forms.
Isn’t this the best-case scenario, though? If the future belongs to the best long-horizon planners, and deep learning is excellent at everything except long-horizon planning (because we necessarily train on short-horizon tasks before long ones), that vastly increases the cognitive capabilities humans can wield for our purposes before we get disempowered by our own creations. If human-like cognitive skills necessarily came with long-horizon competence (which seemed plausible ten years ago) such that the kind of AI we have now would definitely already be planning to kill us, that would be a much worse situation.
If the future belongs to the best long-horizon planners,
If you’re not very strategically competent in an absolute sense, there are many ways to lose the future besides being taken over by a better long-horizon planner. For example, you could build, or fail to prevent/control, some highly physically destructive technology, like grey goo, mirror life, or a doomsday nuclear device.
But I’m more worried about less legible risks, the most salient ones right now being that we lose our minds or values in some way (like AI psychosis, but more powerful and widespread, or just the ordinary kind of cultural drift that Robin Hanson talks about), or that we derail philosophical progress (e.g., by building AIs that are bad at philosophical reasoning but good at persuasion, or by locking in our values with AI help), such that we fail to eventually converge to correct conclusions about moral truths or our actual values.
If human-like cognitive skills necessarily came with long-horizon competence (which seemed plausible ten years ago) such that the kind of AI we have now would definitely already be planning to kill us, that would be a much worse situation.
Yes, we’re lucky in this way, but what are we using our reprieve for? What can we use our reprieve for that would actually lead to a good long-term outcome? (My OP is basically saying “I don’t know” to this question.)
I think we should be using it to make better plans. I agree that we’re bad at it, but I think that has a lot to do with not actually spending that much time on planning.
Whose values are you referring to here? Shared human values or particular values?
(EDIT: oops, this reply is to the wrong comment, oh well.) I guess my values, and other people’s values to the extent that I should care about them (with “should” having a to-be-determined meaning, pending a solution to metaethics).
Are you implying moral realism when you say this? (Not a critique, just trying to follow.) Thanks.
No, see my metaethical position/uncertainty in this post.