I have sympathy for your fears, especially the worry that “not killing everyone” is not sufficient for an AI to be considered well-aligned (arguably, “killing everyone” at least prevents the worst possible scenarios from being realized). This seems to be an area where the line separating AI research from general ethics is blurry, and perhaps technically intractable for that reason.