Yes, they do have a separate name: “the singularity.” The post pins a lot of faith in many utopian things becoming possible “after the singularity,” and that seems to be what you’re confusing with alignment. The assumption is that there will be a point where AIs are so “intelligent” that they are capable of remarkable things (and, in that post, it is hoped that these utopian outcomes follow from that wild increase in intelligence). “Alignment,” by contrast, refers more generally to making a system (including but not limited to an AI) fine-tuned to achieve some kind of goal.
Let’s start with the simplest kind of system for which it makes sense to talk about “alignment” at all: a system which has been optimized for something, or is at least well compressed by modeling it as having been optimized.
Later on he repeats:
The simplest pattern for which “alignment” makes sense at all is a chunk of the environment which looks like it’s been optimized for something. In that case, we can ask whether the goal-it-looks-like-the-chunk-has-been-optimized-for is “aligned” with what we want, versus orthogonal or opposed.
The “problem” is that “what we want” bit, which is discussed at length.