Perhaps the main goal of AI safety is to improve the final safety/usefulness Pareto frontier we end up with once very powerful (and otherwise risky) AIs exist.
Alignment is one mechanism that can improve this Pareto frontier.
Not using powerful AIs at all establishes a point on the frontier with high safety but low usefulness.
(Usefulness and safety can blend into each other, e.g. failing to get useful work out of AIs is itself dangerous, but I still think this is a useful approximate frame in many cases.)