Philosophy, psychology and ethics are useful but cannot solve AI safety by themselves.
AI Alignment means aligning an AI's goals with human goals, which is to say, with human psychology and ethics. So the Outer Alignment problem is largely a problem of psychology and ethics. Yes, we also need to solve the Inner Alignment problem, so you're correct that these fields can't do it "by themselves", but they are a big deal. Anthropic seems to believe the answer to Inner Alignment is Constitutional AI. Some people also think LLM Psychology is important.