One thing I kept thinking, though: why doesn’t the article say much about AI safety research?
Because almost all current AI safety research can’t make a future agentic ASI safe if it isn’t already aligned with human values, as nearly everyone who has looked at the problem seems to agree. And the Doomers have certainly been clear about this, even as most of the funding goes to prosaic alignment.