I think the endorsed answer is "QACI, as a self-contained field of research, is seeking which goal is safe, not how to get an AI to pursue that goal robustly". Also, if you can create an AI that makes correct guesses about galaxy-brained universe simulations, you can also create an AI that makes correct guesses about nanotech design, which is kinda exfohazardous.