This is potentially a follow-up to my AI 2027 forecast, An “Optimistic” AI Timeline, depending on how hard people roast me for this lol.
In the title you say AI was “aligned by default”, which to me makes it sound like any sufficiently advanced AI is automatically moral, but in the story you have a particular mechanism—explicit simulation of an aligned AI, which bootstraps that AI into being. Did I misinterpret the title?
You didn’t really misinterpret it. I was using the term more loosely than most people do, to mean that you don’t need a fine-grained technical solution and that a very basic trick is enough for alignment. I realize most people use the term differently, though, so I’ll change the wording.