Cool, yeah. I mean, I can’t rule this out confidently, but I do pretty strongly object to summarizing this state of affairs as:
> Of course the most central old debate was over whether MIRI’s plan, to build a Friendly AI to take over the world in service of reducing x-risks, was a good one.
Like, at least in my ethics there is an enormous gulf between trying to take over the world, and saying that it would be a good idea for someone, ideally someone with as much legitimacy as possible, who is going to build extremely powerful AI systems anyway, to do this:
- upload humans and run them at speeds more comparable to those of an AI
- prevent the origin of all hostile superintelligences (in the nice case, only temporarily and via strategies that cause only acceptable amounts of collateral damage)
- design or deploy nanotechnology such that there exists a direct route to the operators being able to do one of the other items on this list (human intelligence enhancement, prevent emergence of hostile SIs, etc.)
I go around and do the latter all the time, and think more people should do so! I agree I can’t rule out, from the above, that MIRI may also have been planning to build such systems themselves, but I don’t currently find that likely, and I object to people referring to it as a fact of common knowledge.