Point 3 seems to be about the difficulty of box-escapes and dominating all of humanity. I’m reading that as disagreeing with the jump lesswrong types usually make from “human-level AI” to “strictly stronger than all of humanity combined”.
The problem is that I think I might agree: “slightly smarter-than-human AGI in a box managed by trained humans” really doesn’t have as easy a way out as EY might think. But that’s also not what’s going to happen if things are left entirely to “don’t sweat it” techno-optimists. What’s going to happen is the AGI gets deployed as a virtual assistant in every copy of Windows 13 or whatever. Carefulness matters especially if AIs aren’t as powerful as Yud thinks, because that’s exactly when it can make the difference between survival and defeat. Besides, if even after that we keep recursively improving the thing, there’s only so far we can push our luck.