I don’t think we know that instrumental convergence will happen, necessarily. If a human-level AI understands that this behavior is wrong AND genuinely cares (as it seems to now), and we then augment its intelligence, it’s pretty likely that it won’t do it.
Sure, it could change its mind, but we’d like a bit more assurance than that.
I guess I’ll have to find a way to avoid twiddling my thumbs until we do know what AGI will look like.