Will a strong AI, if created, necessarily be unfriendly?
It’s very likely, but not inevitable.
Will it necessarily be able to take control of human society (presumably via exponential self-improvement)?
If it’s substantially smarter than humans, yes, whether or not massively recursive self-improvement plays a role. By “substantially smarter”, I mean an intelligence such that the difference between Einstein and the average human looks like a rounding error in comparison.
What do you think would be a meaningful probability, if one can be assigned, for the first strong AI to exhibit both of those traits? (Not trying to “grill” you; I can’t even imagine a good order of magnitude to put on that probability.)
I don’t think I can come up with numerical probabilities, but I consider “massively smarter than a human” and “unfriendly” to be the default values for those characteristics, and don’t expect the first AGI to differ from the default unless there is a massive deliberate effort to make it otherwise.