Do you believe that there’s truly no chance a powerful AI wouldn’t immediately dominate human society? Or restated: will a strong AI, if created, necessarily be unfriendly and also able to take control of human society (likely meaning exponentially self-improving)?
Will a strong AI, if created, necessarily be unfriendly?
It’s very likely, but not inevitable.
Will it necessarily be able to take control of human society (likely meaning exponentially self-improving)?
If it’s substantially smarter than humans, yes, whether or not massively recursive self-improvement plays a role. By “substantially smarter”, I mean an intelligence such that the difference between Einstein and the average human looks like a rounding error in comparison.
What do you think a meaningful probability, if one can be assigned, would be for the first strong AI to exhibit both of those traits? (Not trying to “grill” you; I can’t even imagine a good order of magnitude to put on that probability)
I don’t think I can come up with numerical probabilities, but I consider “massively smarter than a human” and “unfriendly” to be the default values for those characteristics, and don’t expect the first AGI to differ from the default unless there is a massive deliberate effort to make it otherwise.
The answers to your questions are, in order:
Who does the AI want to be held accountable for the crime?
Who does the AI want to get credit for the invention?
Does the AI want the court to recognize their contract?
This is presuming, of course, that all humans have not been made into paperclips.