A software agent with enough optimizing power to make this question relevant will do whatever it wants (i.e., whatever it has been programmed to want). Worrying about ownership at that point seems misplaced.
Suppose a powerful AI commits a serious crime, and its desire to commit that crime wasn’t explicitly programmed in, but instead emerged from completely benign-appearing learning rules it was given. Would the AI be held legally liable in court like a person, or just disabled and the creators held liable? Are the creators liable for the actions of an unfriendly AI, even if they honestly and knowledgeably attempted to make it friendly?
Or, say that same powerful AI designs something, by itself, that could be patented. Could the creators claim that patent outright, would it be shared, or would the AI get full credit?
If this strong AI enters a contract with a human (or another strong AI) for whatever reason, would/should a court of law recognize that contract?
These are all questions that seem relevant to the broader concept of ownership.
Do you believe that there’s truly no chance a powerful AI wouldn’t immediately dominate human society? Or restated: will a strong AI, if created, necessarily be unfriendly and also able to take control of human society (likely meaning exponentially self-improving)?
Will a strong AI, if created, necessarily be unfriendly?
It’s very likely, but not necessary.
Will it necessarily be able to take control of human society (likely meaning exponentially self-improving)?
If it’s substantially smarter than humans, yes, whether or not massively recursive self-improvement plays a role. By “substantially smarter”, I mean an intelligence such that the difference between Einstein and the average human looks like a rounding error in comparison.
What do you think a meaningful probability, if one can be assigned, would be for the first strong AI to exhibit both of those traits? (Not trying to “grill” you; I can’t even imagine a good order of magnitude to put on that probability)
I don’t think I can come up with numerical probabilities, but I consider “massively smarter than a human” and “unfriendly” to be the default values for those characteristics, and don’t expect the first AGI to differ from the default unless there is a massive deliberate effort to make it otherwise.
The answers to your earlier questions (about the crime, the patent, and the contract) are, in order:
Who does the AI want to be held accountable for the crime?
Who does the AI want to get credit for the invention?
Does the AI want the court to recognize their contract?
This is presuming, of course, that all humans have not been made into paperclips.