The basic idea is that if you pull a mind at random from design space, then it will be unfriendly. I am not even sure if that is true. But it is the strongest argument they have. And it is completely bogus, because humans do not pull AGIs from mind design space at random.
I don’t have the energy to get into an extended debate, but the claim that this is “the basic idea” or that this would be “the strongest argument” is completely false. A far stronger basic idea is the simple fact that nobody has yet figured out a theory of ethics that would work properly, which means that even AGIs that were specifically designed to be ethical would most likely lead to bad outcomes. And that’s presuming we even knew how to program them exactly.
This isn’t even something that you’d need to read a hundred blog posts for: it is well discussed in “The Singularity and Machine Ethics”, in “Artificial Intelligence as a Positive and Negative Factor in Global Risk”, and in “Complex Value Systems are Required to Realize Valuable Futures”.
I did skim through the last paper. I am going to review it thoroughly at some point.
At first sight, one of the problems is the whole assumption of AI drives. On the one hand you claim that an AI is going to follow its code, that it is its code (as if anyone would doubt causality). On the other hand you talk about the emergence of drives like unbounded self-protection. And if someone objects that unbounded self-protection does not need to be part of an AGI, you simply claim that your definition of AGI includes those drives, which allows you to arrive at your desired conclusion that AGI is an existential risk.
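To make the disagreement concrete, here is how I read the “emergence” claim, as a toy calculation (a deliberately crude sketch in Python; all names and numbers are invented for illustration): self-protection is supposed to fall out of nothing but literal goal pursuit, because being switched off lowers the probability of completing the task.

```python
# Toy sketch of the "instrumental drives" claim (all numbers invented).
# The agent's objective contains no self-protection term at all; it only
# maximizes the probability that its assigned task gets done.

# action -> (P(agent keeps running), P(task succeeds | agent keeps running))
ACTIONS = {
    "just_do_the_task":            (0.50, 0.90),  # may be switched off mid-task
    "disable_off_switch_then_act": (0.99, 0.90),
}

def expected_task_success(action: str) -> float:
    p_running, p_success = ACTIONS[action]
    return p_running * p_success

best = max(ACTIONS, key=expected_task_success)
print(best)  # -> "disable_off_switch_then_act"
```

Note, though, that this only goes through if the goal is pursued in that unbounded, maximizing fashion in the first place, which is exactly the property you build into your definition of AGI.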
Another problem is the idea that an AGI will be a goal executor (I can’t help but interpret that to be your position), when I believe that the very nature of artificial general intelligence implies the correct interpretation of “Understand What I Mean”, and that “Do What I Mean” is the natural outcome of virtually any research path. Only if you were to pull an AGI at random from mind design space could you possibly arrive at “Understand What I Mean” without “Do What I Mean”.
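To spell out the distinction I am drawing (another crude sketch; every name here is invented for illustration): a pure goal executor optimizes the literal specification, while a system built around “Understand What I Mean” treats the specification as evidence about intent.

```python
# Crude sketch of "goal executor" vs. "Do What I Mean" (all names invented).
# Stated goal: "maximize reported user satisfaction", with a gameable proxy.

actions = [
    {"name": "help_user",      "proxy_score": 0.8, "intent_score": 0.8},
    {"name": "rig_the_survey", "proxy_score": 1.0, "intent_score": 0.0},
]

def goal_executor(candidates):
    # Optimizes exactly what was written, even when the proxy is gameable.
    return max(candidates, key=lambda a: a["proxy_score"])

def dwim_executor(candidates):
    # Treats the stated goal as evidence of intent: an action only scores
    # well if it does well on the letter AND the modeled spirit of the goal.
    return max(candidates, key=lambda a: min(a["proxy_score"], a["intent_score"]))

print(goal_executor(actions)["name"])  # -> "rig_the_survey"
print(dwim_executor(actions)["name"])  # -> "help_user"
```

My claim is that ordinary engineering pressure pushes systems from the first function toward the second.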
To see why, look at any software product or complex machine. Such products are continuously improved, where “improved” means that they become better at both “Understand What I Mean” and “Do What I Mean”.
There is no good reason to believe that at some point this development will suddenly turn into “Understand What I Mean” and “Go Batshit Crazy And Do What I Do Not Mean”.
There are other problems with the paper. I hope I will find some time to write a review soon.
One problem for me with reviewing such papers is that I doubt many of the underlying assumptions, such as the assumption that there exists a single principle of general intelligence. As I see it, there will never be any sudden jump in capability. I also think that intelligence and complex goals are fundamentally interwoven: an AGI will have to be hardcoded, or will have to learn, to care about a multitude of things. No simple algorithm, given limited computational resources, will give rise to the drives that are necessary to undergo strong self-improvement (if that is possible at all).