I think it is insanely unethical that the large AI labs are not proactively decentralizing ownership, while their success is still uncertain
Maybe the problem is with the idea that something like that should have owners to begin with? In the “standard discussion model” we tend to use for these things, you’re talking about eternal control of the entire future. Giving that to a few thousand, or a few hundred thousand, people who happened to be stockholders at some critical time isn’t all that much better than giving it to a handful.
I don’t buy the idea that being the ones who built or funded a machine that took over the world should give you the right to run the world forever… not even if it took over through “non-force” means.
OpenAI seemed to be kind of going in the right direction at the beginning: “We’ll let you share in mundane profits, but if this thing FOOMs and remakes the world, then all bets are off. We are doing this for All Mankind(TM)”.
I think even they, like most people on Less Wrong, probably would have been unwilling to take what I think is the correct step after that: humans in general shouldn’t control such a thing, beyond setting its initial goals. But at least it seemed as though they were willing to explore the idea that a concept of ownership based on human effort becomes ridiculous in an economy that doesn’t run on human effort.
Maybe the problem is with the idea that something like that should have owners to begin with?
I would argue the problem is its being created at all. Suppose a new group called SocialAI builds an AGI that it intends to make entirely autonomous and independent once bootstrapped. The AGI then FOOMs and is aligned. That is a vastly better future than many other possibilities, but is it still ethically ok to create an intelligence, imbue it with your values, your choices, and your ideas, and then send it off to rule the world in a way that makes those values, choices, and ideas live forever, more important than anything else?
It’s like going back in time to write the Bible, if the Bible were also able to actively force people to abide by its tenets.
Strongly agree that no human is fit to own an AI which has “eternal control of the future”. If there is going to be ownership, though, better that it be a broader group of people (which would represent a greater plurality of values, if nothing else).
I also agree that in an economy which does not run on human effort, no one should own anything. But it seems hard to make that a reality, particularly in a way which applies to the most powerful people.
Disempower ’em?