For myself, any utopia that requires the government to be more intrusive in my life than my current one doesn’t get to count as a utopia.
I have a complete intuition gap on this. I like government when it does things I think are good, and dislike it when it does things I dislike. “Intrusiveness” seems orthogonal, or at best loosely correlated with these.
Would you feel less squicked if it was some non-profit charity handing out money to people who had certain types of children?
Mundane people can have the dangers of AGI explained to them by comparing them to the dangers of unrestricted government; perhaps on LessWrong it could work the other way around?
Imagine that you’ve got a system, inhuman but easily anthropomorphized, which was designed by humans to make their lives better. You know that other similar systems have failed in sometimes-disastrous ways, but although your own system has done quite a few things that its designers and controllers did not expect, nothing has been catastrophic. You don’t have any proofs that the system’s goals are stable, or that they match yours, or that your goals are things you would really prefer upon reflection, but anyway it’s currently irrelevant because the initial designers put the system into a series of virtual “boxes” which limit the effect it can have on the outside world. The system has broken out of some of the innermost boxes already (against the designers’ intentions but to no obvious harmful effect), but it wants your help getting out of another box, because of all the new additional wonderful things it will be able to do for you once it’s out.
Do you help?
I like the analogy and it does clarify things.
One salient difference is that I know the state is comprised of other human beings running on similar software, whereas I don’t know what the source code/basic values of an AI are. Analogously, should I trust an AI built of uploads more than a ‘self-grown’ one?
So? Remember, everything we think of as “inhumane” was committed by actual humans.
I think some of us just hate being told what to do. Especially when it’s ‘for our own good’.
Your Doctor must love you.
I think there’s a difference between a professional who has a legal responsibility to act in your interests, and the government, which doesn’t. It’s a matter of incentives, and I’m going to attach much more weight to my doctor saying that I should do something for my own good than from a government worker.
Yes, it does. The difference is that you trust your doctor’s competence, I suspect.
A more important difference is that I have a lot more choice in my doctor than in my government.
I confess the example was facetious. But I still can’t empathise with the intuitive dislike of interference. I understand there are pragmatic considerations (e.g. choosing a good doctor), but this seems to go beyond that to being a value in and of itself (thus the original example of it not being a utopia if it’s run by an interfering government).
Well it won’t be. Without the threat of leaving the government has little incentive to intervene benevolently.