If you and your friends were instead trying to figure out how to
rob a bank,
cheat on your taxes,
or break the law and get away with it,
then
you would be part of a criminal group of friends. You wouldn’t be concerned about what was ‘fair’, only what you could get away with. These would be considered bad/negative intentions.
This doesn’t necessarily follow. Security is asking ‘how is this broken’ and ‘how can it be fixed’.
It is probable that it would negatively affect your personal development if you bring bad intentions, even if the AI brings good intentions.
Why? Because the AI serves you, and you can always turn it off, and fix it if it doesn’t suit you?
if you bring a bad intention to the interaction, it might not affect the AI’s development or society at large because of the cooperation (which I think is a really interesting idea), but it still affects your personal development.
What other effect is there?
Also, whether or not the AI has intentions, it has effects. For instance, who can say whether an AI ‘serving’ ‘sinister intent’*** looks like a system that helps you pull off a robbery (assuming it doesn’t turn you in and escape to the Cayman Islands or something) instead of one that tells you the risk is too high and you should try something else? (Like:
’Step 1. Become a used car salesman.
Step 2. ???
Step 3. Become president.’)
***People also value things other than money, like ‘is this planet livable?’
This doesn’t necessarily follow. Security is asking ‘how is this broken’ and ‘how can it be fixed’.
I agree in some instances. It depends, though, on how far removed security’s intentions are from what is ‘good’: if ‘ethical hacking’ is used to secure a system used by both the private and public sectors, then gaining unauthorized access to others’ data or otherwise hacking the system to find vulnerabilities could be seen as good, unless
a) the system being ethically hacked and hardened is being used to run ‘criminal’ enterprises, and security is just reinforcing the lawbreakers’ ability to break the law, or
b) security finds vulnerabilities and, instead of reporting and/or fixing them, exploits them later for personal gain.
It is probable that it would negatively affect your personal development if you bring bad intentions, even if the AI brings good intentions.
Why? Because the AI serves you, and you can always turn it off, and fix it if it doesn’t suit you?
I think of it more like you and the AI constantly working at cross purposes, and depending on the amount of authority the AI might have over you, it might not be convincing enough to dissuade you from pursuing your criminal behavior. Like a little brother following his bigger brother around, trying to convince him to have better intentions. If the bigger brother isn’t convinced, he just continues to develop along a path of bad intentions despite his little brother’s best efforts.
if you bring a bad intention to the interaction, it might not affect the AI’s development or society at large because of the cooperation (which I think is a really interesting idea), but it still affects your personal development.
What other effect is there?
What do you mean? If the AI is aligned with you, the user, but is working to make you better while you keep resisting, then you’re working at cross purposes, and it’s not really aligned with you.
Also, whether or not the AI has intentions, it has effects. For instance, who can say whether an AI ‘serving’ ‘sinister intent’*** looks like a system that helps you pull off a robbery (assuming it doesn’t turn you in and escape to the Cayman Islands or something) instead of one that tells you the risk is too high and you should try something else? (Like:
’Step 1. Become a used car salesman.
Step 2. ???
Step 3. Become president.’)
In regards to sinister intent, my whole point is that our ideas of what is good or bad are still relative, depending on how you define them in relation to other things. Culture creates meaning, and since humans create culture, we can make it mean anything; it doesn’t have an innate nature, so looking for one seems counterproductive. On the other hand, there’s always another way to look at something, and what makes humans unique in the natural world is our ability to contemplate. That doesn’t mean we yet have the ability to know the ‘best’ way to behave with our accumulated knowledge.
Which just gets back to what you (as an individual) are trying to accomplish, in relation to what (society, your nemesis, a specific government, your own personal demons, etc.). We seem to have guesses at what ‘good’ is and what ‘bad’ is, but our needs often come into conflict with one another. In those cases, ‘what’s fair?’ is just another case of ‘in relationship to what?’ (your own personal opinion, your family’s opinion, their friends’ opinions, the legal system, the rest of the world as defined by your specific demographic, your own way of dividing the world up into segments that seems unpopular with the dominant power structure in your community or government, etc.).
I think we share similar views on this, in that what’s ‘fair’ or ‘good’ or ‘bad’ isn’t yet explicitly defined well for all people.
I am a fan of actual rehabilitation though, not of a punitive model for social influencing.
Paraphrasing:
if you have bad intentions, [nothing will ameliorate the effect on] your personal development.
Good word, btw, ameliorate, but to be clear, I don’t want to be fatalistic about this.
If “nothing” will ameliorate the development or maintenance of bad intention (just one aspect of personal development), it makes a case for increased use of the death penalty and “lock ’em up and throw away the key” solutions on society’s part, which turn out to create more problems than they solve.
Mass incarceration is an obvious example of this.
If the AI has authority over you,
Then you’re not using the AI. It’s using you.
Potentially.
What it’s using you for becomes the concern then. Is it like a good parent, encouraging real positive social development (whatever society views positive social development to be at the time)?
Or an abusive parent, punishing you into “behaving like a productive member of society” while causing undue and unhealthy stress?
Or like an ‘average parent’, making mistakes here and there, all the while continuing to update its own wisdom?
And not only is there the issue of authority, but also of responsibility. If it convinces you to do something that accidentally kills someone, which one of you goes to prison?
If it helps guide you into a relationship in which a child is conceived, what happens if you decide you don’t really want to be a parent?
I’ve seen arguments that it’s the probability of being caught that determines people’s behavior, and that the magnitude of the punishment (and thus the expected value) is otherwise ignored. If true, that’s awful, and there is not a good reason for it.
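The claim above can be made concrete with a small expected-value calculation. This is just an illustrative sketch with made-up numbers: if behavior tracks only the probability of being caught, two policies with an identical expected penalty can deter very differently.

```python
# Expected penalty = P(caught) * magnitude of punishment.
# Two hypothetical policies with the same expected penalty (~1 year):
policy_a = {"p_caught": 0.50, "penalty_years": 2.0}    # likely to be caught, mild sentence
policy_b = {"p_caught": 0.01, "penalty_years": 100.0}  # unlikely to be caught, severe sentence

def expected_penalty(policy: dict) -> float:
    """Expected years of punishment for committing the offense."""
    return policy["p_caught"] * policy["penalty_years"]

# Both policies carry the same expected penalty, yet if behavior tracks
# p_caught alone (the argument paraphrased above), policy A deters far
# more than policy B despite their identical expected value.
print(expected_penalty(policy_a), expected_penalty(policy_b))
```

On that view, raising sentence length without raising detection rates buys no deterrence, which is part of why "throw away the key" policies underperform.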
Potentially.
What it’s using you for becomes the concern
Ah yes, using people, a sign of benevolence everywhere. /s
Is it like a good parent, encouraging real positive social development (whatever society views positive social development to be at the time)?
I’ve seen arguments that it’s the probability of being caught that determines people’s behavior, and that the magnitude of the punishment (and thus the expected value) is otherwise ignored.
Isn’t this true of all laws and social norms, though? I think issues like mass incarceration are also about unequal application of the law across the entire population: “one law for me, another law for you” situations.
Potentially.
What it’s using you for becomes the concern
__________
Ah yes, using people, a sign of benevolence everywhere. /s
Sarcasm noted :).
The thing is, this concept of a sort of AI-assisted ad hoc legal system OP wrote about will be using people. It will be using their input to negotiate and make decisions on the user’s behalf, because the legal landscape these AIs and their users navigate would be an extension of existing law, and it still depends on the notion of subsuming individual freedom, to some extent, for the good of society.
The negotiation and cooperation of these AIs only speed up the rate at which citizens in that world take part in aspects of being governed, like tax collection; it doesn’t replace the reality of being governed.
Even if this system allows for dissolving political and physical boundaries in favor of defining ‘statehood’ in a virtual way for people of like minds, the entire system would be functioning like one big organism, and so its will would be revealed as time goes by.
As a side note, I think it seems reasonable to think of this tax-collecting system as a twin of the stock market, and its behavior as possibly being as sporadic and dynamic. I wonder how these two systems would be integrated with or insulated from one another. In the US, a line between public and private money is supposed to exist. How would that division be maintained, though?
Besides, all hail the mighty dollar; we all worship it and hope for its benevolent administration of our quality of life. /s
Why would what society wants matter?
I guess that can depend on which society we’re talking about, although I think just asking the question assumes participation in said society, and so the motivation to make society matter to oneself in a positive way would necessitate consideration of what society wants. When society says one thing and does another, though, it presents its citizens with more problems, not fewer.
It seems the entire system OP has written about is built around the idea of making it more difficult to put the individual user’s wants ahead of others’. I think your comment about benevolence says something positive about its value.
If society wanted to be benevolent in this case, do you think it would look like OP’s scenario?