What are those “good guys” you speak of?
People pursuing a positive Singularity, with the right intentions, who understand the gravity of the problem, take it seriously, and do it on behalf of humanity rather than some smaller group.
I haven’t offered a rigorous definition, and I’m not going to, but I think you know what I mean.
Right, but this is a public-facing post. A lot of readers might not know why you could think it was obvious that “good guys” would imply things like information security, concern for Friendliness so-named, etc., and they might think that the intuition you mean to evoke with a vague affect-laden term like “good guys” is just the same argument-disdaining groupthink that would be implied if they saw it on any other site.
To prevent this impression, if you're going to use the term "good guys", then at or before the place where you first use it, you should probably put an explanation, like the one you gave above.
Okay, I’m convinced. I think I will just remove the term altogether, because it’s confusing the issue.
Well said.
I might have some inkling of what you want to mean, but on this forum, you ought to be able to define your terms to be taken seriously. I suspect that if you honestly try defining “good guys”, you will find that it is harder than it looks and not at all obvious.
I’m not saying that the definition is obvious—I’m saying that it’s beside the point. It was clearly detracting from the quality of the conversation, though, so I’ve removed the term.
What do the good guys look like? Do they look like a cabal with government sanction that performs research in secret facilities offshore, controls the asteroid deflection system (and therefore the space program), and prohibits anyone else from using the most effective mind-enhancing (and presumably quality-of-life-enhancing) techniques?
Basically, should one of the very first things a Friendly AI does be to wipe out the group of people who succeed in creating the first FAI?