However, whereas the concept of an unaligned general intelligence has the advantage of being a powerful, general abstraction, the HMS concept has the advantage of being much easier to explain to non-experts.
The trouble with the choice of phrase “hyperintelligent machine sociopath” is that it gives the other side of the argument an easy rebuttal, namely, “But that’s not what we are trying to do: we’re not trying to create a sociopath”. In contrast, if the accusation is that (many of) the AI labs are trying to create a machine smarter than people, then the other side cannot truthfully use the same easy rebuttal. Then our side can continue with, “and they don’t have a plan for how to control this machine, at least not any plan that stands up to scrutiny”. The phrase “unaligned superintelligence” is an extremely condensed version of the argument I just outlined (where the verb “control” has been replaced with “align” to head off the objection that control would not even be desirable, because people are not wise enough and not ethical enough to be given control over something so powerful).
I can see what you mean. However, I would say that merely claiming “that’s not what we are trying to do” is not a strong rebuttal. For example, we would not accept such a rebuttal from a weapons company that sought to make weapons technology widely available without regulation. We would say: it doesn’t matter how you intend to use the weapons; it matters how others will use your technology.
In the long term, it does seem correct to me that the greater concern is superintelligence. In the near term, however, the problem seems to be that we are making things that are not at all superintelligent: smart at coding and language, but coupled, e.g., with a crude directive to “make me as much money as possible,” and with no advanced machinery for ethics or value judgement.