We humans also align with each other via organic alignment.
This kind of “organic alignment” can fail in catastrophic ways, e.g., by producing someone like Stalin or Mao. (Such failures are typically explained by “power corrupts”, but they can also be seen as instances of “deceptive alignment”.)
Another potential failure mode is that “organically aligned” AIs come to view humans as parasites rather than as important/useful parts of their “greater whole”. This, too, has plenty of parallels in biological systems and human societies.
Both of these seem like very obvious risks/objections, but I can’t seem to find any material by Softmax that addresses or even mentions them. @emmett