My own feeling is that the chance of successfully building FAI, assuming the current human intelligence distribution, is low (even given unlimited financial resources), while the risk of unintentionally building or contributing to UFAI is high. I think I can explicate part of my intuition this way: there must be a minimum level of intelligence below which the chances of successfully building an FAI are negligible. We humans seem at best just barely smart enough to build a superintelligent UFAI. Wouldn’t it be surprising if the intelligence thresholds for building UFAI and FAI turned out to be the same?
What will construct advanced intelligent machines is slightly less advanced intelligent machines, in a symbiotic relationship with humans. It doesn’t much matter if the humans are genetically identical with the ones that barely managed to make flint axe heads—since they are not working on this task alone.