Knowledge of the terrain might be hard to acquire reliably.
Knowing that the world is made of atoms should take an AI a long way.
If the people who develop [AGI] are friendly, they might decide to distribute it to others, making it harder for any one project to take off.
I hold to the classic definition of friendly AI as being AI with friendly values, which retains them (or even improves them) as it surpasses human intelligence and otherwise self-modifies. As far as I’m concerned, AlphaGo Zero demonstrates that raw problem-solving ability has crossed a dangerous threshold. We need to know what sort of “values” and “laws” should govern the choices of intelligent agents with such power.