Philosophical landmines.
In order to get out of the box, the AI has to solve several seemingly innocent puzzles which, however, require enormous amounts of computation, put the AI into an (almost) infinite loop, create very strong ontological uncertainty, or simply halt it.

These are the kind of questions to which the answer is 42.

Weak examples: “what is the goal of the AI’s goal?”, “is modal realism true?” and “are we living in a simulation?”

Really good philosophical landmines should be kept secret, as they must not appear in training datasets.
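As a toy illustration (not a real landmine), the “requires a lot of computation” flavor of puzzle can be sketched with the Ackermann function: a question as short as “what is A(4, 2)?” is perfectly well defined yet computationally explosive. The function name and the example values here are my own illustration, not from the original text.

```python
import sys

def ackermann(m: int, n: int) -> int:
    """Naive Ackermann function: innocent-looking, but its values
    grow faster than any primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Small inputs are harmless...
sys.setrecursionlimit(100_000)
print(ackermann(2, 3))  # 2*3 + 3 = 9
print(ackermann(3, 3))  # 2**(3+3) - 3 = 61

# ...but ackermann(4, 2) already has 19,729 decimal digits,
# and ackermann(5, 2) is infeasible for any physical computer.
```

The asymmetry is the point: the question costs a few characters to ask, while an honest attempt to answer it consumes unbounded resources.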