I assume AIs will be superhuman at that stuff, yeah; it was priced into my claims. Basically, a bunch of philosophical dilemmas might be more values-shaped than fact-shaped. Simply training more capable AIs won't pin down the answers to those questions, for the same reason it doesn't pin down the answers to ethical questions.