If morals are not truth-apt, and free will is the control required for moral responsibility, then...
Alignment has many meanings. Minimally, it is about the AI not killing us.
AIs don’t have to share our aesthetic preferences to understand them. It would be a nuisance if they did (they might start demanding pot plants in their data centres), so it is useful to distinguish aesthetic from moral values. That is one of the problems with the unproven but widely believed claim that all values are moral values.
Nevertheless, they are correct: there is now a mathematical foundation underpinning the Scientific Method.
Bayes doesn’t encapsulate the whole scientific method, because it doesn’t tell you how to formulate hypotheses or conduct experiments.
Bayes doesn’t give you a mathematical foundation of a useful kind, that is, an objective kind. Two Bayesian scientists can quantify their subjective credences, quantify them differently, and have no way, within Bayesianism itself, of reconciling their differences.
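A minimal Python sketch of the point, with made-up numbers: two Bayesians see the same evidence and apply the same update rule, but because they start from different priors they end with different posteriors, and Bayes' theorem alone gives no criterion for saying which is right.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' theorem: P(E|H)P(H) / P(E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Identical evidence for both scientists: E is 4x likelier if H is true.
p_e_given_h, p_e_given_not_h = 0.8, 0.2

alice = posterior(0.5, p_e_given_h, p_e_given_not_h)  # prior credence 0.5
bob = posterior(0.1, p_e_given_h, p_e_given_not_h)    # prior credence 0.1

# Same data, same rule, divergent conclusions.
print(f"Alice: {alice:.3f}, Bob: {bob:.3f}")  # → Alice: 0.800, Bob: 0.308
```

Repeated evidence would narrow the gap, but at any finite point the disagreement traces back to the subjective priors, which the formalism leaves unconstrained.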