(by which I think most of them do mean “shared human value” even if they don’t all bother to specify), and that I’m suggesting pointing Value Learning at.
I’m suggesting they should bother to specify.
(along with more basic things, like us being around flowers, parks, seashores, and temperatures around 75°F) what I’m suggesting as a candidate definition for the “human values”
But are they relevant to ethics or alignment? A lot of them are aesthetic preferences that can be satisfied without public policy.
Shared genetics can still lead to different blood and tissue types, so it can equally lead to different ethical types.
Politics indicates it’s more like 50-50, when you are talking about the kind of values that cannot be satisfied individually.
And “my tribe”. What you want is universalism, but universalism is a late and strange development. It seems obvious to twenty-first-century Californians, but they are the weirdest of the WEIRD. Reading values out of evopsych is likely to push you in the direction of tribalism, so I don’t see how it helps.
On the Savannah, yes of course it does. In a world-spanning culture of eight billion people, quite a few of whom are part of nuclear-armed alliances, intelligence and the fact that extinction is forever suggest defining “tribe” ~= “species + our commensal pets”. It is also worth noting, and reflecting on, that the default human tendency to assume that tribes are around our Dunbar number in size is now maladaptive, and has been for millennia.
There are technologically advanced tribalists destroying each other right now. It’s not that simple.
It’s not the case that science boils down to Bayes alone,
Are you saying that there’s more to the Scientific Method than applied approximate Bayesianism?
Yes. I learnt physics without ever learning Bayes. Science=Bayes is the extraordinary claim that needs justification.
or that science is the only alternative to philosophy. Alignment/control is more like engineering.
Engineering is applied Science, and Science is applied Mathematics; from Philosophy’s point of view it’s all Naturalism. In the above, it kept turning out that Engineering methodology is exactly what Evolutionary Psychology says is the adaptive way for a social species to treat their extended phenotype.
Again, I would suggest using the word engineering, if engineering is what you mean.
So, in philosophy-of-science terminology, philosophers have plenty of hypothesis generation but very little falsifiability (beyond, as Gettier did, demonstrating an internal logical inconsistency), so the tendency is to increase the number of credible candidate answers rather than decrease it.
That’s still useful if you have some way of judging their correctness — it doesn’t have to be empiricism. To find the one true hypothesis, you need to consider all of them, and to approximate that, you need to consider a lot of them.
The same thing occurs within science, because science isn’t pure empiricism. The panoply of interpretations of QM is an example.
But are they relevant to ethics or alignment? A lot of them are aesthetic preferences that can be satisfied without public policy.
Alignment is about getting our AIs to do what we want, and not other things. Having them understand and attempt to fit within human aesthetic and ergonomic preferences is part of that. It’s not a particularly ethically complicated part, but still, the reason for flowers in urban landscapes is that humans like flowers. Full stop (apart from the biological background on why that evolved, presumably because flowers correlate with good places to gather food). That’s a sufficient reason, and an AI urban planner needs to know and respect it.
I learnt physics without ever learning Bayes. Science=Bayes is the extraordinary claim that needs justification.
I think I’m going to leave that to other people on Less Wrong — they’re the ones who convinced me of this, and I also don’t see it as core to my argument.
Nevertheless, they are correct: there is now a mathematical foundation underpinning the Scientific Method. It’s not just an arbitrary set of mundanely useful epistemological rules that were discovered by people like Roger Bacon and Karl Popper — we (later) figured out mathematically WHY that set of rules works so well: because they’re a computable approximation to Solomonoff Induction.
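To make the shape of that approximation concrete, here is a minimal toy sketch in Python. The three-coin hypothesis class and the integer “complexity” scores are invented for the illustration; the complexity scores stand in for description length. Real Solomonoff Induction ranges over all computable hypotheses and is itself uncomputable.

```python
# Toy sketch: Bayesian updating with a simplicity-weighted prior over a
# finite hypothesis class. The hypotheses and complexity scores are made up.

# Hypotheses: coins with different heads-probabilities.
hypotheses = {
    "fair coin (p=0.5)":   {"p": 0.5, "complexity": 1},
    "biased coin (p=0.7)": {"p": 0.7, "complexity": 2},
    "biased coin (p=0.9)": {"p": 0.9, "complexity": 3},
}

# Simplicity prior: weight ~ 2^-complexity, echoing Solomonoff's 2^-K(h).
prior = {h: 2.0 ** -v["complexity"] for h, v in hypotheses.items()}
total = sum(prior.values())
posterior = {h: w / total for h, w in prior.items()}

# Observe data and update: posterior is proportional to prior times likelihood.
data = [1, 1, 0, 1, 1, 1, 0, 1]  # 1 = heads, 0 = tails
for x in data:
    for h, v in hypotheses.items():
        likelihood = v["p"] if x == 1 else 1.0 - v["p"]
        posterior[h] *= likelihood
    z = sum(posterior.values())
    posterior = {h: w / z for h, w in posterior.items()}

for h, w in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {w:.3f}")
```

The simple-but-wrong hypotheses lose posterior mass as evidence accumulates, while the prior keeps the simplest adequate one ahead: the same trade-off the epistemological rules encode informally.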
Again, I would suggest using the word engineering, if engineering is what you mean.
There is a difference between “I personally suggest we just use engineering” and “Evolutionary theory makes a clear set of predictions about why it’s a very bad idea to do anything other than just use engineering”. You seem to agree with my advice, yet not want people to hear the part about why they should follow it and what will happen if they don’t. Glad to hear you agree with me, but some people need a little more persuading — and I’d rather they didn’t kill us all.
If morals are not truth-apt, and free will is the control required for moral responsibility, then...
Alignment has many meanings. Minimally, it is about the AI not killing us.
AIs don’t have to share our aesthetic preferences to understand them. It would be a nuisance if they did (they might start demanding pot plants in their data centres), so it is useful to distinguish aesthetic and moral values. That’s one of the problems with the unproven but widely believed claim that all values are moral values.
Nevertheless, they are correct: there is now a mathematical foundation underpinning the Scientific Method
Bayes doesn’t encapsulate the whole scientific method, because it doesn’t tell you how to formulate hypotheses or conduct experiments.
Bayes doesn’t give you a mathematical foundation of a useful kind, that is, an objective kind. Two Bayesian scientists can quantify their subjective credences, quantify them differently, and have no way of reconciling their differences.
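A toy illustration of the point (in Python; the priors, the coin model, and the data are all invented for the example): two Bayesians see the same fourteen heads in twenty flips and apply the same likelihoods, yet end up with very different posteriors, because Bayes tells each of them how to update but says nothing about whose prior was right.

```python
# Two Bayesians, same data, same likelihood model, different subjective priors.
from math import comb

def posterior_biased(prior_biased: float, heads: int, n: int) -> float:
    """Posterior probability the coin is biased (p=0.7) rather than fair (p=0.5)."""
    like_biased = comb(n, heads) * 0.7**heads * 0.3**(n - heads)
    like_fair = comb(n, heads) * 0.5**n
    num = prior_biased * like_biased
    return num / (num + (1.0 - prior_biased) * like_fair)

heads, n = 14, 20  # the shared evidence
for prior in (0.1, 0.9):  # scientist A is sceptical of bias, scientist B is not
    print(f"prior P(biased) = {prior:.1f} -> posterior = {posterior_biased(prior, heads, n):.3f}")
# Roughly: prior 0.1 gives posterior ~0.37; prior 0.9 gives posterior ~0.98.
```

Both have updated perfectly coherently on identical evidence and still disagree; nothing inside the formalism adjudicates between their starting credences.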