All the “contentiousness” evaporates as soon as we’ve fixed the definitions and got rid of the semantic confusion.
Of course not. Having clear semantics is a necessary condition for understanding the world, not a sufficient one. You have to look. Among other things.
You gather evidence about interpreting evidence
You can only gather theories about interpreting evidence. You can’t see how well such theories work by direct inspection. It isn’t looking.
This would work much better if you thought about it concretely. Alice says evidence includes introspections, subjective seemings; Bob says it is only ever objective. What do you do next?
When you are a result of successful replication of imperfect replicators in a competitive environment with limited resources there is quite a lot of evidence for some kind of “optimality”.
I don’t see why a slug or wallaby is optimising anything, so why should I be? What makes humans the pinnacle of creation?
If they existed in some kind of separate magisterium where our common knowledge wouldn’t be applicable, then yes
They exist in a separate magisterium where direct, sensory evidence isn’t applicable, because they are about the causes and meaning of whatever sensory evidence you happen to have. The interpretation of evidence is a separate magisterium from the gathering of evidence, and not in a spooky way.
So you use indirect feedback
Which is what? Give concrete examples.
I can keep applying it to the “tricky cases”
Applying what? You can’t keep applying evidence-gathering to solve the problem of interpreting evidence. It’s unclear whether you are talking about pure empiricism, or some kind of vaguely defined solution to everything.
And this way I can aggregate more and more evidence
Which is not self-interpreting, so you are just creating a bigger and bigger problem.
We can try multiple of them and see how these models predict new data
But they don’t, in the trickiest cases. I’ve already addressed that point: the Ptolemaic model can be adjusted to fit any data.
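The epicycle point can be made precise: adding epicycles is mathematically equivalent to adding terms of a Fourier series, so enough of them can approximate almost any periodic path. A minimal sketch (the orbit and the fitting routine are illustrative assumptions, not anything from the exchange):

```python
import numpy as np

def epicycle_fit(path, n_epicycles):
    """Approximate a periodic complex-valued path with the n_epicycles
    largest Fourier terms (deferent + epicycles); return the max error."""
    coeffs = np.fft.fft(path) / len(path)
    # Keep only the n_epicycles largest "circles".
    keep = np.argsort(np.abs(coeffs))[::-1][:n_epicycles]
    truncated = np.zeros_like(coeffs)
    truncated[keep] = coeffs[keep]
    approx = np.fft.ifft(truncated) * len(path)
    return np.max(np.abs(approx - path))

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
# An arbitrary, decidedly non-circular "orbit".
path = (1 + 0.5 * np.sign(np.sin(3 * t))) * np.exp(1j * t)

for n in (1, 5, 20, 80):
    print(n, epicycle_fit(path, n))
```

The fit error shrinks as circles are added, whatever the path, which is exactly why goodness-of-fit alone cannot favour the Ptolemaic model over a rival.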
Being already selected for intuitions related to surviving in the world
I’ve already addressed that point too: you don’t need ontological understanding to survive. You don’t get direct feedback about ontological understanding. So it’s a separate magisterium.
Okay, I think I understand what is going on here. Are you under the impression that I’m trying to bring back the old empiricism vs rationalism debate, arguing on the side of empiricism?
What’s “looking” if not empiricism?
I’m not arguing for rationalism over empiricism, or against ever using empiricism. I’m arguing against pure empiricism as being able to solve all problems. Which is not to say there is something else that does. It’s a mixture of pluralism—there’s more than one kind of epistemic problem and solution—and scepticism—there’s no guarantee of solving anything even using more tools than “looking”.
I already said that here :-
It’s also not the case that you have to make positive claims about a priori reasoning in order to point out the limitations of empiricism. And noticing the limitations of empiricism is not the same as refusing to use it at all.
Yes, it’s all probabilities all the way down, without perfect certainty
No, it’s worse than that. Probabilities require quantification of how true or likely something is. But there is no way of objectively quantifying that for ontological interpretation. And subjective probability leads to perennial disagreement, not convergence.
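The divergence claim can be illustrated with a toy model (the numbers and setup are my assumptions, purely for illustration): two Bayesian agents share a prior on a hypothesis H, but because they interpret the evidence differently, they assign different likelihoods to the very same observation. Identical data then drives their posteriors apart, not together.

```python
def update(prior, likelihood_if_h, likelihood_if_not_h):
    """One Bayes update on a single observed datum."""
    joint_h = prior * likelihood_if_h
    joint_not_h = (1 - prior) * likelihood_if_not_h
    return joint_h / (joint_h + joint_not_h)

alice = bob = 0.5  # identical priors on H
for _ in range(20):  # identical stream of observations
    alice = update(alice, 0.8, 0.4)  # Alice reads each datum as supporting H
    bob = update(bob, 0.3, 0.6)      # Bob reads the same datum as undermining H

print(alice, bob)  # Alice approaches 1, Bob approaches 0
```

Shared data only forces convergence when the likelihoods—the interpretation of the evidence—are shared too, which is precisely what is in dispute.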
We can come up with adversarial examples where it means that we were completely duped, and our views are completely disentangled from “true reality” and we’re simply describing an “illusion”, but
But that only allows us to reject N false theories, not home in on a single true one. Convergence is a problem as well as certainty.
Renaming “reality” to “illusion” doesn’t actually change anything of substance
If your beliefs are illusory, they are false. That might not make any difference instrumentally, to what you can predict, but you are not assuming instrumentalism and neither is Yudkowsky.
But generally, consider the fact that philosophy reasons in all directions and normality is only a relatively small space of all possible destinations
What’s normality? If you just mean “saving appearances”, rather than predicting something that is empirically disprovable, then most philosophy does that. What doesn’t? Illusionism? But that’s quite popular around here!
I also thought the robot’s answer missed the point quite badly… because it reduced the ought all the way down to an is—or rather a bunch of ises.
If you dismiss any reduction of ought to is,
I don’t. As I said:-
Reducing ethical normativity isn’t bad
Not to what one would do. Your ethical module may not be directly connected to the behavioral one, and so your decisions are based on other considerations, like desires unrelated to ethics.
Are you saying that’s the only problem? That the action you would have taken absent those issues is the right action, in an ultimate sense?
This doesn’t change the fact that what you ought to do is the output (or a certain generalization of multiple outputs) of the ethical module,
It’s not a fact. There are any number of ethical theories where what you should do is not necessarily what you would do, e.g. utilitarianism, which is quite popular round here. When you think about maths, that’s neural activity, but it doesn’t follow that it defines mathematical correctness. Errors are neural activity as well. The normative question is quite separate. Even if you want to reduce it, it doesn’t follow that the only way to do so is to have eight billion correct answers.
which is a computation taking place in the real world, which can be observed.
That’s quite irrelevant. The fact that it takes neural activity to output an action tells you nothing about the ethics of the action. “Ought” and “ethical” aren’t just vacuous labels for anything you do or want to do.
there are potentially eight billion answers to what one ought to do.
Potentially but not actually.
Nothing hinges on having exactly eight billion right answers. More than one right answer is enough of a problem.
Once again, when you look, it turns out individual ethical views of people are not *that* different
Yes they are. Political divisions reflect profound ethical divisions.
There’s a consistent theme in rationalist writing on ethics, where the idea that everyone has basically the same values, or “brain algorithms”, is just assumed… but it needs to be based on evidence as much as anything else.
Not basically the same, but somewhat similar. And it’s not just assumed, it’s quite observable.
The differences are observable. Fraught debates are people disagreeing about the value of freedom versus equality, etc.
In any case, the problem of subjectivism is that there are potentially multiple right answers.
Human ethical disagreements are mostly about edge cases. Like what is your objective claim here, that human values are not correlated at all?
No. I don’t accept that ethics “is” whatever values you happen to have, or whatever decision you happen to make.
It’s social constructivism of morality. Which is rooted in our other knowledge about game theory and evolution.
If morality is socially constructed, the robot is wrong about metaethics. What the robot should do is follow the social rules, and if its programming is something different, then its actions are object-level wrong.
Yes, this is exactly my point. A lot of things which are treated as “applied missing-the-point answers” are in fact legitimately philosophically potent. At the very least, we should be paying much more attention to them.
Is the robot missing the point or not?
Therefore it’s not just “by looking” but “pretty much by looking”. I completely agree about the necessity to abandon the notion of certainty
That’s just the start. The tricky question is how much else we need to abandon. In particular, it’s not clear whether convergence on a single most likely theory of everything is possible, even if you have abandoned certainty.