If one knows nothing about morality, what does the word “should” mean, at all?
If an agent is deciding what to do, then it is asking the “should” question. As with the burning orphanage, the question is always thrust upon it.
Not knowing any morality, not knowing any way to find morality, not even having a clue about how to go about finding it, if it exists: none of that gets you out of having to decide what to do.
If you can’t decide what to do at all, because you have no base desires, then you’re just broken. You need to figure out how to figure out what to do.
A morally empty philosopher given Newcomb's problem can think about strategies for agents who want more money, and consider agents that want totally different things. (Maybe an agent with a religious injunction against dealings with super-entities.) An empty philosopher can decide that it's better to one-box than to two-box if you want money. It can, in general, think about how to make 'should' decisions without ever discovering something that it wants intrinsically.
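The one-box vs. two-box comparison for a money-wanting agent can be reduced to an expected-value calculation. Here is a minimal sketch, assuming the conventional payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and a predictor-accuracy parameter `p`; neither number appears in the text, they are just the standard formulation:

```python
MILLION = 1_000_000   # opaque box, if the predictor foresaw one-boxing
THOUSAND = 1_000      # transparent box, always present

def expected_value(strategy: str, p: float) -> float:
    """Expected payoff for a money-wanting agent, given predictor accuracy p."""
    if strategy == "one-box":
        # The opaque box is full iff the predictor correctly foresaw one-boxing.
        return p * MILLION
    if strategy == "two-box":
        # The agent takes both boxes; the opaque box is full only if the predictor erred.
        return (1 - p) * MILLION + THOUSAND
    raise ValueError(f"unknown strategy: {strategy}")

for p in (0.5, 0.6, 0.99):
    print(p, expected_value("one-box", p), expected_value("two-box", p))
```

Under this framing, one-boxing dominates for any predictor accuracy above roughly 50.05%. The point is that the empty philosopher can carry out this whole comparison, conditioned on "if you want money", without wanting anything itself.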
You do have to do all this thinking with the brain you've got, but you don't need any special moral knowledge. Moral thinking is not that different from general thinking. You can, sometimes, spot errors in your own thinking. You can also spot limitations: problems that are just too big for you now.
Since you are trying to figure out what to do, and you think you might want something, you should find ways to surpass those limitations and correct those errors, so that you do the right thing once you have some notion of wanting something.
Now, this only really applies to morally empty philosophers. I think there is a nonzero utility to improving one’s ability to think about utilities, but there’s no obvious way to insert that ‘nonzero’ into a primate brain, or into any agent that already wants something. I think joy would be a fine starting point.
In fact, I think even a morally empty philosopher on earth might consider joy and other evolved impulses as possible clues to something deeper, since we and other animals are the only concrete examples of agents it has.