I’m suggesting that maybe some of us lucked into a status game where we use “reason” and “deliberation” and “doing philosophy” to compete for status, and that somehow “doing philosophy” etc. is a real thing that eventually leads to real answers about what values we should have (which may or may not depend on who we are). Of course I’m far from certain about this, but at least part of me wants to act as if it’s true, because what other choice does it have?
The alternative is egoism. To the extent that we are allies, I’d be happy if you adopted it.
I don’t think that’s a viable alternative, given that I don’t believe that egoism is certainly right (surely the right way to treat moral uncertainty can’t be to just pick something and “adopt it”?), plus I don’t even know how to adopt egoism if I wanted to:
https://www.lesswrong.com/posts/Nz62ZurRkGPigAxMK/where-do-selfish-values-come-from
https://www.lesswrong.com/posts/c73kPDr8pZGdZSe3q/solving-selfishness-for-udt (which doesn’t really solve the problem despite the title)