Are uploads conscious? What about AIs? Should we care about shrimp? What population ethics views should we have? What about acausal trade? What about Pascal's wager? What about meaning? What about diversity?
It sounds like you’re saying an AI has to get these questions right in order to count as aligned, and that’s part of the reason why alignment is hard. But I expect that many people in the AI industry don’t care about alignment in this sense, and instead just care about the ‘follow instructions’ sense of alignment.