Consensus isn’t really a value to align to; it is the process of alignment itself. Any goal you specify will be the wrong goal. So we need to ask: what is left if you have alignment without goals?
I don’t think we differ that much regarding the answer. See also this follow-up post: https://hiveism.substack.com/p/a-path-towards-solving-ai-alignment