My reading of the text might be wrong, but it seems like bacteria count as living beings with goals? More speculatively, do possible organisms that might exist somewhere in the universe also count toward the consensus? Is this right?
If so, a basic disagreement is that I don’t think we should hand over the world to a “consensus” that is a rounding error away from 100% inhuman. That seems like a good way of turning the universe into ugly squiggles.
If the consensus mechanism has a notion of power, such that creatures that are disempowered have no bargaining power in the mind of the AI, then I have a different set of concerns. But I wasn’t able to quickly determine how the proposed consensus mechanism actually works, which is a bad sign from my perspective.
I believe a recursively aligned AI model would be more aligned and safer than a corrigible model, although both would be susceptible to misuse.
Why do you disagree with the above statement?