[Question] Does blockchain technology offer potential solutions to some AI alignment problems?

I am not an expert in blockchain technology or AI, so I am asking this community, as I imagine this question has already been much explored.

My understanding is that AI alignment is treated as an existential risk because of the fear of an intelligence explosion and related paperclip-maximizer scenarios (among others). This appears to cause a substantial amount of fear in some people. My initial reaction, however, is more sanguine: it seems to me that we may already have tools to prevent the core danger, namely the concentration of power in an agent far more capable than us.

Specifically, two features of blockchain technology seem like they would reduce the power of a hostile AI: a) transaction costs and b) decentralization. Suppose the world's economic activity were mediated by blockchains. Then the transaction costs of a long series of adversarial actions would either bankrupt the hostile AI or stall it long enough for people to fork the chain and abandon it. The stronger point, though, may be decentralization. If incentives are designed to minimize collusion, and constraints on concentration are enforced, wouldn't that cap the power any single AI could accumulate? That is, if the system is set up so that no one party can dominate, then even a superintelligent AI could not dominate.
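To make the fee intuition concrete, here is a toy back-of-envelope in Python. Every number in it is a made-up assumption (the attacker's budget, the base fee, the congestion multiplier); the point is only the shape of the constraint: a fixed budget divided by a per-action fee bounds the number of adversarial on-chain actions, and a fee market that rises under load shrinks that bound further.

```python
# Toy back-of-envelope: how per-transaction fees bound the number of
# adversarial on-chain actions a fixed-budget attacker can take.
# All numbers here are illustrative assumptions, not real chain data.

attacker_budget_usd = 1_000_000   # hypothetical war chest
base_fee_usd = 2.0                # hypothetical per-transaction fee at rest
congestion_multiplier = 10.0      # fees rise as the attacker floods the chain

# If fees stayed flat, the attacker gets budget / fee actions:
flat_actions = attacker_budget_usd / base_fee_usd

# If the attacker's own activity drives fees up (a crude fee-market model),
# the effective fee rises and the action budget shrinks:
congested_actions = attacker_budget_usd / (base_fee_usd * congestion_multiplier)

print(f"actions at flat fees:     {flat_actions:,.0f}")
print(f"actions under congestion: {congested_actions:,.0f}")
```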

That is not the cleanest way to express my intuition, so maybe this is a better way: my understanding is that crypto is secured not by trust, guns, or rules, but by fundamental computational limits (e.g., the preimage resistance of cryptographic hash functions and the hardness of the discrete-logarithm problem underlying the signature schemes). Those limits imposed by math would apply to an AI as well. By tying money and power to such limits, the ability of any one actor (including an AI) to gain arbitrary power without the consent of everyone else would be bounded, since even arbitrary intelligence cannot do the computationally infeasible. That is, a constraint on the ability to accumulate power in general is also a constraint on AI.
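As a rough illustration of "secured by math, not trust," here is a small sketch of the brute-force arithmetic. The bit sizes are the standard security levels for SHA-256 preimages and elliptic-curve keys; the machine speed is an absurdly generous hypothetical, chosen to show that the conclusion does not depend on who (or what) is attacking.

```python
# Toy arithmetic: expected brute-force work to break standard primitives.
# The same bound applies to any attacker, human or AI. Bit sizes are
# standard security levels; the hardware speed is a made-up assumption.

hash_bits = 256            # e.g., SHA-256 preimage resistance
ec_security_bits = 128     # e.g., secp256k1 keys via generic discrete-log attacks

guesses_per_second = 1e18  # an absurdly generous hypothetical machine
seconds_per_year = 3.15e7

def brute_force_years(bits: int) -> float:
    """Expected years to search half of a 'bits'-bit space."""
    return (2 ** (bits - 1)) / guesses_per_second / seconds_per_year

print(f"SHA-256 preimage:       {brute_force_years(hash_bits):.2e} years")
print(f"128-bit EC key search:  {brute_force_years(ec_security_bits):.2e} years")
```

Even with those generous assumptions, both numbers dwarf the age of the universe, which is the sense in which I mean the limits are imposed by math rather than by any balance of power.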

Where does this logic fail? I know the argument I have presented is not at all airtight; for example, I could imagine an AI exploiting an incentive structure, a weakness in a particular chain, or collusion to make my point moot. These topics are deep, and I know I am missing a lot of the nuance here. Still, many people seem to have thought deeply about this subject, so I imagine there is an answer (likely a negative one, if my reading of others' sentiment is correct), and I was hoping this community could help me learn. Nonetheless, I am hopeful that my intuition connects to some optimistic vision of the future, and I would welcome any readings linking these two subjects.
