Nice post, thanks for sharing it. In terms of a plan for fighting human disempowerment that’s compatible with the way things seem to be going, i.e., assuming we don’t pause/stop AI development, I think we should:
1. Not release any AGI/AGI+ systems without hardware-level, tamper-proof artificial conscience guardrails on board, with these consciences geared towards promoting human responsibility as a heuristic for promoting well-being
2. Avoid having humans living on universal basic incomes (UBI) with little to no motivation to keep themselves from becoming enfeebled; a conditional supplemental income (CSI) might be one way to do this
Does #1 have potential risks and pitfalls, and will it be difficult to figure out and implement in time? Yes, but more people focusing effort on it would help. And AIs whose consciences weigh against disempowering humans seem like a good first step toward avoiding human disempowerment.
#1 would also help against what I think is a more immediate threat: the use of advanced AIs by bad human actors to cause destruction, whether purposely or uncaringly, such as in the pursuit of profit. Autonomous, advanced defensive AIs with artificial conscience guardrails could potentially limit collateral damage while preventing or defending against such attacks. The speed of these attacks will likely be too great for humans to stay in the loop on defensive decisions.