I was surprised to not see much consideration, either here or in the original GD and IC essays, of the brute force approach of “ban development of certain forms of AI,” such as Anthony Aguirre proposes. Is that more (a) because it would be too difficult to enforce such a ban or (b) because those forms of AI are considered net positive despite the risk of human disempowerment?
Not commenting on that here, but from my perspective, in very short form:
- bans and pauses have a big problem to overcome: being “incentive compatible” (the problem is mostly not enforcement, since things can be enforced by hard power, but why would actors agree in the first place?)
- in some sense this is a coordination problem (see the sketch after this list)
- my guess is that the most likely way to overcome the coordination problem well involves some AI cognition helping humans to coordinate → this suggests differential technological development
- other viable ways of overcoming the coordination problem seem possible, but are often unappealing for various reasons I don’t want to advocate at the moment
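
To make the incentive-compatibility point concrete, here is a minimal sketch of the coordination problem as a two-actor pause/race game. All payoff numbers are illustrative assumptions, not taken from the comment; the point is only the structure: with payoffs like these, each actor’s best response is to race regardless of what the other does, so a mutual pause unravels without some coordination mechanism.

```python
# Minimal sketch of why a pause is not "incentive compatible":
# a two-actor, one-shot game where each actor chooses to "pause" or "race".
# All payoff values below are hypothetical, chosen only to show the structure.

# payoffs[(my_move, their_move)] = my payoff
payoffs = {
    ("pause", "pause"): 3,  # coordinated pause: safe, moderate outcome
    ("pause", "race"):  0,  # I pause, rival races: I fall behind
    ("race",  "pause"): 4,  # I race, rival pauses: I capture the lead
    ("race",  "race"):  1,  # both race: risky, low-value outcome
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff given the rival's move."""
    return max(["pause", "race"], key=lambda my: payoffs[(my, their_move)])

for their_move in ["pause", "race"]:
    print(f"rival plays {their_move!r:8} -> best response: {best_response(their_move)!r}")

# With these numbers, "race" is the best response to either rival move
# (a dominant strategy), even though (pause, pause) is better for both
# actors than (race, race). That gap is exactly what a coordination
# mechanism would need to close.
```

This is the standard prisoner’s-dilemma shape; whether real AI development actually has these payoffs is an assumption, but it captures why enforcement alone is not the binding constraint in the comment above: absent coordination, no actor wants to be the one who pauses.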