I think making deals with AIs is applicable outside of just trading with autonomous AIs! In particular, we can make deals with AIs to reveal their misalignment or to do useful safety work despite being misaligned. It’s not about comparative advantage.
Relatedly, I fear that outside of Redwood and Forethought, the “AI Consciousness and Welfare” field is focused on the stuff in this post plus advocacy rather than stuff I like: (1) making deals with early schemers to reduce P(AI takeover) and (2) prioritizing by backchaining from considerations about computronium and von Neumann probes.
Edit: here’s a central example of a proposition in an important class of propositions/considerations that I expect the field basically just isn’t thinking about and lacks generators to notice:
In the long run, when we’re colonizing the galaxies, the crucial thing is that we fill the universe with axiologically-good minds. In the short run, what matters is more that we’re cooperative with the AIs (and maybe at that small scale deontology is more relevant); it’s the AIs’ preferences, not scope-sensitive direct axiological considerations, that matter.