Relatedly, I fear that outside of Redwood and Forethought, the “AI Consciousness and Welfare” field is focused on the stuff in this post plus advocacy rather than stuff I like: (1) making deals with early schemers to reduce P(AI takeover) and (2) prioritizing by backchaining from considerations about computronium and von Neumann probes.
Edit: here’s a central example of a proposition in an important class of propositions/considerations that I expect the field basically just isn’t thinking about and lacks generators to notice:
In the long run, when we’re colonizing the galaxies, the crucial thing is that we fill the universe with axiologically-good minds. In the short run, what matters more is being cooperative with the AIs (and maybe the small scale means deontology is more relevant); the AIs’ preferences, not scope-sensitive direct axiological considerations, are what matters.