Shrimp welfare is potentially analogous to human welfare in a superintelligent world, in that a shrimp could also be cryopreserved beyond its current lifespan and eventually uplifted (with some direct assistance) to the level of a superintelligence. The analogy plausibly breaks somewhere between humans and shrimp (shrimp may be too simple to centrally have minds the way humans do), but probably not between humans and pigs. Pigs in their natural state wouldn't contribute to a human society as trade partners either, and neither would humans in a superintelligent world.
Sure. We should not count on ASI coordination subsidies being passed on to humans either. It sounds like people should use their dominant power now to control what kind of ASI is built.
You could argue that resolving that vague bundle of norms and intuitions into a "care about everything" moral framework would make it easier to point an ASI at that goal, or to peer-pressure an ASI into adopting it. If that would seriously work, it would be desirable to do and moral to advocate for. Or maybe not? Not if it gets the universe tiled with the simplest morally relevant beings having a great time all the time, or something like that.
Also, yeah, a lottery for shrimp cryopreservation and eventual uplift might be an interesting way to give them more of a share and more negotiating power.