For example, if there’s a superintelligent AI capable of unilaterally transforming all matter in your light cone into paperclips, is there any sense in which you have enough power to enforce your ownership of anything independent of such an AI?
No, which is why I “invest” in making bad outcomes a tiny bit less likely through monthly donations to the EA Long-Term Future Fund, which funds AI safety research and other x-risk mitigation work.