> If people remained economically indispensable, even fairly serious misalignment could have non-catastrophic outcomes.
Good point. Relatedly, even the most terribly misaligned governments mostly haven’t starved or killed a large fraction of their citizens. In this sense, we already survive misaligned superintelligence on a regular basis. But only when, as you say, people remain economically indispensable.
> Someone I was explaining it to described it as “indefinite pessimism”.
I think this is a fair criticism, in the sense that it’s not clear what could make us happy about the long-term future even in principle. But to me, this is just what being long-term agentic looks like! I don’t understand why so many otherwise-agentic people I know seem content to YOLO it post-AGI, or seem to be reassured that “the AGI will figure it out for us”.
I didn’t mean it as a criticism, more as the way I understand it. Misalignment is a “definite” reason for pessimism, which also makes it somewhat doubtful whether things will actually play out that way. Gradual disempowerment is less definite about what form the problems might take, but it is also a more robust reason to think there is a risk.
Oh, makes sense. Kind of like Yudkowsky’s argument that you don’t know how a chess master will beat you, only that they will. We also can’t predict exactly how a civilization will disempower its least productive and least sophisticated members. But a fool and his money are soon parted, except under controlled circumstances.