You could be a longtermist and still regard a singleton as the most likely outcome. It would just mean that a human-aligned singleton is the only real chance for a human-aligned long-term future, so you'd better make it your priority, however unlikely that outcome may be. A lot of the old-school (pre-LLM) AI-safety people apparently think this way, given how they talk about the fate of Earth's future lightcone and so forth.
However, I’m not familiar with the balance of priorities espoused by actual self-identified longtermists. Do they typically treat a singleton as just a possibility rather than an inevitability?