To my mind, no one has yet proposed an especially plausible trajectory in which human interests are respected post-AGI! It’s not obvious we’re doomed, but the best plan still seems to be basically “ask AI what to do and hope property rights hold”.
I suspect that a better plan simply doesn’t exist under SOTA alignment targets,[4] which would likely lead to a post-work future where capital is the only thing that matters.
However, I think I have a potential nonstandard solution: the AI capabilities a human is allowed to access should be tied not to the amount of money paid, but to something else. The hard part is implementing this, or even agreeing on how the AIs should actually be used.
This is very likely why Kokotajlo tried to engage[1] even with critique that I would consider sloppy,[2] like Vitalik Buterin’s take, shanzson’s take,[3] and AI as normal technology. Alas, the genuinely worthy Rogue Replication Scenario has yet to be noticed...
[1] However, he did encourage me to just post my response to SE Gyges’ critique.
[2] Including the Optimistic 2027 timeline, which just ended too soon, and this website on Advanced AI Possible Futures, about which Zvi remarks that “Daniel also points us to [the website] as a good related activity and example of people thinking about the future in detail. I agree it’s good to do things like this, although the parts I saw on quick scan were largely dodging the most important questions.”
[3] Kokotajlo thanked Vitalik and shanzson.
Ideally, I would also like Kokotajlo to notice my take on writing scenarios. But my disagreements with AI-2027 are that the AIs become misaligned and collude with each other starting from Agent-2, for moral reasons similar to those described above, and that the Agent-4 analogue is never caught. In addition, the AI-2027 forecast could also have underexplored the AIs’ interactions with rivals and with the government.