Thanks for this critique! Besides the in-line comments above, I’d like to challenge you to sketch your own alternative scenario to AI 2027, depicting your vision. For example:
- 1 page on d/acc technologies and other prep that people can start working on today, that quietly build up momentum during the first part of AI 2027: Vitalik Version.
- 1 page on ‘the branching point’ where AI 2027: Vitalik Version starts to meaningfully diverge from the original AI 2027.
- 1-3 pages on what happens after that, depicting how e.g. OpenBrain’s and DeepCent’s misaligned AIs are unable to take over the world, despite having successfully convinced corporate and political leadership to trust them. (Or perhaps why they wouldn’t be able to convince them to trust them? You get to pick the branching point.) This section should end in 2035 or so, just like AI 2027.
I predict that if you try to write this, you’ll run into a bunch of problems and realize that your strategy is going to be difficult to pull off successfully. (e.g. you’ll start writing about how the analyst tools resist Consensus-1’s persuasion, but then you’ll be trying to write the part about how those analyst tools get built and deployed, and by whom, and no one has the technical capability to build them until 2028, but by that point OpenBrain+Agent5+ might already be working to undermine whoever is building them...) I hope I’m wrong.