Thanks for this. Discussions of things like “one time shifts in power between humans via mechanisms like states becoming more powerful” and personal AI representatives are exactly the sort of thing I’d like to hear more about. I’m happy to have finally found someone who has something substantial to say about this transition!
But over the last 2 years I asked a lot of people at the major labs for any kind of detail about a positive post-AGI future, and almost no one had put anywhere close to as much thought into it as you have, and no one mentioned the things above. Most people clearly hadn’t put much thought into it at all. If anyone at the labs had much more of a plan than “we’ll solve alignment while avoiding an arms race”, I failed to even hear of its existence despite many conversations, including with founders.
The closest thing to a plan was Sam Bowman’s checklist: https://sleepinyourhat.github.io/checklist/ which is exactly the sort of thing I was hoping for, except it’s almost silent on issues of power, the state, and the role of post-AGI humans.
If you have any more related reading for the main “things might go OK” plan in your eyes, I’m all ears.
Yeah, people at labs are generally not thoughtful about AI futurism IMO, though of course most people aren’t thoughtful about AI futurism. And labs don’t really have plans IMO. (TBC, I think careful futurism is hard, hard to check, and not clearly that useful given realistic levels of uncertainty.)
If you have any more related reading for the main “things might go OK” plan in your eyes, I’m all ears.
I don’t have a ready-to-go list. You might be interested in this post and the comments responding to it, though I’d note I disagree substantially with the post.