I feel like there is a lot of dystopian literature out there, but relatively little that tells a story where there is a plausible path to escaping things going horribly wrong, and that path then actually works. So right now I’m intentionally trying to come up with stories that sell a utopian path while signal-boosting ideas put forward in FHI papers and other parts of the community as ways to get there. For example, the project I’m currently most excited about has the working title The Windfall Clause. The sci-fi project in this vein that I have already written explores ideas about the repugnant conclusion in a far-future hard sci-fi setting organized like Scott Alexander’s archipelago, where we managed both to get AI that did what we wanted, and then to collectively not use it to murder ourselves. (Link if anyone is interested)
I do welcome ideas for stories that people think someone should write. Though if it is about something going horribly wrong, I’d probably try to find a way to write a story where that nearly happens, but we find a smart way to avoid it.
Also, honestly, I think that all of the countries would reinvest as much as they needed to maintain a strategic balance, and that is the actual problem requiring coordination.
That sounds to me like the story will teach the wrong thing. It will teach that it’s just a matter of being smart and then we will survive.
I don’t think that “we manage to find a smart way to avoid a disaster, though we almost lose anyway” implies “being smart automatically means that we win”.
I said nothing about smartness automatically meaning that we win. My point is more that the universe doesn’t care about whether you are smart. It’s the core of what Beyond the Reach of God is about, and I come into contact with it once a year at the solstice.
For me, it’s an important part of the core narrative of the solstice that the world isn’t just letting the hero win because he comes up with a smart solution.
I think there’s a huge danger if people think that being smart and caring about AI safety is enough and then push forward projects like OpenAI that increase capabilities.
To the extent that fiction can teach narratives to people, the Beyond the Reach of God narrative seems important.
Who specifically do you think should act differently, and in what concrete way because they are more aware of the Beyond the Reach of God narrative?
I don’t think the intellectual work of finding a concrete way that would likely let humanity survive an AGI going foom has been done yet. If there were such a concrete way, the problem would be a lot less problematic.
Hopefully, places like MIRI and FHI will do that work in the future. So I would expect people who take it seriously to support organizations like MIRI and FHI over OpenAI, which pushes for capability increases.
Maybe.