I agree with pretty much everything you wrote.
Anecdote: I recall feeling a bit “meh” when I heard about the Foresight AI Pathways thing and the FLI Worldbuilding Contest thing.
But when I think about it more, I guess I’m happy that they’re doing those things.
Hmm, I’m trying to remember why I initially felt dismissive. I guess I expected that the resulting essays would be implausible or incoherent, and that nobody would pay attention anyway, and thus it wouldn’t really move the needle in the big picture. (I think those expectations were reasonable, and those are still my expectations. [I haven’t read the essays in enough detail to confirm.]) Maybe my feelings are more like frustration than dismissiveness—frustration that progress is so hard. Again, yeah, I guess it’s good that people are trying that kind of thing.
Thanks, yeah, tbh I also felt dismissive about those projects. I’m one of the perhaps few people in this space who never liked scifi, and those projects felt like scifi exercises to me. Scifi feels a bit plastic to me: cheap, thin on the details, and it might as well be completely off. (I’m probably insulting people here, sorry about that; I’m sure there is great scifi, and I guess these projects were also good, all things considered.)
But if we treat it as real rather than scifi, the future and its absurdities suddenly become very interesting. Maybe we should write papers with exploratory engineering and error bars rather than stories on a blog? I did like the work of Anders Sandberg, for example.
What we want the future to be like, and not be like, necessarily has a large ethical component. I also have to say that the ethics originating from the xrisk space, such as longtermism, tends to defend very non-mainstream ideas that I mostly disagree with. Longtermism has mostly been critiqued for its ASI claims, its messengers, and its lack of discounting factors, but I think the really controversial parts are its symmetric population ethics (which leads to an imperative to quickly colonize the lightcone, an imperative I don’t necessarily share) and its debatable decision to count AI as valued population too (which leads to wanting to replace humanity with AI for efficiency reasons).
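(To spell out why symmetric population ethics pushes toward fast colonization, here’s a rough sketch of the standard argument, with illustrative symbols of my own rather than anything from a specific paper: under total utilitarianism with no discounting, the value of the future is roughly

$$V \;=\; \sum_{t=0}^{T} N_t\,\bar{u}_t,$$

the undiscounted sum over time of population size $N_t$ times average well-being $\bar{u}_t$. If colonizing the lightcone multiplies $N_t$ by many orders of magnitude, then every period of delay forfeits a correspondingly astronomical slice of $V$, which is where the claimed urgency comes from.)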
I disagree with these ideas, so ethically I’d trust a kind of informed public average more than many xriskers. I’d be more excited about papers that try their best to map possible futures, using mainstream ethics (and fields like political science, sociology, psychology, art and aesthetics, economics, etc.) to 1) map and avoid ways to go extinct, 2) map and avoid major dystopias, and 3) aim for actually good futures.