Thanks for the comment! I agree with a lot of what you’re saying.
Regarding the policy levers: we’re doing research into that right now. I hope to be able to share a first writeup mid-October. Shall I email it to you once it’s ready? I’d really appreciate your feedback!
I agree that pandemic and climate policies have been a mess. In general, though, I think the argument “A has gone wrong, therefore B will go wrong” is not watertight. A better version of the argument would be statistical rather than anecdotal: “90% of policies have gone wrong, therefore we give 90% probability to this policy also going wrong.” I think, though, that 1) well under 90% of government policies have gone wrong, and 2) even if there were only a 10% chance of policy successfully reducing x-risk, that would still seem worth a try.
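To spell out the reasoning behind point 2, here is a minimal expected-value sketch; the variables and the 10% figure are illustrative assumptions for this comment, not estimates:

$$\mathbb{E}[\text{benefit}] \approx p \cdot \Delta R \cdot V - C$$

where $p$ is the chance the policy works, $\Delta R$ is the reduction in extinction risk if it does, $V$ is the value at stake, and $C$ is the cost of trying. Even at $p = 0.1$, any non-trivial $\Delta R$ applied to stakes as large as $V$ swamps any plausible $C$.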
I think people are generally correct to treat LLMs-in-particular as a normal technology, but I think they’re correct by coincidence.
Agree, although I’m agnostic on whether LLMs or paradigms building upon them will actually lead to takeover-level AI. So people might still be consequentially wrong rather than coincidentally correct.
It seems that you and I are in agreement that comms, x-risk awareness, and gradual development are all generally good, on present margins.
Thank you, good to establish.
I agree that the goals we could implement would be limited by the state of technical alignment, but as you say, I don’t see a reason not to work on them in parallel. I’m not convinced one is necessarily much harder or easier than the other. The whole thing is such a pre-paradigmatic mess that anything seems possible, and working on a defensible bet without significant downside risk seems generally good. Goalcrafting seems to be a significant part of the puzzle that has received comparatively little attention (small contribution). The four options you mention could be interesting to work out further, but of course there are a zillion other possibilities. I don’t think there’s even a good taxonomy right now?
I agree that “involving society” was poorly defined, but what I have in mind would at least include increasing our comms efforts about AI’s risks (including but not limited to extinction). Hopefully that increases the input that non-xriskers can give. Political scientists, historians, philosophers, and social scientists all seem relevant. Artists should make art about possible scenarios. I think there should be a public debate about what alignment should mean, exactly.
I don’t think any one of us (or even our bubble combined) is wise enough to decide the future of the universe unilaterally. We need to ask people: if we end up with this alignable ASI, what would you want it to do? What dangers do you see?
Thanks, yeah, tbh I also felt dismissive about those projects. I’m one of the perhaps few people in this space who never liked scifi, and those projects felt like scifi exercises to me. Scifi feels a bit plastic to me: cheap, thin on the details, and it might as well be completely off. (I’m probably insulting people here, sorry about that; I’m sure there is great scifi. I guess these projects were also good, all things considered.)
But if it’s real rather than scifi, the future and its absurdities suddenly become very interesting. Maybe we should write papers with exploratory engineering and error bars rather than stories on a blog? I did like the work of Anders Sandberg, for example.
What we want the future to be like, and not be like, necessarily has a large ethical component. I also have to say that ethics originating from the x-risk space, such as longtermism, tends to defend very non-mainstream ideas that I don’t agree with. Longtermism has mostly been critiqued for its ASI claims, its messengers, and its lack of discounting, but I think the really controversial parts are its symmetric population ethics (leading to a perceived necessity to quickly colonize the lightcone, which I don’t share) and its debatable decision to also count AI as valued population (leading to wanting to replace humanity with AI for efficiency reasons).
Since I disagree with these ideas, ethically I’d trust a kind of informed public average more than I’d trust many xriskers. I’d be more excited about papers trying their best to map possible futures, using mainstream ethics (and fields like political science, sociology, psychology, art and aesthetics, economics, etc.) to 1) map and avoid ways to go extinct, 2) map and avoid major dystopias, and 3) aim for actually good futures.