To be honest, these epistemic positions sound like counterproductive delaying tactics at best. If the future of humanity can be summed up as “we are an increasingly embattled minor nation in a hostile international conflict zone, and we must ruthlessly police the ingroup-outgroup divide to maintain our bio-ethno-national identity”, then I don’t see much, if any, increase in wellbeing in store for “normal humans”. At best we become a transhumanist North Korea; at worst we find the war we’re looking for and lose to forces we cannot even comprehend.
Most of what motivates me to work on AI safety and theory of alignment is the belief that there are options other than what you have presented here.
I’m afraid you might be right, though maybe something like “transhumanist North Korea” is the best we can hope for while remaining meaningfully human. Care to outline, or link to, other options you have in mind?
Hey David,
I wish I had easy answers I could link. I’ve looked at a lot of angles in this space (everything from AI for epistemics to collective deliberation/digital democracy tools, all the way to the Deep Lore of agent foundations/FEP, etc.), and I haven’t found anything like a satisfying solution package. Even the stuff I wrote up myself, I wasn’t happy with. My current plan is to try to deconfuse myself about a lot of these topics and to work on bridging the gap between the theory of machine learning as we have historically understood it and theories of biological learning. I think the divide between ML systems and biological systems is smaller than people assume, which means there is both more room for multi-scalar cooperation and more risk of harm from misconceptions and mistreatment.
Sounds like we’re in the same boat!
Would love to discuss more in DMs/email if you feel up for it or have thoughts to share; most of my relevant contact info/past work can be found here: https://utilityhotbar.github.io