it seems that “you need to do the trauma processing first and only then do useful work” is a harmful self-propagating meme, in much the same way that “you need to track and control every variable in order for AI to go well” is
If you just go around healing traumas willy-nilly, then you might not ever see through any particular illusion like this one if it’s running in you.
Kind of like how generically working on trauma processing might or might not help an alcoholic quit drinking. There’s some reason for hope, but it’s possible to get lost in loops of navel-gazing, especially if they never even admit to themselves that they have a problem.
But if the trauma work is targeted at the addiction, the addiction basically doesn’t stand a chance.
I’m not trying to say “Just work on traumas and be Fully Healed™ before working on AI risk.”
I’m saying something much, much more precise.
I do in fact think there’s basically no point in someone working on AI risk if they don’t dissolve this specific trauma structure.
Well, or at least make it fully conscious and build their nervous system’s holding capacity enough that (a) they can watch it trying to run in real time and (b) they can reliably stop it from grabbing their inner steering wheel, so to speak.
But frankly, for most people it’d be easier just to fully integrate the pain than it would be to develop that level of general nervous system capacity without integrating said pain.
This. Trauma processing is just as prone to ouroboros-ing as x-risk work, if not more so.
Agreed.
And it’s also not actually relevant to my point.
(Though I understand why it looks relevant.)
Wouldn’t it be relevant in that someone could recognize unproductive, toxic dynamics in their concerns about AI risk as per your point (if I understand you correctly), decide to process trauma first, and then get stuck in the same sorts of traps? While “I’m traumatized and need to fix it before I can do anything” may not sound as flashy as “My light cone is in danger from unaligned, high-powered AI and I need to fix that before I can do anything”, it’s just as capable of paralyzing a person. I speak from both my own past mistakes and those of multiple friends.
Of course that’s possible. I didn’t mean to dismiss that part.
But… well, as I just wrote to Richard_Ngo:
I do in fact think there’s basically no point in someone working on AI risk if they don’t dissolve this specific trauma structure.
Well, or at least make it fully conscious and build their nervous system’s holding capacity enough that (a) they can watch it trying to run in real time and (b) they can reliably stop it from grabbing their inner steering wheel, so to speak.
But frankly, for most people it’d be easier just to fully integrate the pain than it would be to develop that level of general nervous system capacity without integrating said pain.