This feels like a conflict-theory-on-corrupted-hardware argument: AI risk people believe they are guided by technical considerations, but the norm actually governing their behavior is the same one at work everywhere else in technology, smothering progress rather than earnestly seeking a way forward while navigating the dangers.
So I think the argument is not about the technical considerations themselves, which could well be mostly accurate, but about a culture with an unhealthy attitude toward them, one that shapes technical narratives and decisions. There has been a recent post making a point of the same kind.