At the risk of seeming quite combative, when you say
And I know a lot of safety people at DeepMind and other AGI labs who I’m very confident also sincerely care about reducing existential risks. This is one of their primary motivations: they often got into the field after being convinced by arguments about AI risk; they often raise concerns in conversation that their current work, or their team’s current strategy, is not focused enough on it; some are extremely hard-working, or admirably willing to forgo credit so long as they think their work actually matters for x-risk; and some dedicate a lot of time to forming detailed mental models of how AI leads to bad outcomes, how this could be prevented, and how their work fits in; and so on.
That’s basically what I meant when I said in my comment
AI safety, by its nature, resists the idea of creating powerful new information technologies to exploit mercilessly for revenue without care for downstream consequences. However, many actors in the AI safety movement are themselves tied to the digital economy, and depend on it for their power, status, and livelihoods. Thus, it is not that there are no genuine concerns being expressed, but that at every turn these concerns must be resolved in a way that keeps the massive tech machine going. Those who don’t agree with this approach are efficiently selected against. [examples follow]
And, after thinking about it, I don’t see your statement conflicting with mine.