At first maybe you try to argue with them about it. But over time, a) you find yourself not bothering to argue with them
>Whose fault is that, exactly…?
b) even when you do argue with them, they’re the ones choosing the terms of the argument.
>Ditto.
If they think X is important, you find yourself focused on arguing whether or not X is true, and ignoring all the different Ys and Zs that maybe you should have been thinking about.
>Ditto.
---
I agree that nothing about the examples you quote is unacceptably bad – all these things are “socially permissible.”
At the same time, your “Whose fault is that, exactly...?” makes it seem like there’s nothing the guru in question could be doing differently. That’s false.
Sure, some people are okay with seeing every social interaction as something where everyone is in it for themselves. However, in close(r) relationship contexts (e.g. friendships, romantic relationships, probably also spiritual mentoring from a guru?), many people operate on the assumption that people care about each other, want to preserve each other’s agency, and help each other flourish. In that context, it’s perfectly okay to expect that others will (1) help me notice, and speak up, if something doesn’t quite feel right to me (as opposed to keeping quiet), and (2) help me arrive at informed/balanced views after carefully considering alternatives, as opposed to presenting me with only their own terms of the argument.
If the guru never says “I care about you as a person,” it’s fine for him to operate as he does. But once he starts reassuring his followers that he always has their best interests in mind – that’s when he crosses the line into immoral, exploitative behavior.
You can’t have it both ways. If your answer to people getting hurt is always “well, whose fault was that?”
Then don’t ever fucking reassure them that you care about them!
In reality, I’m pretty sure “gurus” almost always go to great lengths to convince their followers that they care about them more than almost anyone else does. That’s where things become indefensible.

---
“Effective compute” is the combination of hardware growth and algorithmic progress? If those are multiplicative rather than additive, slowing one of the factors may accomplish little on its own, but slowing both at the same time could pave the way for more significant changes?
Unfortunately, it seems hard to significantly slow algorithmic progress. I can think of changes to publishing behaviors (and improving security) and pausing research on scary models (for instance via safety evals). Maybe things like handicapping talent pools via changes to immigration policy, or encouraging capability researchers to do other work. But that’s about it.
Still, combining different measures could be promising if the effects are multiplicative rather than additive.
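To make the “multiplicative” picture concrete (a toy model in my own notation, nothing from the post): if effective compute is the product of hardware compute $H(t)$ and algorithmic efficiency $A(t)$,

$$E(t) = H(t) \cdot A(t),$$

then a measure that halves available hardware and a separate measure that halves algorithmic efficiency together cut effective compute by a factor of 4, whereas either measure alone only buys a factor of 2.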
Edit: Ah, but I guess your point is that even a 100% tax on compute wouldn’t really change the slope of the compute growth curve – it would only shift the curve to the right and delay things a little. So we don’t get a multiplicative effect, unfortunately. We’d need to find an intervention that changes the steepness of the curve.
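Spelling that out with a toy exponential model (my notation, purely illustrative): suppose effective compute grows as

$$E(t) = E_0 \, e^{g t}.$$

A 100% tax that halves how much compute a given budget buys gives

$$\tfrac{1}{2} E_0 \, e^{g t} = E_0 \, e^{g \left(t - \tfrac{\ln 2}{g}\right)},$$

i.e. the same curve shifted to the right by $\ln 2 / g$; with a one-year doubling time that’s roughly a one-year delay, and the slope $g$ is untouched. Only an intervention that lowers $g$ itself would change the steepness.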