[Your arguments] allow you to prove too much. To any objection, you could say “Well, see, you are only objecting to this because you have been thinking about AI risk for too long, and thus you are not able to reason about the issue properly”.
Um. That’s a thing I suppose someone could do with some variation of these frames, sure. That’s not a move I’m at all interested in though. I really would prefer no one does this. It warps the point into something untrue and unkind.
I’m much more interested in something like:
- There’s this specific internal system design a person can fall into.
- It’s a pretty loud feature of the general rationalist cluster.
- If you (a general reader, not you mkualqulera per se) are subject to this pattern and you want out, here’s a way out.
- Also, people who are in such a pattern but don’t want out (or are too stuck in it to see they’re in it) are in fact making the real thing harder to solve. So noticing and getting out of this pattern really is a priority if you care about the real thing.
Now, if someone freaks out at me for pointing this out and makes some bizarre assumptions about what I’m saying (like, say, that I’m claiming there’s no AI problem or that I’m saying any action to deal with it is delusional), at that point I consider it way more likely that they’re “drunk”, and I’m much more likely to ignore what they have to say. Their ravings and condemnation land for me like a raging alcoholic who’s super pissed I implied they have a problem with an addiction.
But none of this is about me winning arguments with people. It’s about pointing out a mechanism for those who want to see it.
And for those for whom it doesn’t apply, or to whom it does but they’re determined not to look? Well, cool, good on them! Truly.
(Also, I like the kind of conflict you’re wrestling with. I don’t want to try to argue you out of that. I just wanted to clarify this part a bit.)