On a more general note, it’s certainly possible that I vastly overestimate how good the median LessWronger will be at presenting the case for halting AI progress to non-rationalists.
After all, I’ve kept up considerable involvement with my normie family and non-rationalist communities over the past years and put a bunch of skill points into bridging the worlds—to the point that, by now, I find it easier to navigate leftist than rationalist spaces despite my more gray tribe politics, because I know the local norms from the olden days and expect leftists to be more fluent at guess culture, so I don’t need to verbalize as many things. In addition, compared to others here, I’m unusually agnostic on the more controversial LW pet topics like transhumanism.
At the same time, having constructive conversations with normies is a learnable skill. I suspect that many LWers have about as much learned helplessness around that as I had two or three years ago. I admit that it might make sense for super technical people to stay in their lane and just keep building on their existing skill trees. Still, I suspect that for more rationalists than are currently doing it, investing more skill points into being normie-compatible and helping with Control AI-style outreach might be a high-leverage thing to do.
I’m also hanging out a lot more with normies these days and I feel this.
But I also feel like maybe I just have a very strong local aura (or like, everyone does, that’s how scenes work) which obscures the fact that I’m not influencing the rest of the ocean at all.
I worry that a lot of the discourse basically just works like barrier aggression in dogs. When you’re at one of their parties, they’ll act like they agree with you about everything; when you’re seen at a party they’re not at, they forget everything you said and start baying for blood. Go back to their party, and they stop. I guess in that case, maybe there’s a way of rearranging the barriers so that everyone comes to see it as one big party. Ideally, make it really be one.