I think I agree. To answer your question:
I’d prefer we didn’t. But my impression at this point is that Less Wrong as a community needs to dialogue with models carefully to be willing to adopt them, and “Let’s talk about it right now to sort out whether to adopt it right now” requires quite a lot of social buy-in. The most norm-respecting way I know to get there is to settle on a prediction that folk agree distinguishes between key possibilities, and then revisit it once the prediction’s outcome is determined.
But yes, I agree with what I think you’re saying. I’d love to see this space take on a more “We’re all in this together” attitude toward the rest of humanity, and focus more on the futures we do want. And I think those are things that could happen without waiting until 2028.
I also wish folk would take more seriously that their feelings, and the subtle stuff they encountered preverbally, play a way bigger role than they’ll ever logically appear to. What I see in lots of rationalist spaces, and especially in AI risk spaces, is not subtle in terms of the degree of disembodiment and visceral fear. But I haven’t found a way to convey what I’m seeing there without it sounding condescending or disrespectful or something.
So, I’m trying for something more like collectively preregistering a prediction and revisiting later. I’m now getting the impression that’s not going to work either, but I’m observing what I think is something good coming out of the discussion anyway.