I agree that fixating on doom is often psychologically unhealthy and pragmatically not useful. I also agree with those commenters pointing out that 2028 probably will look mostly normal, even under many pretty short timelines scenarios.
So… why advocate waiting until 2028? It is entirely possible to chill out now, without sacrificing any actual work you’re doing to make the future go better, while also articulating or developing a more positive vision of the future.
It took me about a decade to get through the combined despair of learning about x-risk plus not fully living up to some other things I once convinced myself I was morally obligated to do to consider myself a good person. I’m now a more useful, pleasant, happy, and productive person, more able to actually focus on solving problems and taking actions in the world.
I think I agree. To answer your question:
I’d prefer we didn’t. But my impression at this point is that Less Wrong as a community needs to dialogue with models carefully to be willing to adopt them, and “Let’s talk about it right now to sort out whether to adopt it right now” requires quite a lot of social buy-in. The most norm-respecting way I know to get there is to agree on a prediction that folk agree distinguishes between key possibilities, and then revisit it when the prediction’s outcome is determined.
But yes, I agree with what I think you’re saying. I’d love to see this space take on a more “We’re all in this together” attitude toward the rest of humanity, and focus more on the futures we do want. And I think those are things that could happen without waiting until 2028.
I also wish folk would take more seriously that their feelings, and the subtle stuff they encountered preverbally, play a way bigger role than they will ever logically appear to. What I see in lots of rationalist spaces, and especially in AI risk spaces, is not subtle in terms of the degree of disembodiment and visceral fear. But I haven’t found a way to convey what I’m seeing there without it sounding condescending or disrespectful or something.
So, I’m trying for something more like collectively preregistering a prediction and revisiting later. I’m now getting the impression that’s not going to work either, but I’m observing what I think is something good coming out of the discussion anyway.