It still seems bad to advocate for exactly the wrong policy, especially one that doesn’t make sense even if you turn out to be correct (as habryka points out in the original comment, 2028 is not actually when most people expect AGI to have arrived). It seems very predictable that people will just (correctly) not listen to the advice, and in 2028 both sides of this issue will believe their view has been vindicated: you will think that of course rationalists will never change their minds and emotions about AGI doom, and most rationalists will think it was obviously right not to follow the advice, because they never expected AGI to definitely arrive before 2028.
I think you would have much more luck advocating for chilling out today and citing past evidence to make your case.
It still seems bad to advocate for exactly the wrong policy, especially one that doesn’t make sense even if you turn out to be correct (as habryka points out in the original comment, 2028 is not actually when most people expect AGI to have arrived).
I’m super sensitive to framing effects. I notice one here. I could be wrong, and I’m guessing that even if I’m right you didn’t intend it. But I want to push back against it here anyway. Framing effects don’t have to be intentional!
It’s not that I started with what I thought was a wrong or bad policy and tried to advocate for it. It’s that, given all the constraints, I thought preregistering a possibility as a “pause and reconsider” moment might be the most effective and respectful approach. It’s not what I’d have preferred if things were different. But things aren’t different from how they are, so I made a guess about the best compromise.
I then learned that I’d made some assumptions that weren’t right, and that settling on a pause point that would carry collective weight is much trickier than I’d thought. Alas.
But it was Oliver’s comment that brought this problem to my awareness. At no point did I advocate for what I thought at the time was the wrong policy. I had hope because I thought folk were laying down some timeline predictions that could be falsified soon. Turns out, approximately nope.
i think you would have much more luck advocating for chilling today and citing past evidence to make your case..
Empirically, I disagree: doing that effectively has demonstrably not been within the reach of my skill. But it’s a sensible thing to consider trying again sometime.
To be clear, I am not claiming that you wrote this post believing it was wrong. I believe you are trying your best to improve the epistemics, and I commend the effort.
I had interpreted your third sentence as still defending the policy of the post despite now agreeing with Oliver, but I understand now that this is not what you meant, and that you are no longer in favor of the policy advocated in the post. My apologies for the misunderstanding.
I don’t think you should just declare that people’s beliefs are unfalsifiable. Certainly some people’s views will be. But finding a crux is always difficult, and imo it should be done through high-bandwidth conversation, talking to many people directly to understand their views first (in every group of people, especially one that encourages free thinking among its members, there will be a great diversity of views!). It is not effective to put people on blast publicly and then backtrack when they push back saying you misunderstood their position.
I realize this would be a lot of work to ask of you. Unfortunately, coordination is hard; it’s one of the hardest things in the world. I don’t think you have any moral obligation to do this beyond whatever obligation you feel toward making AI go well / improving this community. I’m mostly saying this to lay out my view of why I think this post did not accomplish its goals, and what I think would be the most effective course of action for finding a set of cruxes that truly captures the disagreement. I think this would be very valuable if accomplished, and it would be great if someone did it.