to be clear, I am not intending to claim that you wrote this post believing that it was wrong. I believe that you are trying your best to improve the epistemics and I commend the effort.
I had interpreted your third sentence as still defending the policy of the post even despite now agreeing with Oliver, but I understand now that this is not what you meant, and that you are no longer in favor of the policy advocated in the post. my apologies for the misunderstanding.
I don’t think you should just declare that people’s beliefs are unfalsifiable. certainly some people’s views will be. but finding a crux is always difficult, and imo it should be done through high-bandwidth conversation: talking to many people directly to understand their views first (in every group of people, especially one that encourages free thinking among its members, there will be a great diversity of views!). it is not effective to put people on blast publicly and then backtrack when they push back saying you misunderstood their position.
I realize this would be a lot of work to ask of you. unfortunately, coordination is hard; it’s one of the hardest things in the world. I don’t think you have any moral obligation to do this beyond whatever obligation you feel toward making AI go well / improving this community. I’m mostly laying out my view of why I think this post did not accomplish its goals, and what I think would be the most effective way to find a set of cruxes that truly captures the disagreement. that would be very valuable if accomplished, and it would be great if someone did it.