[Question] Accuracy of arguments that are seen as ridiculous and intuitively false but don’t have good counter-arguments

I think one of the biggest reasons I am worried about AI doom is that there don't seem to be very good counter-arguments. Most of them are just bad, and the ones that aren't bad are a bit vague (usually something along the lines of "humanity will figure it out" or "maybe LLMs will scale in nice ways").

However, I'm curious how accurate this heuristic is. My question: what past examples are there of an argument that was widely seen as ridiculous and intuitively false, but was actually pretty solid, while the counter-arguments weren't? (Sorry if that's a bit vague; use your best judgement. I'm looking specifically for examples that resemble the AI x-risk debate.) And did those arguments turn out true or false? Try to include the reasons the argument was so strong, a typical counter-argument, and the strongest counter-argument.

Please use spoiler text for whether each argument turned out true or false, so that I can try to predict it before seeing the answer!