This is evidence that Yudkowsky believed (...) that it was at least plausible enough that it could be developed into a correct argument, and he was genuinely scared by it.
Just to be sure, since you seem to disagree with this opinion (whether it is actually Yudkowsky’s opinion or not), what exactly is it that you believe?
a) There is absolutely no way one could be harmed by thinking about not-yet-existing dangerous entities, even if those entities, once they exist, will be able to learn that the person was thinking about them in this specific way.
b) There is a way one could be harmed by thinking about not-yet-existing dangerous entities, but the way to do this is completely different from what Roko proposed.
If it happens to be (b), then it still makes sense to be angry about publicly opening the whole topic of “let’s use our intelligence to discover the thoughts that may harm us by our thinking about them—and let’s do it in a public forum where people are interested in decision theories, so they are more qualified than average to find the right answer.” Even if the proper way to harm oneself is different from what Roko proposed, making this a publicly debated topic increases the chance of someone finding the correct solution. The problem is not the proposed basilisk, but rather inviting people to compete in clever self-harm—especially the kind of people known for being hardly able to resist such an invitation.
I’m not the person you replied to, but I mostly agree with (a) and reject (b). There’s no way you could possibly know enough about a not-yet-existing entity to understand any of its motivations; the entities that you’re thinking about and the entities that will exist in the future are not even close to the same. I outlined some more thoughts here.