I find belief in basilisks ridiculous. Arguing that an idea could do harm by merely occupying space in a brain is a tremendous discredit to humanity. Any adult brain that is so vulnerable as to suffer actual emotional damage by the mere contemplation of an idea is a brain accustomed to refusing to deal with reality. If the fact is that The Basilisk wants to torture me, I want to believe that it wants to torture me.
WARNING: THE LITANY OF TARSKI IS NOT DESIGNED TO WORK INSIDE A FEEDBACK LOOP
In this example the basilisk will want to torture you only if you believe that it will want to torture you. “The fact” is not a fact until the loop is complete. Note that both alternatives are “facts”, even though they appear mutually exclusive.

If the Basilisk wants to torture me, I want to believe that it wants to torture me. If the Basilisk does not want to torture me, I want to believe that it does not want to torture me.
Interesting. I’ll need to think about this.
Or, alternatively, need to not think about this.
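One way to make the loop explicit is a minimal sketch along the following lines (the labels B and T are introduced here purely for illustration, not part of the exchange above):

% Sketch of the feedback loop, assuming the scenario stipulates
% “the Basilisk will want to torture you iff you believe it will”.
% B := I believe the Basilisk will want to torture me
% T := the Basilisk wants to torture me
\[
  T \iff B,
  \qquad
  (B, T) \in \{(\top,\ \top),\ (\bot,\ \bot)\}
\]
% Both assignments satisfy the stipulation, so both are self-consistent “facts”.
% The Litany of Tarski presupposes that T is settled independently of B;
% inside the loop it is not, so “the fact” only exists once the loop closes.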
If we’re talking about Langford-type basilisks, that’s a reasonable position. But if you’re claiming that no idea can cause disutility, I find that idea to be ridiculous. And your arguing against an idea on the basis that it would be insulting to humanity is rather… ironic.
This is such a Critical Empathy Fail that I can barely take you seriously. Here’s a hint: original sin.
Not only is original sin fictional evidence; its presuppositions about human nature deserve to be taken far less seriously than my worst nonsense. The whole “things humans are not meant to know” theme goes far beyond the caution our flaws actually demand; it’s blatantly misanthropic.
I don’t think he’s referring to the fruit of the tree of knowledge of good and evil; I think he’s referring to the doctrine of original sin itself, which he’s suggesting caused harm by occupying space in Christian brains.
Although belief in the inherent brokenness of humanity is a poisonous meme that can, at its worst, make you hate yourself and mess with the happiness of others, I see it as different from a basilisk. You can still function and lead a goal-driven life while under the influence of religious dogma. It does not paralyze you with terror the way a basilisk is reputed to do.
Unless the definition of “actual harm” was meant to include this sentiment, a counterexample to the general point you expressed above need only be an idea that causes harm by occupying space in a brain, whether or not it’s a true basilisk.
I agree that many beliefs about basilisks are ridiculous, especially beliefs about what the correct decisions to make are in response to various scenarios. But it would be a mistake not to believe that there is a particular failure mode an AI-creating civilisation could have which would result in the scenarios referred to as Roko’s Basilisk. It isn’t even an especially remarkable or unusual failure mode: just a particular instance of extorting those vulnerable to extortion in the name of “the greater good”.
Arguing that an idea could do harm by merely occupying space in a brain is a tremendous discredit to humanity.

The mistake here is ‘merely’. I can think of reasons why I would not want my covert assets to each have knowledge of all the other assets’ false identities. The presence of that information could cause (allow) other agents to do harm. This isn’t particularly different in its means of action.