I see, my bad. So far I believed myself to be pretty good at detecting when someone is joking. But given what I have encountered on Less Wrong in the past, including serious treatments and discussions of the subject, I thought you actually meant what you wrote there. Although now I am not so sure anymore whether people were actually serious on those other occasions :-)
I am going to send you a PM with an example.
Under normal circumstances I would actually regard the following statements by Ben Goertzel as sarcasm:
Of course, this faith placed in me and my team by strangers was flattering. But I felt it was largely justified. We really did have a better idea about how to make computers think. We really did know how to predict the markets using the news.
or
We AI folk were talking so enthusiastically, even the businesspeople in the company were starting to get excited. This AI engine that had been absorbing so much time and money, now it was about to bear fruit and burst forth upon the world!
I guess what I encountered here messed up my judgement by going too far in suppressing the absurdity heuristic.
But given what I have encountered on Less Wrong in the past, including serious treatments and discussions of the subject, I thought you actually meant what you wrote there.
The absurd part was supposed to be that Ben actually came close to building an AGI in 2000. I thought it would be obvious that I was making fun of him for being grossly overconfident.
BTW, I think some people around here do take ideas too seriously, and reports of nightmares probably weren’t jokes. But then I probably take ideas more seriously than the average person, and I don’t know on what grounds I can say that they take ideas too seriously, whereas I take them just seriously enough.
some people around here do take ideas too seriously … I don’t know on what grounds I can say that
If you ever gain a better understanding of the grounds on which you’re saying it, I’d definitely be interested. It seems to me that insofar as there are negative mental health consequences for people who take ideas seriously, these would be mitigated (and amplified, but more mitigated than amplified) if such people talked to each other more. That, however, is made more difficult by the risk that some XiXiDu type will latch onto something they say and cause damage by responding with hysteria.
One could construct a general argument of the form, “As soon as you can give me an argument why I shouldn’t take ideas seriously, I can just include that argument in my list of ideas to take seriously”. It’s unlikely to be quite that simple for humans, but still worth stating.