That particular theory, no. I don’t think I’ve mentioned it online before.
I grant that my theory would be rather hard to prove or disprove. If you want to argue that something is absolutely safe, you’d probably be giving a bunch of caveats about proper use and suitable people to use it. If you want to argue that something isn’t absolutely safe, you’ll be bringing up sloppy use and side effects.
Meditation is very commonly recommended as good for people. Alicorn is the first person I’ve heard of who reacted that badly to it.
The practical application of my theory is to take some care in how you make general recommendations of what seems like it should be good for everyone, and pay attention if something which is supposed to be good for everyone seems to be going wrong on you.
I hope I didn’t come across as picking on you; I just know that when people form pet theories, they have trouble letting them go, and this site tends to encourage a bit of nit-picking in that regard.
I think your response, especially the last paragraph, sums up what is good about your idea (and I agree somewhat with CronoDAS in principle that it is well-motivated). I hope that you enjoyed spelling it out explicitly in the same way that I enjoyed seeing it in more detail.
You did come across as picking on me—I saw your question as meaning that you didn’t agree with me and that you thought worse of me for what I’d said. It was possible to deduce what you were disagreeing with, but the emotional noise made it more difficult, and left me disinclined to pursue the matter.
I was only mildly upset, but it took me a while to decide it was worth trying to address what seemed to be your point.
By the time I’d gotten to my last paragraph, I was enjoying laying things out, but other than that, not especially fun. Neither was writing this reply.
I’m sorry. I didn’t mean to pick on you, and it sounds like my attempt to be a little friendlier in my later comment didn’t succeed very well.
I do think that it is important to register when a theory fails as well as when it succeeds, but I wish I had said that in a way that was less snarky.
I don’t think I’m doing a very good job of being friendly in this post either, so I guess I’ll leave it at that.
Sorry—I should have gotten back to you sooner.
What happened with your comment above was that it seemed like an attempt to take charge of my emotions, and that’s an extreme hot-button issue for me.
Also, my original comment was pushing things a little in the wrong direction—putting too much emphasis on it being my theory.
So, some years later, I’m surprised I was upset. I consider this to be progress.
I have been somewhat intrigued to observe that attempts to be friendly or conciliatory in comments seem to backfire more often than not. The dynamics are counter-intuitive.
Oh no! I just thought I was having an off day.
Although I guess, to be fair, I should pose my original question to you as well: have you really been looking at cases where that does not hold?
It certainly seems to be true in this case but “more often than not” makes me fear for the future.
I have tried. But I expect there are cases that didn’t catch my attention at all.
I forgot to mention that another implication is to pay attention if someone tells you that something which is supposed to be good for everyone is going wrong on them.

That backs my theory that anything which is strong enough to do good is strong enough to do harm.
I think that is related to the theory of why idiot-proofing is misguided. If you want to make something completely idiot-proof, you have to make it impossible to make a bad decision, which, in practice, means taking away the ability to make any decisions at all. That means anything idiot-proof is also pretty much guaranteed to be completely useless. If something is powerful enough to do good, it has to be powerful enough to change something, and, as with idiot-proofing, it’s really, really hard to prevent every possible bad change without preventing all change whatsoever.
Good theory, but I also quite like the more traditional theory:

Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning.
Do you like it, or believe it?
Mostly like it for comedy value, but I think there is an element of truth.
I would agree, on reflection.
Edit: I am curious if we see the same element, however. It seems to me that that element is aptly summarized as “writing a program that cannot fail spectacularly when used by someone who doesn’t understand it is a tremendous challenge—one which is necessary to face, but one which has stood against the combined best efforts of at least a generation of programmers.”