This is just cracking a dark artsy joke. I still like it since reversed stupidity (or is it intelligence in this case?) truly isn’t intelligence (Konkvistador’s brain starts to ache).
No, the better approach is simply to take into account whether any important conflict of interest exists between you and a very clever party you don't fully trust when evaluating their arguments. Yes, yes, ad hominem, I know, yet it does sound like good tactical advice, no?

Edit: It turns out it wasn't a joke.
Are you telling me that I apply this as a dark arts tactic to avoid being persuaded? That is, are you calling me stupid and arrogant? I insist that one of those does not apply!
EDIT: Oh, wait, you could be suggesting that I'm trying to portray XiXiDu as stupid and arrogant? I deny that charge. It doesn't apply in this instance, and when I do say things that are insulting, my track record indicates that I say them rather directly. In fact, I point out that it is XiXiDu who calls himself stupid, and on more than one occasion I have flat-out denied and contradicted his claim.
EDIT: Never mind. Parent changed. New reply:
This is just cracking a dark artsy joke.
Huh? No it isn't. It's an agreement with XiXiDu's point. It is a phenomenon that applies and, I suggest, one that is implemented to a certain context-sensitive degree by humans.
Oh, sorry, I thought you were being sarcastic and that you were disagreeing with XiXiDu and perhaps even setting up a straw man. Maybe the reason I misidentified this is your use of "arrogant". When I think of protective stupidity, I don't associate that word with it.
How do you find out whether a conflict of interests exists? That’s one of the things someone who’s trying to manipulate you will try to conceal, and if they’re a lot smarter, they’re more likely to succeed at it.
Sure. But I think at least some conflicts of interest are very hard to conceal. At the very least, if someone finds this argument compelling, the other party can't prompt them to denounce this check on principle.
Most strategies that could help one avoid malicious advice stemming from hard-to-detect conflicts of interest seem to have a (to me) unacceptably high false positive rate. Not so much in the context of a scenario where you are dealing with a boxed AI, but more, say, when one is interacting with very intelligent people in a business environment or in personal life. It seems to me that such strategies would carry high opportunity costs.