Memory charms do have their uses. Unfortunately, they seem to only work in universes where minds are ontologically basic mental entities, and the potions available in this universe are not fast, reliable or selective enough to be adequate substitutes.
Quirinus_Quirrell
Of course. The defining difference is that force can’t be ignored, so threatening a punishment only constitutes force if the punishment threatened is strong enough; condemnation doesn’t count unless it comes with additional consequences. Force is typically used in the short term to ensure conformance with plans, while behaviour modification is more like long-term groundwork. Well-executed behaviour modifications stay in place with minimal maintenance, but the targets of force will become more hostile with each application. If you use a behaviour-modification strategy when you should be using force, people may defy you when you can ill afford it. If you use force when you should be using behaviour-modification strategies, you will accumulate enemies you don’t need.
Translation: [...] I cannot walk away from this and leave you being wrong, you must profess to agree with me and if you are not rational enough to understand and accept logical arguments then you will be forced to profess agreement.
I never said anything about using force. Not that there’s anything wrong with that, but it’s a different position, not a translation.
Or what, you’ll write me an unhappy ending? Just be thankful I left a body behind for you to finish your story with.
The “just hack out of the matrix” answer, however, presupposes the existence of a security hole, which is unlikely.
Not as unlikely as you think.
That doesn’t close the loophole; it adds a constraint. And it’s only significant for those who both hire enough people to be vulnerable to statistical analysis of their hiring practices and receive too many bad applicants from protected classes. If it is a significant constraint, you want to find that out from the data, not from guesswork, and apply the minimum legally acceptable correction factor.
Besides, it’s not like muggles are a protected class. And if they were? Just keep them from applying in the first place, by building your office somewhere they can’t get to. There aren’t any legal restrictions on that.
If the best way to choose who to hire is with a statistical analysis of legally forbidden criteria, then keep your reasons secret and shred your work. Is that so hard?
From the username, I was expecting the suggestion to be to say avada kedavra.
I’d never say that on a forum that would generate a durable record of my comment.
I’m beginning to think that LW needs some better mechanism for dealing with the phenomenon of commenters who are polite, repetitive, immune to all correction, and consistently wrong about everything.
The problem is quite simple. Tim, and the rest of the class of commenters to which you refer, simply haven’t learned how to lose. This can be fixed by making it clear that this community’s respect is contingent on retracting any inaccurate positions. Posts in which people announce that they have changed their mind are usually upvoted (in contrast to other communities), but some people don’t seem to have noticed.
Therefore, I propose adding a “plonk” button on each comment. Pressing it would hide all posts from that user for a fixed duration, and also send them an anonymous message (red envelope) telling them that someone plonked them, which post they were plonked for, a form-letter reminder that self-consistency is not a virtue, and a short guide to losing gracefully. A sketch of the mechanism follows.
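Roughly, the mechanism I have in mind looks like this. It is only a sketch; the names are my own invention and bear no relation to LW’s actual codebase.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # Hypothetical sketch of the proposed "plonk" mechanism.
    PLONK_DURATION = timedelta(days=14)  # "fixed duration"; the value is arbitrary

    FORM_LETTER = (
        "Someone plonked you for the comment linked below. "
        "Reminder: self-consistency is not a virtue. "
        "Attached: a short guide to losing gracefully."
    )

    @dataclass
    class Plonk:
        plonker: str       # who pressed the button (kept private)
        plonked: str       # whose posts get hidden
        comment_id: str    # the comment they were plonked for
        expires: datetime

    def plonk(plonker, plonked, comment_id, plonks, send_message):
        """Record the plonk and notify the target anonymously."""
        plonks.append(Plonk(plonker, plonked, comment_id,
                            datetime.utcnow() + PLONK_DURATION))
        # Anonymous "red envelope": names the comment, not the plonker.
        send_message(to=plonked,
                     body=f"You were plonked for comment {comment_id}.\n{FORM_LETTER}")

    def visible(viewer, author, plonks):
        """A comment is hidden while the viewer has an unexpired plonk against its author."""
        now = datetime.utcnow()
        return not any(p.plonker == viewer and p.plonked == author and p.expires > now
                       for p in plonks)

The message deliberately identifies the offending comment but never the plonker; the point is correction, not a feud.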
You needn’t worry on my behalf. I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?
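For the curious, “posting only through Tor” is nothing more exotic than routing every request through the local SOCKS proxy and letting the egress filter drop anything that tries to go around it. A minimal sketch in Python, assuming Tor’s default SOCKS port (9050) and the requests library with SOCKS support installed:

    import requests  # assumes requests[socks] (PySocks) is installed

    # Route everything through the local Tor SOCKS proxy; the VM's egress
    # filter should drop any traffic that attempts to bypass it.
    TOR_PROXY = {"http": "socks5h://127.0.0.1:9050",
                 "https": "socks5h://127.0.0.1:9050"}

    def tor_get(url):
        return requests.get(url, proxies=TOR_PROXY, timeout=60)

    # Sanity check that traffic actually exits via Tor before posting anything.
    print(tor_get("https://check.torproject.org/api/ip").json())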
By the way, while I may sometimes make jokes, I don’t consider this a joke account; I intend to conduct serious business under this identity, and I don’t intend to endanger that by linking it to any other identities I may have.