From the username, I was expecting that the suggestion was going to be to say avada kedavra.
I’d never say that on a forum that would generate a durable record of my comment.
If the best way to choose whom to hire is with a statistical analysis of legally forbidden criteria, then keep your reasons secret and shred your work. Is that so hard?
That doesn’t close the loophole, it adds a constraint. And it’s only significant for those who both hire enough people to be vulnerable to statistical analysis of their hiring practices, and receive too many bad applicants from protected classes. If it is a significant constraint, you want to find that out from the data, not from guesswork, and apply the minimum legally acceptable correction factor.
Besides, it’s not like muggles are a protected class. And if they were? Just keep them from applying in the first place, by building your office somewhere they can’t get to. There aren’t any legal restrictions on that.
The “just hack out of the matrix” answer, however, presupposes the existence of a security hole, which is unlikely.
Not as unlikely as you think.
Or what, you’ll write me an unhappy ending? Just be thankful I left a body behind for you to finish your story with.
Translation: [...] I cannot walk away from this and leave you being wrong, you must profess to agree with me and if you are not rational enough to understand and accept logical arguments then you will be forced to profess agreement.
I never said anything about using force. Not that there’s anything wrong with that, but it’s a different position, not a translation.
Of course. The defining difference is that force can’t be ignored, so threatening a punishment only constitutes force if the punishment threatened is strong enough; condemnation doesn’t count unless it comes with additional consequences. Force is typically used in the short term to ensure conformance with plans, while behaviour modification is more like long-term groundwork. Well-executed behaviour modifications stay in place with minimal maintenance, but the targets of force will become more hostile with each application. If you use a behaviour modification strategy when you should be using force, people may defy you when you can ill afford it. If you use force when you should be using behaviour modification strategies, you will accumulate enemies you don’t need.
Memory charms do have their uses. Unfortunately, they seem to only work in universes where minds are ontologically basic mental entities, and the potions available in this universe are not fast, reliable or selective enough to be adequate substitutes.
You needn’t worry on my behalf. I post only through Tor from an egress-filtered virtual machine on a TrueCrypt volume. What kind of defense professor would I be if I skipped the standard precautions?
By the way, while I may sometimes make jokes, I don’t consider this a joke account; I intend to conduct serious business under this identity, and I don’t intend to endanger that by linking it to any other identities I may have.
In short, there most certainly ARE legal restrictions on building your office somewhere deliberately selected for its inaccessibility to those with a congenital inability to e.g. teleport.
The Americans with Disabilities Act limits what you can build (every building needs ramps and elevators), not where you can build it. Zoning laws are blacklist-based, not whitelist-based, so extradimensional spaces are fine. More commonly, you can easily find office space in locations that poor people can’t afford to live near. And in the unlikely event that race or national origin is the key factor, you get to choose which country or city’s demographics you want.
A lack of teleportation-specific case law would not work in your favor, given the judge’s access to statements you’ve already made.
This is the identity under which I speak freely and teach defense against the dark arts. This is not the identity under which I buy office buildings and hire minions. If it was, I wouldn’t be talking about hiring strategies.
Meh. The villains seem a lot less formidable in real life, like they left something essential behind in the fiction.
Hey, be patient. I haven’t been here very long, and building up power takes time.
Good idea. I’d vote at least once for this.
I recommend one additional layer of outgoing indirection prior to the Tor network as part of standard precaution measures.
Let’s not get too crazy; I’ve got other things to do, and there are more practical attacks to worry about first, like cross-checking post times against alibis. I need to finish my delayed-release comment script before I worry about silly things like setting up extra relays. Also, there are lesson plans I need to write, and some JavaScript I want Clippy to have a look at.
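The delayed-release idea is simple enough to sketch: hold each comment in a queue with a randomized release time, so posting timestamps stop correlating with when the author was actually at a keyboard. This is a minimal illustration, not my actual script; `post` is a hypothetical stand-in for whatever call actually submits the comment.

```python
import random
import time

def schedule_comment(text: str, min_delay_s: float, max_delay_s: float,
                     now=None) -> dict:
    """Queue a comment for release after a random delay."""
    now = time.time() if now is None else now
    return {"text": text,
            "release_at": now + random.uniform(min_delay_s, max_delay_s)}

def release_due(queue: list, now: float, post) -> list:
    """Post every queued comment whose release time has passed; return the rest."""
    pending = []
    for item in queue:
        if item["release_at"] <= now:
            post(item["text"])
        else:
            pending.append(item)
    return pending
```

Run `release_due` from a periodic job and the observable post time tells an adversary nothing useful about the writing time.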
This issue came up on Less Wrong before, and I will reiterate the advice I gave there: if a forbidden criterion affects a hiring decision, keep your reasons secret and shred your work. The linked article is about a case where the University of Kentucky was forced to pay $125,000 to an applicant, Martin Gaskell. This happened because the chairman of the search committee, Michael Cavagnero, was stupid enough to write this in a logged email:
If Martin were not so superbly qualified, so breathtakingly above the other applicants in background and experience, then our decision would be much simpler. We could easily choose another applicant, and we could content ourselves with the idea that Martin’s religious beliefs played little role in our decision. However, this is not the case. As it is, no objective observer could possibly believe that we excluded Martin on any basis other than religious...
And that’s where the trouble starts, because Martin Gaskell’s religious beliefs would have been a serious risk to the university’s reputation. No one would take a creationist seriously as an astronomer, and no one would take an observatory seriously if one of the first few Google results for its name connected it to creationism.
Which is why, as soon as they realized they had a creationist as a potentially leading candidate, they should have moved their hiring process into private meetings with poor note-taking, and started looking for better pretexts. Yes, anti-discrimination laws are crazy, but not all judges are. However, a judge can only work around craziness if you allow a suitable pretext, which means not discussing how you need to break the crazy laws in writing.
Someone as clever, powerful, and rich as yourself can likely find a collision if you get to choose both source texts (which is easier than finding a collision with one of the two inputs determined by someone else).
This is actually much harder than you’d think. A hash function is considered broken if any collision is found, but a mere collision is not sufficient; to be useful, a collision must have chosen properties. In the case of md5sum, it is possible to generate collisions between files which differ in a 128-byte aligned block, with the same prefix and suffix. This works well for any file format that is scriptable or de-facto scriptable—wrap the colliding block in a comparison statement, and behave differently depending on its result. However, even for md5sum, it is still impossible to generate a collision between plain-text files with two separate chosen texts; nor is it possible to generate collisions between files that have no random-seeming sections, or that have random sections that are too small, not block-aligned, or are drawn from a constrained alphabet. (Snowyowl’s joke would require a preimage attack, which is harder still, and which won’t be available at first even if sha1sum is broken, so he will not be able to fulfill his promise to reveal a message with that sha1sum.)
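The comparison-statement trick described above can be sketched in a few lines. The two 128-byte blocks here are placeholders that do NOT actually collide; a real pair would come from a dedicated MD5 collision tool. The point is the structure: identical bytes before and after an embedded block, with a branch on the block’s value.

```python
import hashlib

# Placeholder stand-ins for a real MD5-colliding 128-byte block pair.
# These two do NOT collide; they only illustrate the file layout.
BLOCK_A = b"\x00" * 128
BLOCK_B = b"\xff" * 128

def build_script(block: bytes) -> bytes:
    # Everything except the embedded block is identical in both scripts.
    # With a genuine block-aligned MD5 collision, both outputs would
    # share one digest while branching differently at run time.
    return (b"block = " + repr(block).encode() + b"\n"
            b"expected = " + repr(BLOCK_A).encode() + b"\n"
            b"print('benign' if block == expected else 'malicious')\n")

benign_script = build_script(BLOCK_A)
evil_script = build_script(BLOCK_B)
print(hashlib.md5(benign_script).hexdigest()
      == hashlib.md5(evil_script).hexdigest())  # prints False: placeholders don't collide
```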
Anyways, since you asked, here are a few more hashes of the same thing. I didn’t bother with the SHA-3 finalists, since convenient command-line utilities for them don’t seem to exist yet and I don’t want to force people to fiddle too much to verify my hashes.
sha512sum: 85cf46426d025843d6b0f11e3232380c6fac6cae88b66310ee8fbcd3f81722d08b2154c6388ecb1ee9cebc528e0f56e3be7a057cd67531cfda442febe0132418
sha384sum: 400d47bf97b6a3ccd662e0eb1268820c57d10e2a623c3a007b297cc697ed560862dda19b74638f92a3550fbbfe14d485
md5sum: 8fec2109c85f622580e1a78c9cabdab4
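For anyone who wants to check a later reveal against these digests, verification is trivial. The candidate message below is a hypothetical stand-in, not the actual precommitted text, so it will not match the hashes above; substitute the revealed message when it comes.

```python
import hashlib

def digests(message: bytes) -> dict:
    """Compute the same three digests posted above for a candidate message."""
    return {
        "sha512": hashlib.sha512(message).hexdigest(),
        "sha384": hashlib.sha384(message).hexdigest(),
        "md5": hashlib.md5(message).hexdigest(),
    }

# Hypothetical stand-in; compare each value to the posted digests.
candidate = b"example revealed message\n"
for name, value in digests(candidate).items():
    print(name, value)
```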
When should you punish someone for a crime they will commit in the future?
Easy. When they can predict you well enough and they think you can predict them well enough that if you would-counterfactually punish them for committing a crime in the future, it influences the probability that they will commit the crime by enough to outweigh the cost of administering the punishment times the probability that you will have to do so. Or when you want to punish them for an unrelated reason and need a pretext.
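The trade-off above reduces to a one-line expected-value comparison. The numbers in the example are invented purely for illustration.

```python
def should_precommit_to_punish(p_without: float, p_with: float,
                               harm: float, punish_cost: float) -> bool:
    """Precommit to punish iff expected deterrence outweighs expected cost.

    p_without / p_with: probability the crime is committed without / with
    the credible threat; harm: cost of the crime to you; punish_cost: cost
    of administering the punishment (paid only if they defect anyway).
    """
    deterrence_benefit = (p_without - p_with) * harm
    expected_cost = p_with * punish_cost
    return deterrence_benefit > expected_cost

# Invented numbers: the threat cuts crime probability from 0.5 to 0.1.
print(should_precommit_to_punish(0.5, 0.1, harm=100.0, punish_cost=20.0))  # prints True
```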
Not every philosophical question needs to be complicated.
You’re safeguarding against the wrong thing. If I needed to fake a prediction that badly, I’d find a security hole in Less Wrong with which to edit all your comments. I wouldn’t waste time establishing karma for sockpuppets to post editable hashes to deter others from posting hashes themselves; that would be silly. But as it happens, I’m not planning to edit this hash, and doing that wouldn’t have been a viable strategy in the first place.
I voted on this and the immediate parent, but I won’t reveal why, or which direction, or how many times, or which account I used.
The problem is quite simple. Tim, and the rest of the class of commenters to which you refer, simply haven’t learned how to lose. This can be fixed by making it clear that this community’s respect is contingent on retracting any inaccurate positions. Posts in which people announce that they have changed their mind are usually upvoted (in contrast to other communities), but some people don’t seem to have noticed.
Therefore, I propose adding a “plonk” button on each comment. Pressing it would hide all posts from that user for a fixed duration, and also send them an anonymous message (red envelope) telling them that someone plonked them, which post they were plonked for, and a form letter reminder that self-consistency is not a virtue and a short guide to losing gracefully.