Even a saner and more continuously distributed measure could yield that result, depending on how you fit the scale. If you measure the likelihood of making a mistake (so zero would be a perfect driver, and one a rabid lemur), I expect the distribution to be hella skewed. Most people drive in a sane way most of the time. But it’s the few reckless idiots you remember—and so does every single one of the thousand other drivers who had the misfortune to encounter them. It would not surprise me if driving mistakes more or less followed a Pareto distribution.
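To put a number on that intuition, here's a toy simulation; the shape parameter is pulled out of thin air purely for illustration (and note that numpy's `pareto` is the Lomax/Pareto II variant, which is close enough for this purpose):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-driver mistake rates from a heavy-tailed distribution.
# alpha = 1.5 is an arbitrary illustrative choice, not an empirical estimate.
alpha = 1.5
rates = rng.pareto(alpha, size=100_000)

# How much of the total "mistake mass" comes from the worst 1% of drivers?
worst_1pct = np.sort(rates)[-1_000:]
print(f"Share of mistakes from the worst 1%: {worst_1pct.sum() / rates.sum():.0%}")
```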
There probably was a time when killing Hitler had a significant chance of ending the war by enabling peace talks (allowing some high-ranking German generals/politicians to seize power while plausibly denying having wanted this outcome). The window was probably short, though, and opened only a bit after ’42. I’d guess any time between the Battle of Stalingrad (where Germany stopped winning) and the Battle of Kursk (which made Soviet victory inevitable) should’ve worked—everyone involved should rationally prefer a white peace to the very real possibility of a bloody stalemate. Before that, Germany would not accept. Afterwards, the Soviets wouldn’t.
Yup. Layer 8 issues (i.e., the human at the keyboard) are a lot harder to prevent than even Layer 1 (physical) issues :)
While air gaps are probably the closest thing to actual computer security I can imagine, even that didn’t work out so well for the guys at Natanz… And once you have systems on both sides of the air gap infected, you can even use esoteric techniques like ultrasound from the internal speaker to open up a low-bandwidth connection to the outside.
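Just to illustrate how little machinery such a channel needs, here is a minimal sketch of a near-ultrasonic FSK modulator. Every parameter in it (frequencies, bit rate, sample rate) is an assumption I made up for illustration, not taken from any real attack:

```python
import numpy as np
import wave

# Illustrative parameters only; real covert channels are far more elaborate.
SAMPLE_RATE = 48_000      # Hz; most consumer sound hardware manages this
F0, F1 = 18_500, 19_500   # tone frequencies for bits 0 and 1 (near-ultrasound)
BIT_DURATION = 0.05       # seconds per bit -> 20 bit/s, i.e. very low bandwidth

def modulate(bits: str) -> np.ndarray:
    """Encode a bit string as a sequence of near-ultrasonic tones (FSK)."""
    t = np.arange(int(SAMPLE_RATE * BIT_DURATION)) / SAMPLE_RATE
    tones = [np.sin(2 * np.pi * (F1 if b == "1" else F0) * t) for b in bits]
    return np.concatenate(tones)

signal = modulate("10110010")

# Write as 16-bit mono WAV, playable through an ordinary internal speaker.
with wave.open("covert.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```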
And some people would like to make it sit down and write “I will not conjure up what I can’t control” a thousand times for this. But I, for one, welcome our efficient market overlords!
Where did you get the impression that European countries do this on a large enough scale to matter*? There are separated bike paths in some cities, but they tend to end abruptly and lead straight into traffic at places where nobody expects cyclists to appear, or show similar acts of genius in their design. If you photograph just the right sections, they definitely look neat. But integrating car and bike traffic in a crowded city is a non-trivial problem, especially in Europe, where roads tend to follow winding goat paths from the Dark Ages and are already way too narrow for today’s traffic levels.
While the plural of anecdote is not data: two of my friends suffered serious head trauma in bicycle accidents they never fully recovered from (without a helmet, they’d likely be dead), while nobody I know personally was ever in a severe car accident. And a quick search also seems to indicate that cycling is about as dangerous as driving (with both of them paling in comparison to motorcycles...).
*with the possible exception of the Netherlands, but even for them I’m not sure.
I know you intended your comment to be a little tongue-in-cheek, but it is actual energy, measured in Joules, we’re talking about. Exerting willpower lowers blood glucose levels.
I don’t know of studies indicating that introverts drain glucose faster than extraverts when socializing, but that seems to be a pretty straightforward thing to measure, and I’d look forward to the results. I can at least tell from personal experience that I need to exert willpower to stay in social situations (especially when there are lots of people close by or when it’s loud), and I’m a hardcore introvert. Also, from the observation that lots of people actually like to go to these places, while very few people enjoy activities that force them to exert willpower, I can conclude that not everyone feels about it the way I do.
There’s another argument I think you might have missed:
Utilitarianism is about being optimal. Instinctive morality is about being fail-safe.
Implicit in all decisions is a nonzero possibility that you are wrong. Once you take that into account, having some “hard” rules like not agreeing to torture here (or in other dilemmas), not pushing the fat guy on the tracks in the trolley problem, etc, can save you from making horrible mistakes at the cost of slightly suboptimal decisions. Which is, incidentally, how I would want a friendly AI to decide as well—losing a bit in the average case to prevent a really horrible worst case.
That rule alone would, of course, make you vulnerable to Pascal’s Mugging. I think the way to go here is to have some threshold at which you round very low (or very high) probabilities off to zero (or one) whenever the difference is small compared to the probability of you being wrong about the situation. Not only will this protect you against getting your decisions hacked, it will also stop you from wasting computing power on improbable outcomes. This seems to be the reason why Pascal’s Mugging usually fails on humans.
Both of these are necessary patches because we operate on opaque, faulty and potentially hostile hardware. One without the other is vulnerable to hacks and catastrophic failure modes, but both taken together are a pretty strong base for decisions that, so far, have served us humans pretty well. In two rules:
1) Ignore outcomes to which you assign a lower probability than to you being wrong/mistaken about the situation.
2) Ignore decisions with horrible worst-case scenarios if there are options with a less horrible worst case and a still acceptable average case (both rules are sketched in code below).
When both of these apply to the same thing, or this process eliminates all options, you have a dilemma. Try to reduce your uncertainty about 1) and start looking for other options in 2). If that is impossible, shut up and do it anyway.
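For concreteness, here is how the two rules might look as code. The option format, the error threshold, and the “horrible” cutoff are all illustrative assumptions of mine, not a worked-out decision theory:

```python
# Each option is a list of (probability, utility) outcomes.
P_I_AM_WRONG = 0.01  # rule 1: ignore outcomes less likely than my own error rate

def expected_utility(option):
    # Rule 1: drop outcomes below the error threshold (this is what defuses
    # Pascal's Mugging), then renormalize what's left.
    kept = [(p, u) for p, u in option if p >= P_I_AM_WRONG]
    total = sum(p for p, _ in kept)
    return sum(p * u for p, u in kept) / total if total else float("-inf")

def worst_case(option):
    kept = [u for p, u in option if p >= P_I_AM_WRONG]
    return min(kept) if kept else float("-inf")

def decide(options):
    # Rule 2: among options whose worst case isn't horrible, pick the best
    # average case. The cutoff is arbitrary here.
    HORRIBLE = -1000
    acceptable = [o for o in options if worst_case(o) > HORRIBLE]
    if not acceptable:
        return None  # dilemma: reduce uncertainty, or look for new options
    return max(acceptable, key=expected_utility)

# A mugger offering 10^10 utility with probability 10^-9 gets rounded away:
mugger = [(1e-9, 1e10), (1 - 1e-9, -1)]
walk_away = [(1.0, 0)]
print(decide([mugger, walk_away]) is walk_away)  # True
```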
Exactly. Stocks are almost always better long-term investments than anything else (if mixed properly; single points of failure are stupid). The point of mixing in “slow” options like bonds or real estate is that it gives you something to take money out of when stocks are low (and something to replenish when stocks are high). That may look suboptimal, but it still beats the alternatives of borrowing money to live on or selling off stocks you expect to rise mid-term. The simulation probably does a poor job of reflecting that.
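As a toy illustration of that withdrawal rule (all return numbers are invented, and this is a sketch of the mechanism, not investment advice):

```python
import random

random.seed(0)

# Toy model of "spend from the slow asset when stocks are down".
# Return distributions are made up purely for illustration.
stocks, bonds = 50_000.0, 50_000.0
YEARLY_SPENDING = 4_000.0

for year in range(30):
    stock_return = random.gauss(0.07, 0.18)  # volatile, higher mean
    bond_return = random.gauss(0.02, 0.03)   # slow and steady
    stocks *= 1 + stock_return
    bonds *= 1 + bond_return
    if stock_return < 0:
        bonds -= YEARLY_SPENDING    # stocks are low: spend from bonds
    else:
        stocks -= YEARLY_SPENDING   # stocks are high: spend from stocks
        # (a fuller model would also top the bond buffer back up here)

print(f"After 30 years: stocks={stocks:,.0f}, bonds={bonds:,.0f}")
```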
Intelligence is basically how quickly you learn from experience, so being smart should allow you to get to the same level with much less time put in (which seems to be what the OP is hinting at). I’d also expect diminishing returns, especially if you always socialize with the same (type of) people. At some point, each social group (or even every single person) becomes a skill of its own. Once your generic social skills are at an acceptable level, pick your specializations carefully. Life is too short to waste it on bad friends.
My thoughts exactly. The first commandment of multiclassing in 3rd is “Thou shalt not lose caster levels”. Also, Wizards are easily the most OP base class, if played well. Multiclassing them into anything without wizard spell progression is just a waste.
OTOH, using gestalt rules to make a Wizard//Rogue isn’t half bad, even if a little short on HP and proficiencies. I prefer Barbarian or even the much ridiculed Monk in place of the Rogue.
I suppose you already drew the obvious conclusion, but I still think it’s worth spelling out:
The key to people liking you is making sure they feel good when you’re around. Causality is secondary.
A quick google search found this:
Emma Chapman, Simon Baron-Cohen, Bonnie Auyeung, Rebecca Knickmeyer, Kevin Taylor & Gerald Hackett (2006) Fetal testosterone and empathy: Evidence from the Empathy Quotient (EQ) and the “Reading the Mind in the Eyes” Test, Social Neuroscience, 1:2, 135-148, http://dx.doi.org/10.1080/17470910600992239
I can’t find a citation for the whole story right now, but as I remember it, it goes something like this: When the first wave of testosterone hits a male fetus, it kills off well over 80% of the brain cells responsible for empathy and reading emotions. Which is not as bad as it sounds; some of them do grow back. And then comes puberty...
Only say things that can be heard. If you can anticipate that you are too many inferential steps away, you should talk about something else. Which means in this case: Be patient and build their knowledge from the bottom, not from the top.
If you have already started and notice the problem too late, yeah, you’re kinda screwed. The honest answer seems pretty rude, and not saying anything is worse. I’d probably try to salvage what I still can by saying something along the lines of “I know this is a complicated and confusing issue, and it takes a while to explain where I’m coming from*. I can point you to these resources if you’re really interested in the matter.” And not bring it up again unless they start it.
This allows you to drop a conversation that’s going nowhere, while they can research it if they want to or ignore it if they don’t while still saving face in both cases.
*Or, if it went really bad: “...and I suck at explaining.”—taking the blame for the failed communication can defuse the sting of making them feel stupid.
There is also something else going on here, which I realized after learning about personality types, especially Jung’s theories and the Myers-Briggs Type Indicator. One dimension separates along the primary mode of seeing the world (Sensing vs iNtuitive), with the former ones collecting individual facts and strictly following isolated rules, and the latter ones always looking for the generalized principle behind the facts and questioning the origin and sense of rules.
These two types have a lot of trouble understanding each other’s way of thinking and frequently get in each other’s hair; e.g. S types tend to interpret N types questioning rules out of curiosity as a personal attack on their way of life (especially so if accurate), while N types tend to dismiss criticism by S types as small-minded bean counting and accuse them of missing the forest for the trees.
Now, there are roughly four to six times as many S types as N types around, and on top of that most weak cases of N types tend to hide it so as not to seem too weird. On the other hand, abstract topics (natural sciences, Less Wrong) tend to attract N types. From this baseline (and your description) I infer that you are also one of the aliens. You can’t fundamentally alter your way of thinking to fit in (would you even want to?) - the best you can hope for is to find and befriend the other hidden aliens while trying to get along with the rest.
There’s also a nice TED talk on the matter. Just google “Weirdos, Misfits and You”. And you might like Eugene Ionesco’s “Rhinoceros”. It’s usually taken as a metaphor for something else, but I still find that it hits the mark pretty well. It’s also short and fun to read, so there’s no good excuse not to.
Then he asked the wrong question. Straight up asking “Ougi, why did you decide on a formal dress code when this apparently has no meaning for your teachings?” is a different question from “Does wearing robes make us a cult?”, and shows a different understanding of what the robes mean. The answer would still be deliberately confusing and enigmatic, but that’s kinda the whole point of a koan.
Danger, wild speculation ahead: I’d assume it has something to do with the saying “Engineers can’t lie.” Constantly experiencing that doing things which conflict with reality leads to failure, while at the same time hearing politicians lie pretty much every time they open their mouths and still get elected again and again (or at least not fail in some other way), might make quite a few of them seriously fed up with the current government in particular and humanity in general. Some less stable personalities might just want to watch the world burn at that point. Which should make them recruitable as terrorists, if you use the right sales pitch.
It’s probably one of the many useful functions of the court jester :)