So if you’re giving examples and you don’t know how many to use, use three.

I’m not sure I follow. Could you give a couple more examples of when to use this heuristic?


Seems I’m late to the party, but if anyone is still looking at this, here’s another color contrast illusion that made the rounds on the internet some time back.

For anyone who hasn’t seen it before, knowing that it’s a color contrast illusion, can you guess what’s going on?

Major hint, in rot-13: Gurer ner bayl guerr pbybef va gur vzntr.

Full answer: Gur “oyhr” naq “terra” nernf ner gur fnzr funqr bs plna. Lrf, frevbhfyl.

The image was created by Professor Akiyoshi Kitaoka, an incredibly prolific source of crazy visual perception illusions.

Commenting in response to the edit…

I took the Wired quiz earlier but didn’t actually fill in the poll at the time. Sorry about that. I’ve done so now.

Remarks: I scored a 27 on the quiz, but couldn’t honestly check any of the four diagnostic criteria. I lack many distinctive autism-spectrum characteristics (possibly to the extent of being on the other side of baseline), but have a distinctly introverted/antisocial disposition.

A minor note of amusement: Some of you may be familiar with John Baez, a relentlessly informative mathematical physicist. He produces, on a less-than-weekly basis, a column on sundry topics of interest called This Week’s Finds. The most recent installment mentions topics such as using icosahedra to solve quintic equations, an isomorphism, described in terms of category theory, between processes in chemistry, electronics, thermodynamics, and other domains, and some speculation about applying category-theoretic constructs to physics.

Which is all well and good and worth reading, but largely off-topic. Rather, I’m mentioning this on LW because of the link and quotation Baez put at the end of the column, as it seemed like something people here would appreciate.

Go ahead and take a look, even if you don’t follow the rest of the column!

Ah, true, I didn’t think of that, or rather didn’t think to generalize the gravitational case.

Amusingly, that makes a nice demonstration of the topic of the post, thus bringing us full circle.

Similarly, my quick calculation, given an escape velocity high enough to walk and an object 10 meters in diameter, was about 7 × 10⁹ kg/m³. That’s roughly the density of electron-degenerate matter; I’m pretty sure nothing will hold together at that density without substantial outside pressure, and since we’re excluding gravitational compression here I don’t think that’s likely.
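For anyone who wants to check the arithmetic, here’s a quick sketch. It assumes a uniform-density sphere, and the ~10 m/s “walkable” escape velocity is my own illustrative choice, not something stated above:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def density_for_escape_velocity(v_esc, radius):
    """Density (kg/m^3) a uniform sphere of the given radius needs
    for its surface escape velocity to equal v_esc.

    From v_esc = sqrt(2GM/r) with M = (4/3) * pi * r^3 * rho:
        rho = 3 * v_esc^2 / (8 * pi * G * r^2)
    """
    return 3 * v_esc**2 / (8 * math.pi * G * radius**2)

# A 10 m diameter object (r = 5 m) with a ~10 m/s escape velocity,
# i.e. fast enough that walking won't fling you off:
rho = density_for_escape_velocity(10.0, 5.0)
print(f"{rho:.1e} kg/m^3")  # on the order of 10^9 kg/m^3
```

With those numbers the result comes out at roughly 7 × 10⁹ kg/m³, matching the figure above.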

Keeping a shell positioned would be easy; just put an electric charge on both it and the black hole. Spinning the shell fast enough might be awkward from an engineering standpoint, though.

I don’t think you’d be landing at all, in any meaningful sense. Any moon massive enough to make walking possible at all is going to be large enough that an extra meter or so at the surface will make a negligible difference in gravitational force, so we’re talking about a body spinning so fast that its equatorial rotational velocity is approximately orbital velocity (which is about 71% of escape velocity, since v_orb = v_esc/√2). So for most practical purposes, the boots would be in orbit as well, along with most of the moon’s surface.

Of course, since the centrifugal force at the equator due to rotation would almost exactly counteract weight due to gravity, the only way the thing could hold itself together would be tensile strength; it wouldn’t take much for it to slowly tear itself apart.
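A minimal sketch of the orbit/escape relationship, with made-up illustrative numbers for the moon’s mass and radius:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_speeds(mass, radius):
    """Circular orbital speed and escape speed at a body's surface."""
    v_orb = math.sqrt(G * mass / radius)      # v_orb = sqrt(GM/r)
    v_esc = math.sqrt(2 * G * mass / radius)  # v_esc = sqrt(2GM/r)
    return v_orb, v_esc

# Hypothetical small moon (numbers chosen for illustration only):
mass, radius = 7.3e19, 2.0e5  # kg, m
v_orb, v_esc = surface_speeds(mass, radius)

# Orbital speed is always 1/sqrt(2), about 71%, of escape speed:
print(v_orb / v_esc)  # ~0.7071, independent of mass and radius

# At equatorial rotation speed v_orb, the centrifugal acceleration
# v^2 / r exactly cancels the surface gravity g = GM / r^2:
g = G * mass / radius**2
print(v_orb**2 / radius, g)  # the two values coincide
```

The cancellation in the last two lines is why, at that spin rate, only tensile strength holds the body together.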

It’s an interesting idea, with some intuitive appeal. Also reminds me of a science fiction novel I read as a kid, the title of which currently escapes me, so the concept feels a bit mundane to me, in a way. The complexity argument is problematic, though—I guess one could assume some sort of per-universe Kolmogorov weighting of subjective experience, but that seems dubious without any other justification.

The example being race/intelligence correlation? Assuming any genetic basis for intelligence whatsoever, for there to be absolutely *no* correlation at all with race (or any distinct subpopulation, rather) would be quite unexpected, and I note Yvain discussed the example only in terms as uselessly general as the trivial case. Arguments involving the magnitude of differences, singling out specific subpopulations, or comparing genetic effects with other factors seem to quickly end up with people grinding various political axes, but Yvain didn’t really go there.

The laws of physics are the rules, without which we couldn’t play the game. They make it hard for any one player to win.

Except that, as far as thermodynamics goes, the game is rigged and the house always wins. Thermodynamics in a nutshell, paraphrased from C. P. Snow:

You can’t win the game.

You can’t break even.

You can’t stop playing.

At the Princeton graduate school, the physics department and the math department shared a common lounge, and every day at four o’clock we would have tea. It was a way of relaxing in the afternoon, in addition to imitating an English college. People would sit around playing Go, or discussing theorems. In those days topology was the big thing.

I still remember a guy sitting on the couch, thinking very hard, and another guy standing in front of him, saying, “And therefore such-and-such is true.”

“Why is that?” the guy on the couch asks.

“It’s trivial! It’s trivial!” the standing guy says, and he rapidly reels off a series of logical steps: “First you assume thus-and-so, then we have Kerchoff’s this-and-that; then there’s Waffenstoffer’s Theorem, and we substitute this and construct that. Now you put the vector which goes around here and then thus-and-so...” The guy on the couch is struggling to understand all this stuff, which goes on at high speed for about fifteen minutes!

Finally the standing guy comes out the other end, and the guy on the couch says, “Yeah, yeah. It’s trivial.”

We physicists were laughing, trying to figure them out. We decided that “trivial” means “proved.” So we joked with the mathematicians: “We have a new theorem—that mathematicians can prove only trivial theorems, because every theorem that’s proved is trivial.”

The mathematicians didn’t like that theorem, and I teased them about it. I said there are never any surprises -- that the mathematicians only prove things that are obvious.

Since when has being “good enough” been a prerequisite for loving something (or someone)? In this world, that’s a quick route to a dismal life indeed.

There’s the old saying in the USA:

*“My country, right or wrong; if right, to be kept right; and if wrong, to be set right.”*

The sentiment carries just as well, I think, for the universe as a whole. Things as they are may be very wrong indeed, but what does it solve to hate the universe for it? Humans have a long history of loving not what is perfect, but what is broken—the danger lies not in the emotion, but in failing to heal the damage. It may be a crapsack universe out there, but it’s still *our* sack of crap.

By all means, don’t look away from the tragedies of the world. Figuratively, you can rage at the void and twist the universe to your will, or you can sit the universe down and stage a loving intervention. The main difference between the two, however, is how you feel about the process; the universe, for better or worse, really isn’t going to notice.

Really, does it actually *matter* that something isn’t a magic bullet? Either the cost/benefit balance is good enough to warrant doing something, or it isn’t. Perhaps taw is overstating the case, and certainly there are other causes of akrasia, but someone giving disproportionate attention to a plausible hypothesis isn’t really evidence *against* that hypothesis, especially one supported by multiple scientific studies.

From what I can see, there’s more than sufficient evidence to warrant serious consideration for something like the following propositions:

Application of short-term willpower measurably expends some short-term biological resource

Willpower “weakens” as the resource is depleted, recovering over a longer time span

Resource expenditure correlates with reduced blood sugar concentration

Increasing blood sugar (temporarily?) restores resource availability

So, my questions are: If this is correct, what practical use could we make of the idea? What could we do as individuals or as a group to decide whether it’s useful enough to bother thinking about? Particularly in cases where willpower is needed mostly to *start* a task rather than continue it, if there’s a simple way to get a quick, short-term boost that might make the difference between several hours of productivity vs. akratic frustration, that’s significant!

As an aside, I recall seeing some studies indicating that there may be more general principles in play here, regarding the mind’s executive functions as a whole, but I don’t have citations on hand at the moment.

I thought the mathematical terms went something like this:

Trivial: Any statement that has been proven

Obviously correct: A trivial statement whose proof is too lengthy to include in context

Obviously incorrect: A trivial statement whose proof relies on an axiom the writer dislikes

Left as an exercise for the reader: A trivial statement whose proof is both lengthy and very difficult

Interesting: Unproven, despite many attempts

It’s said that “ignorance is bliss”, but that doesn’t mean knowledge is misery!

I recall studies showing that major positive/negative events in people’s lives don’t really change their overall happiness much in the long run. Likewise, I suspect that seeing things in terms of grim, bitter truths that must be stoically endured has very little to do with what those truths are.

Which is fair enough, I suppose, but it sounds bizarrely optimistic to me. We’re talking about a time span a thousand times longer than the *current age of the universe*. I have a hard time giving weight to any nontrivial proposition expected to be true over that kind of range.

It’s a reasonable point, if one considers “eventual cessation of thought due to thermodynamic equilibrium” to have an immeasurably small likelihood compared to other possible outcomes. If someone points a gun at your head, would you be worrying about dying of old age?

A nontrivial variant is also directed sarcastically at someone who lost badly (this seems to be most common where the ambient rudeness is high, e.g., battle.net).

Also, few ways are more effective at discovering flaws in an idea than to begin explaining it to someone else; the greatest error will inevitably spring to mind at precisely the moment when it is most socially embarrassing to admit it.

For what it’s worth, the credit score system makes a lot more sense when you realize it’s not about evaluating “this person’s ability to repay debt”, but rather “expected profit for lending this person money at interest”.

Someone who avoids carrying debt (i.e., paying interest) is no better a revenue source than someone who fails to pay entirely. The ideal lendee is someone who reliably and consistently makes payments with a maximal interest/principal ratio.

This is another one of those Hanson-esque “X is not about X-ing” things.