I live in St. Petersburg. Unfortunately, that’s Florida, not Russia. Which is a shame, because I’m always impressed by your comments, and usually learn something. Спасибо (thank you).
khafra
One of the goals I’ve seen emerging here is to build effective, rational groups. If they’re so fragile they can’t survive emotional engagement, or even seeing each other’s photos, that’s a good thing to find out sooner rather than later.
Great stat visualization and breakdown! But it’s not in first normal form: there are two Bellevues, and the St. Petersburg entry doesn’t specify whether it’s Florida (which has a blob in the St. Petersburg/Tampa area, and no entry for Tampa) or Russia (which seems to be the origin of a few prominent LWers). I’m not sure of the most efficient way to resolve the ambiguities.
Or developer-friendly, at any rate—but I must admit, frappr’s AFLAX interface isn’t the most stable on Linux.
St. Petersburg here, so I’m excited about hearing from Mr. Vassar in Sarasota, Orlando, and possibly Tampa.
Reminds me of The User Illusion, which adds that consciousness has an astoundingly low bandwidth—around 16 bits per second, some 6 orders of magnitude lower than what the senses transmit to the brain.
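As a rough sanity check on that “6 orders of magnitude” figure (assuming the commonly cited estimates from Nørretranders’ book: roughly 16 bits/s of conscious bandwidth versus roughly 11 million bits/s of total sensory input—both figures are estimates, not measurements):

```python
import math

conscious_bps = 16          # estimated conscious bandwidth, bits per second
sensory_bps = 11_000_000    # estimated total sensory input, bits per second

ratio = sensory_bps / conscious_bps
orders_of_magnitude = math.log10(ratio)
print(orders_of_magnitude)  # about 5.8, i.e. roughly 6 orders of magnitude
```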
Some googling around yielded a PDF about a controversial use of Bayes in court. The controversy seems to center on using one probability distribution on both sides of the equation. Lesser complaints include mixing in a frequentist test without a good reason.
I see sketerpot’s story less as an arbitrary change in beliefs backfilled by rationalizations, and more as him learning that he can change his beliefs in such a fundamental way and then exploring beliefs with epistemic best practices in mind.
But that might just be because it’s also my story.
With a somewhat valuable but straightforward comment, an upvote with no further discussion is optimal, because both the author and the readers understand why it’s good.
With a worthless but ingenuously written comment, the readers gain nothing from further discussion, but commentary helps the author to more easily discover his error. Do what your decision theory requires regarding the good of the many vs. the good of the few.
I would amend this suggestion to opening a new thread when the current one reaches a number significantly lower than 500. On the last open thread, 19 of 20 comments in the first 500 were replies to other comments.
Outlawing AI research was successful in Dune, but unsuccessful in Mass Effect. But I’ve never seen AI research fictionally outlawed until it had done actual harm, and I’ve seen no reason to expect a different outcome in reality. It seems a very unlikely candidate for the type of moral panic that tends to get unusual things outlawed.
NancyLebovitz wasn’t suggesting that the risks of UFAI would be averted by legislation; rather, that such legislation would change the research landscape, and make it harder for SIAI to continue to do what it does—preparation would be warranted if such legislation were likely. I don’t think it’s likely enough to be worth dedicating thought and action to, especially thought and action which would otherwise go toward SIAI’s primary goals.
Some Jains and Buddhists infer that plants can experience suffering. The stricter Jain diet avoids vegetables that are harvested by killing the plant, like carrots and potatoes, in favor of fruits and grains that come voluntarily or from already-dead plants.
Your personally being inconvenienced by the heat death of the universe is even less likely than winning the Powerball lottery; if you wouldn’t spend $1 on a lottery ticket, why spend $1 worth of time worrying about the limits of entropy? Sure, it’s the most unavoidable of existential risks, but it’s vanishingly unlikely to be the one that gets you.
I don’t mean to suggest that plants are clearly sentient, just that it’s plausible, even for a human, to have a coherent value system which attempts to avoid the suffering of anything which exhibits preferences.
A redditor in r/Anarchism just posted a semi-scholarly article on this topic.
It should be safe to use on Philip K. Dick fan forums.
Professor Mordin Solus solves marginal cases by refusing to experiment on any species with at least one member capable of calculus, which is a bit different from the criticized “argument from species normality.”
That sounds like a reasonable conclusion—compared to an intelligence capable enough of introspection and planning to make a friendly AI, the overwhelming majority of my actions arise purely from unreasoning instinct.
An insular and hidebound community falls into the class of problems a young site hopes it’s lucky enough to be faced with. I’ve been an OB reader for about a year, and while I never claim to represent these sites, I do correct unambiguous biases where the readership seems receptive, and direct people here who seem particularly ready (the last was a Randi conference-goer: another young white male working on a postgrad law degree. But I’m good at bringing girls to my martial arts class, so I’ll try that here).