gwern
Caledonian: I would suggest that a decent criterion would be whenever the outcome of not adopting the technology doesn’t mean death.
Eric: believe it or not, Highlights is still running Goofus and Gallant; it’s an impressively long run!
Because the last news your readers want to hear is that this person who is wealthier than you is also smarter, happier, and not a bad person morally.
Don’t forget, elites tend to be healthier and longer-lived too.
I agree with Ben Jones here; scott clark’s repetition of the old official history is wrong. As someone who spent far too much time on Star Wars when I was younger, I can heartily recommend The Secret History of Star Wars for a good look into how it all actually developed. (The full PDF used to be available at http://www.secrethistoryofstarwars.com/ but what’s left is still worth the reading.)
Chad: if you seriously think that Turing-completeness does not imply the possibility of sentience, then you’re definitely in the wrong place indeed.
Keith, Eliezer: from what I remember of Catholic doctrine (I was raised Catholic), breaking the seal of confession is a lesser sin than murder, since murder is a mortal sin: you go straight to hell for that one, no passing Go, especially as Jesus specifically said ‘do not kill’ is one of the strongest commandments. Breaking the seal, IIRC, is ‘just’ defrocking and excommunication (which may or may not condemn you to hell), and those are only undoable by the Pope.
However, mortal sins can be forgiven, and I recall that self-defense lessens the gravity of the offense. So given the hypothetical case of a sinner who is going to kill the priest, I think the thing to do would be to kill the sinner; but in the case of the sinner killing a bunch of other people & specifically excepting the priest (so he can’t claim self-defense as in the first case), that’s harder. I suppose it comes down to whether you think you can convince the Pope that you were justified in breaking the seal.
Anon of /jp/: thanks for pinpointing the Tsukihime reference for me. I was racking my brain—I was in that incredibly annoying state where I knew that I knew the reference, but I couldn’t quite recall it (‘Was it a Death Note reference? But that’s not right!’).
To those discussing randomized Quicksort: relevant to your discussion about more intelligent behaviour might be the Introsort algorithm.
See https://secure.wikimedia.org/wikipedia/en/wiki/Introsort
“Introsort or introspective sort is a sorting algorithm designed by David Musser in 1997. It begins with quicksort and switches to heapsort when the recursion depth exceeds a level based on (the logarithm of) the number of elements being sorted. It is the best of both worlds, with a worst-case O(n log n) runtime and practical performance comparable to quicksort on typical data sets. Since both algorithms it uses are comparison sorts, it too is a comparison sort.”
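A minimal sketch of the idea in Python (an illustration of the technique, not Musser’s original implementation; the depth cutoff of 2·log₂(n) is the conventional choice): recurse with quicksort, and if the recursion gets deeper than the cutoff, fall back to heapsort for that partition so the worst case stays O(n log n).

```python
import heapq
import math
import random

def introsort(a):
    """Sort the list a in place, introsort-style."""
    max_depth = 2 * max(1, int(math.log2(len(a)))) if a else 0
    _introsort(a, 0, len(a), max_depth)

def _introsort(a, lo, hi, depth):
    if hi - lo <= 1:
        return
    if depth == 0:
        # Recursion too deep: heapsort this slice (O(n log n) worst case).
        a[lo:hi] = heapq.nsmallest(hi - lo, a[lo:hi])
        return
    p = _partition(a, lo, hi)
    _introsort(a, lo, p, depth - 1)
    _introsort(a, p + 1, hi, depth - 1)

def _partition(a, lo, hi):
    # Lomuto partition with a random pivot, as in randomized quicksort.
    i = random.randrange(lo, hi)
    a[i], a[hi - 1] = a[hi - 1], a[i]
    pivot = a[hi - 1]
    store = lo
    for j in range(lo, hi - 1):
        if a[j] < pivot:
            a[j], a[store] = a[store], a[j]
            store += 1
    a[store], a[hi - 1] = a[hi - 1], a[store]
    return store
```

The “intelligent behaviour” here is just introspection: the algorithm monitors its own recursion depth and switches strategies when quicksort is evidently hitting a pathological input.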
Of course, the true upper limit might be much higher than current human intelligence. But if there exists any upper bound, it should influence the “FOOM” scenario: a 30-minute head start would then only mean arriving at the upper bound 30 minutes earlier.
Rasmus Faber: plausible upper limits for the ability of intelligent beings include such things as destroying galaxies and creating private universes.
What stops an Ultimate Intelligence from simply turning the Earth (and each competitor) into a black hole in those 30 minutes of nigh-omnipotence? Even a very weak intelligence could do things like just analyze the OS being used by the rival researchers and break in. Did they keep no backups? Oops; game over, man, game over. Did they keep backups? Great, but now the intelligence has just bought itself a good fraction of an hour (it just takes time to transfer large amounts of data). Maybe even more, depending on how untried and manual their backup system is. And so on.
I’m going to echo CatDancer: for me the most valuable insight was that a little information goes a very long way. From the example of the simulated beings breaking out to the Bayescraft interludes to the few observations and lots of cogitations in Three Worlds Collide to GuySrinivasan’s random-walk point, I’ve become more convinced that you can get a surprising amount of utility out of a little data; this changes other beliefs like my assessment of how possible AI rapid takeoff is.
Tim: but don’t prediction markets have a lot of benefits compared to stock markets? They terminate on usually set dates, they’re very narrowly focused (compare ‘will the Democrats win in 2008’ to ‘will GE’s stock go up on October 11, 2008’; there are so many fewer confounding factors for the former), and they’re easier to use.
Well, you don’t have to use the fake-money ones. Intrade and Betfair have always seemed perfectly serviceable to me, and they’re real money prediction markets.
On a related point, fake money could actually be good. There’s less motivation to bet what you really truly think, but not wagering real money means you can make trades on just about everything in that market—you aren’t so practically or mentally constrained. You’re more likely to actually play, or play more.
(Suppose I don’t have $500 to spare or would prefer not to risk $500 I do have? Should I not test myself at all?)
Nazgul: games like the ones on http://www.philosophersnet.com/games/ , which often come with discussions of contradictions or difficulties with the positions you take and links to further discussion?
(They’re simple games, but I like ’em anyway.)
WTF, dude? Not everything in life is about improving your rationality. Do you expect to become more rational after eating a hot dog? How about a peach?
I think you’re being a bit too skeptical here. Yes, I usually eat peaches because I like them and they’re good for my health. But I would, in fact, expect to become more rational after eating—especially if I was about to enter a supermarket to do my grocery shopping!
Well, if we’re going into history… I believe (despite being a northern Democrat) that the Civil War was fundamentally unjust. It makes a mockery of the principles of the Declaration of Independence if seceding states can simply be invaded.
(If slavery was an issue, then the North should’ve just bought out the South—likely would’ve been much cheaper than the actual war.)
I am a man capable of getting into details, I like details and if truth be told I’m rather good at it, I think details are important; so name one person who agreed with me other than superficially?
What ramifications and consequences of the ‘atoms are not identity’ belief do you think the upvoters of Eliezer are not thinking about? How is their acceptance superficial?
I would, however, suggest that they are lacking somewhat in humanity. There is such a thing as being altruistic beyond the human norm, and this is an example of it.
Reminds me of one of the 101 Zen Stories http://www.101zenstories.com/index.php?story=13 :
“Hello, brother,” Tanzan greeted him. “Won’t you have a drink?”
“I never drink!” exclaimed Unsho solemnly.
“One who does not drink is not even human,” said Tanzan.
“Do you mean to call me inhuman just because I do not indulge in intoxicating liquids!” exclaimed Unsho in anger. “Then if I am not human, what am I?”
“A Buddha,” answered Tanzan.
“...but recently saw an article pointing out that the average IQ of students from one of the Scandinavian countries (Denmark?) had increased measurably over the last 50 years.”
Isn’t that just the Flynn effect? It’s true of many more countries than just Denmark.
Latanius: Descartes doesn’t say anything about what the ‘I’ is. Perhaps you would understand it better if the formula was “Something exists”? I really don’t see how that can be objected to, except on grounds of vacuousness/triviality/tautologicality.
(Which ironically enough, is a reasonable translation of one of Parmenides’s chief claims, that “It is.”)