They came impressively close considering they didn’t have any giant shoulders to stand on.
Snowyowl
Frugality and working from finite data
I made a prediction with sha1sum 0000000000000000000000000000000000000000. It’s the prediction that sha1sum will be broken. I’ll only reveal the exact formulation once I know whether it was true or false.
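(The joke: a digest of forty zeros would require a preimage attack, i.e. SHA-1 actually being broken.) The commit-and-reveal scheme being parodied can be sketched in a few lines of Python; the function names here are my own, not anything from the thread:

```python
import hashlib

def commit(prediction: str) -> str:
    """Publish this digest now; reveal `prediction` later to prove
    what you had written down all along."""
    return hashlib.sha1(prediction.encode("utf-8")).hexdigest()

def verify(prediction: str, digest: str) -> bool:
    """Anyone can check the revealed text against the published digest."""
    return commit(prediction) == digest

digest = commit("SHA-1 will be broken.")
print(digest)                                    # publish this in advance
print(verify("SHA-1 will be broken.", digest))   # check on reveal
```

Publishing the digest commits you to the text without revealing it; hedging is impossible afterwards because any change to the prediction changes the hash.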
I think this conversation just jumped one of the sharks that swim in the waters around the island of knowledge.
Blues, Greens and abortion
In Dirk Gently’s universe, a number of everyday events involve hypnotism, time travel, aliens, or some combination thereof. Dirk gets to the right answer by considering those possibilities, but we probably won’t.
I used to annoy my little sister by reading up on simple games with mathematically perfect strategies, then beating her every single time. Now she refuses to play with me. (E.g. Nim, and variations thereof. If you don’t know how to play Nim, it’s actually a pretty good example for this thread. The perfect strategy requires a certain amount of mental arithmetic and is non-obvious, so if you don’t know it the game is pretty playable.)
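For anyone curious about the perfect strategy alluded to here: in normal-play Nim you win by always leaving a position whose XOR of pile sizes (the "nim-sum") is zero. A minimal sketch in Python:

```python
from functools import reduce
from operator import xor

def nim_sum(piles):
    """XOR of all pile sizes. Zero means the player to move
    loses against perfect play."""
    return reduce(xor, piles, 0)

def winning_move(piles):
    """Return (pile_index, new_size) reaching a zero nim-sum,
    or None if every move loses."""
    s = nim_sum(piles)
    if s == 0:
        return None
    for i, p in enumerate(piles):
        if p ^ s < p:          # this pile can be shrunk to p ^ s
            return i, p ^ s
    return None

print(winning_move([1, 2, 4]))  # take one from the pile of 4
print(winning_move([1, 2, 3]))  # None: nim-sum is already zero
```

The mental arithmetic in the parent comment is exactly this XOR, done in your head on the current pile sizes.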
Edit: In retrospect, I’m a jerk.
Pretty sure that book is a collection of jokes (albeit pessimistic ones). Besides, your perceptions are fine, it’s your opinions that are worthless and misleading.

“Computed” means that only the input and output are important: as long as you can get from “2+2” to “4”, it doesn’t matter how you do it. “Computation” means that it’s the algorithm you use that is important.
If a computer can give the response a human would have to a given situation, despite that computer using an AI which operates on different principles from the human brain (simulating a universe containing a human brain is sufficient), is that computer thinking/conscious? If yes, then thought/consciousness can be computed. If no, then thought/consciousness is the computation.
This is related to the Turing Test, in which a computer is deemed conscious if it can produce responses indistinguishable from those of a human, regardless of the algorithm used.
If it’s a stupid idea and it works, then it isn’t stupid.
-- French Ninja, Freefall
Puts me in mind of “Rationalists should win”.
Wow. I thought I understood regression to the mean already, but the “correlation between X and Y-X” is so much simpler and clearer than any explanation I could give.
Hello. I’m Snowyowl, or Christopher if you’re interested in my real name. (Some people are.) I first discovered this site on Friday 14th August, when a friend of mine (who calls herself Kron) pointed me in the direction of the story “Harry Potter and the Methods Of Rationality”.
I don’t consider myself a rationalist, because that seems like a sure-fire way of feeling superior to 90% of the world. Also, I have realised in the past week that a lot of my beliefs and opinions are contradictory—in LessWrong lingo, my Bayesian network isn’t internally consistent. Of course, I had noticed that before now, but it didn’t seem an important problem before I read a few relevant blog posts. So no, I’m not a rationalist, and I hadn’t even heard the word until two weeks ago.
I’m a second-year mathematics undergrad at the time of writing; I had actually heard of Bayes’ Theorem years ago. I have also taken courses branching out into computing and physics. The techniques in your blog appeal to my way of thinking, since I enjoy mathematics and logic, and applying scientific methods to everyday life is a relatively new concept to me.
So hello, LessWrong! I look forward to many calm and reasonable debates!
we should assume there are already a large number of unfriendly AIs in the universe, and probably in our galaxy; and that they will assimilate us within a few million years.
Let’s be Bayesian about this.
Observation: Earth has not been assimilated by UFAIs at any point in the last billion years or so. Otherwise life on Earth would be detectably different.
By your own argument, it is unlikely that there are no (or few) UFAIs in our galaxy/universe; but if they do exist, it is unlikely that they would not have assimilated us already.
I don’t have enough information to give exact probabilities, but it’s a lot more likely than you seem to think that we will survive the next billion years without assimilation from an alien UFAI.
Personally, I think the most likely scenario is either that Earth is somehow special and intelligent life is rarer than we give it credit for; or that alien UFAIs are generally not interested in interstellar/intergalactic travel.
EDIT: More rigorously, let Uf be the event “Alien UFAIs are a threat to us”, and Ap be the event “We exist today” (anthropic principle). The prior probability P(Uf) is large, by your arguments, but P(Ap given Uf) is much smaller than P(Ap given not-Uf). Since we observe Ap to be true, the posterior probability P(Uf given Ap) is fairly small.
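The update in that EDIT can be made concrete with illustrative numbers (all three probabilities below are assumptions of mine for the sake of the arithmetic, not estimates from the thread):

```python
def posterior(p_uf, p_ap_given_uf, p_ap_given_not_uf):
    """Bayes' theorem: P(Uf | Ap) from the prior P(Uf) and the
    likelihoods P(Ap | Uf) and P(Ap | not-Uf)."""
    joint_uf = p_uf * p_ap_given_uf
    joint_not_uf = (1 - p_uf) * p_ap_given_not_uf
    return joint_uf / (joint_uf + joint_not_uf)

# Assumed numbers: a large prior that UFAIs are a threat, but our
# unassimilated existence is far more likely if they are not.
p = posterior(p_uf=0.9, p_ap_given_uf=0.001, p_ap_given_not_uf=0.5)
print(round(p, 3))
```

Even starting from a 90% prior, the likelihood ratio drags the posterior below 2%, which is the shape of the argument above.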
Isotope separation of deuterium, tritium, and He3 for fusion power.
Technically, that’s more easily done with a centrifuge, or perhaps distillation. But I agree with your other points. Carbon nanotubes, here we come!
You make a surprisingly convincing argument for people not being real.
Good thing too. At the time of writing you’d have lost 110 points of karma for this post, instead of only 11.
Everyone knows that clever people use longer words.
Er, I meant to say that it’s a commonly held belief that the length and obscurity of words used increases asymptotically with intelligence.
This sounds interesting and relevant. Here’s my input: I read this back in 2008 and I am summarising it from memory, so I may make a few factual errors. But I read that one of the problems facing large Internet companies like Google is the size of their server farms, which need cooling, power, space, etc. Optimising the algorithms used can help enormously. A particular program was responsible for allocating system resources so that the systems which were operating ran at near full capacity, and the rest could be powered down to save energy. Unfortunately, this program was executed many times a second, to the point where the savings it created were much less than the power it used. The fix was simply to execute it less often. Running the program took about the same amount of time no matter how many inefficiencies it detected, so it was not worth checking the entire system for new problems if you only expected to find one or two.
My point: To reduce resources spent on decision-making, make bigger decisions but make them less often. Small problems can be ignored fairly safely, and they may be rendered irrelevant once you solve the big ones.
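The fix described above can be sketched as a simple rate limit around the expensive pass; everything here (class name, interval, the no-op optimiser) is my own illustration, not Google’s actual system:

```python
import time

class Rebalancer:
    """Wraps an expensive optimisation pass so that it runs at most
    once per `min_interval` seconds, however often it is requested."""

    def __init__(self, optimise, min_interval):
        self.optimise = optimise
        self.min_interval = min_interval
        self.last_run = float("-inf")
        self.runs = 0

    def maybe_run(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_run >= self.min_interval:
            self.last_run = now
            self.runs += 1
            self.optimise()
            return True
        return False  # skipped: the savings wouldn't cover the cost

rb = Rebalancer(optimise=lambda: None, min_interval=60.0)
# 1000 trigger events in one simulated second -> only one actual pass
for tick in range(1000):
    rb.maybe_run(now=tick / 1000.0)
print(rb.runs)
```

The point carries over directly: the check costs roughly the same whether it finds one inefficiency or fifty, so batch the small problems and decide less often.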
Out of curiosity, which time was Yudkowsky actually telling the truth? When he said those five assertions were lies, or when he said the previous sentence was a lie? I don’t want to make any guesses yet. This post broke my model; I need to get a new one before I come back.
You’re participating in a flamewar here, though it’s a credit to you, EY, and LessWrong that nobody has yet posted in all caps. Tempers are running high all around; I recommend that one or all parties involved stop fighting before someone gets hurt. (read: is banned, has their reputation irrevocably damaged, or otherwise has their ability to argue compromised).
0.0001% is a huge amount of risk, enough that if one person in six thousand did what you just did, humanity would be doomed to near-certain extinction. Even murder doesn’t have such a huge effect. I think you overestimate the impact of your actions. Sending a few emails to a blogger has an impact I would estimate at 10^(-15) or less.
Certainly making this post has little purpose beyond inciting an argument. All you’ll do is polarise LessWrong and turn us against each other.