Eliezer (who appears to only have a single name, like Prince or Jesus)
Mr. Jesus H. Christ is a bad example. Also there’s this.
Suppose I believe strongly that violent crime rates are soaring in my country (Canada), largely because I hear people talking about “crime being on the rise” all the time, and because I hear about murders on the news. I did not reason myself into this position, in other words.
It looks to me like you arrived at this position by weighing the available evidence. In other words, you reasoned yourself into it. On a second reading I see you don’t have a base rate for how much violent crime appears on the news in peaceful countries, and that you derived a high absolute level from a high[er than you’d like] rate of change. But you’ve shown a willingness to reason, even if you reasoned poorly (as poorly as I do when I’m not careful. Scary!) So I think jooyus’ quote survives.
Although it’s late, I’d like to say that XiXiDu’s approach deserves more credit and I think it would have helped me back when I didn’t understand this problem. Eliezer’s Bayes’ Theorem post cites the percentage of doctors who get the breast cancer problem right when it’s presented in different but mathematically equivalent forms. The doctors (and I) had an easier time when the problem was presented with quantities (100 out of 10,000 women) than with explicit probabilities (1% of women).
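Both presentations compute the same posterior, and a quick sketch makes the equivalence concrete. The 1% / 80% / 9.6% figures below are the ones usually quoted for this problem, so treat them as an assumption rather than a citation of the post:

```python
# Mammography problem, computed two ways: directly from probabilities,
# and from counts of women (the form the doctors found easier).

def posterior_from_probabilities(prior, true_pos, false_pos):
    """P(cancer | positive) via Bayes' Theorem on explicit probabilities."""
    evidence = prior * true_pos + (1 - prior) * false_pos
    return prior * true_pos / evidence

def posterior_from_counts(population, prior, true_pos, false_pos):
    """Same answer, phrased as quantities: 100 out of 10,000 women, etc."""
    sick = population * prior                # 100 women with cancer
    healthy = population - sick              # 9,900 women without
    sick_positive = sick * true_pos          # 80 sick women test positive
    healthy_positive = healthy * false_pos   # ~950 healthy women test positive
    return sick_positive / (sick_positive + healthy_positive)

p1 = posterior_from_probabilities(0.01, 0.8, 0.096)
p2 = posterior_from_counts(10_000, 0.01, 0.8, 0.096)
print(round(p1, 3), round(p2, 3))  # both ~0.078
```

Either way the answer is about 7.8%, but in the counts version the ~950 healthy positives are something you can picture swamping the 80 sick ones.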
Likewise, thinking about a large number of trials can make the notion of probability easier to visualize in the Monty Hall problem. That’s because running those trials and counting your winnings looks like something. The percent chance of winning once does not look like anything. Introducing the competitor was also a great touch since now the cars I don’t win are easy to visualize too; that smug bastard has them!
Or you know what? Maybe none of that visualization stuff mattered. Maybe the key sentence is “[Candidate] A always stays with his first choice”. If you commit to a certain door then you might as well wear a blindfold from that point forward. Then Monty can open all 3 doors if he likes and it won’t bring your chances any closer to 1⁄2.
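Running the trials and counting your winnings can be sketched in a few lines; this is just the standard simulation, not anything specific to the quoted formulation:

```python
import random

# Monty Hall by brute force: many trials, count the wins.

def play(switch, trials=100_000, rng=random.Random(0)):
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # Monty opens a goat door that isn't your pick.
        monty = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != monty)
        wins += (pick == car)
    return wins / trials

print(play(switch=False))  # ~1/3: your first pick decided everything
print(play(switch=True))   # ~2/3
```

Note that in the `switch=False` branch the code after the first pick never touches `pick` again, which is exactly the blindfold point: once you commit, nothing Monty does can move your chances.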
The number of upvotes indicates popularity, not quality. I just upvoted Doug’s comment but that doesn’t mean I think it’s 8 times better than josh’s comment.
that’s why grocery stores design their floor layouts so that you can’t help but notice the delicious rows of candy bars while you’re trapped in the checkout line. no escape!
In theory your escape would be a competing supermarket that hides their candy bars to attract your business.
do the rest of you actually find the choice of 1A clearly intuitive?
I chose 1B. I seem to be an outlier in that I chose 1B and 2B and did no arithmetic.
Upvoted; new vocabulary.
Finally, Lucas implicitly assumes that if the mind is a formal system, then our “seeing” a statement to be true involves the statement being proved in that formal system.
To me this seems like the crux of the issue (in fact, I perceive it to be the crux of the issue, so QED). Of course there are LW posts like Your Intuitions are not Magic, but surely a computer could output something like “arithmetic is probably consistent for the following reasons...” instead of a formal proof attempt if asked the right question.
Or, if one of the kids is Eliezer Yudkowsky, you can write Maxwell’s equations and say “simple”, then write a program simulating Thor and say “not simple”.
based upon the expectation set upon the observance of subsequent facts, at some later date, ~A could also end up being evidence for B
Here’s the contradiction that arises if A and ~A are both evidence for the same thing. You could tell your spouse “Go up and check if little Timmy went to bed”. Before ze comes back you already have an estimate of how likely Timmy is to go to bed on time (your prior belief). But then your spouse, who was too tired to climb the stairs, comes back and tells you “Little Timmy may or may not have gone to bed”. Now, if both of those possibilities would be evidence of Timmy’s staying up late then you should update your belief accordingly. But how can you do that without receiving any new information?
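The Timmy example is conservation of expected evidence in miniature: your prior must equal the average of your possible posteriors, weighted by how likely you are to hear each answer. The numbers below are made up purely for illustration:

```python
# Conservation of expected evidence: P(B) = P(B|A)P(A) + P(B|~A)P(~A).
# If hearing A AND hearing ~A would both raise your belief in B,
# this equation cannot hold -- your prior would have to already be
# at least as high as the smaller posterior.

def prior_from_posteriors(p_a, post_given_a, post_given_not_a):
    """Recover P(B) as the expectation of the posterior over the answer."""
    return post_given_a * p_a + post_given_not_a * (1 - p_a)

# Suppose hearing A would take you to 0.9 and hearing ~A to 0.6,
# each answer being equally likely. Then your prior was already:
prior = prior_from_posteriors(p_a=0.5, post_given_a=0.9, post_given_not_a=0.6)
print(prior)  # 0.75 -- strictly between the two posteriors, as it must be
```

So "both answers are evidence of staying up late" is impossible unless you were already that suspicious before asking.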
″...need to make billions of sequential self-modifications when humans don’t need to” to do what? Exist, maximize utility, complete an assignment, fulfill a desire...? Some of those might be better termed as “wants” than “needs” but that info is just as important in predicting behavior.
2) Ask myself what I would differentially expect to observe if ghosts existed or didn’t, and look for those things
The tricky part about this is establishing how much weird stuff you’d expect to see in the absence of ghosts. There will always be unexplained phenomena, but how many is too many?
I like the premise. Last month’s Douglas Hofstadter quote comes to mind. Some problems:
At some point, a young person asks you how some simple loops of electrical signals can engender music and conversations… you insist that your science is about to crack that problem at any moment.
Why would I insist this? I don’t even know how the electrical signals (the what?!) change the volume. I just know how to make the wires change the volume, and I know how to make them change the music too.
You would conclude that somehow the right configuration of wires engenders classical music and intelligent conversation. You would not realize that you’re missing an enormous piece of the puzzle.
Some inquisitive Bushman I turned out to be. This is still a very magical radio.
Also, I think a clever Bushman could figure out that the radio is transmitting sounds from somewhere else. It is the reality after all so there are clues. He hears a person talking when no one’s there; the circuitry is too simple to write symphonies and simulate most human discussion; the radio doesn’t work in caves...
I took “Harry’s parents come to Hogwarts” as a completely insane move
I did too at first, but when Harry reads the follow-up letter from his father we see that it turned out for the best.
At least a few months.
Steven Landsburg at TBQ has posted a seemingly elementary probability puzzle that has us all scratching our heads! I’ll be ignominiously giving Eliezer’s explanation of Bayes’ Theorem another read, and in the meantime I invite all you Bayes-warriors to come and leave your comments.
Here is some verse about steelmanning I wrote to the tune of Keelhauled. Compliments, complaints, and improvements are welcome.
*dun-dun-dun-dun
Steelman that shoddy argument
Mend its faults so they can’t be seen
Help that bastard make more sense
A reformulation to see what they mean
it seems to me that almost every “The AI is an unfriendly failure” story begins with “The Humans are wasting too many resources, which I can more efficiently use for something else.”
Really? I think the one I see most is “I am supposed to make humans happy, but they fight with each other and make themselves unhappy, so I must kill/enslave all of them”. At least in Hollywood. You may be looking in more interesting places.
Does your AI have an obvious incentive to help people below the median energy level?
Keep in mind this is a hypothetical character behaving in an unrealistic and contrived manner. If she doesn’t heed social norms or effective communication strategies then there’s nothing we can infer from those considerations.
I presume Rokia was able to buy a hybrid and some prime real estate after all this.