Matthew C: Monadology.
Peter_de_Blanc
John, Stuart, let’s do the math:
H1: “the coin will come up heads 95% of the time.”
Whether a given coinflip is evidence for or against H1 depends not only on the value of that coinflip, but on what other hypotheses you are comparing H1 to. So let’s introduce...
H2: “the coin will come up heads 50% of the time.”
By Bayes’ Theorem (odds form), the odds conditional upon the data D are:
p(H1|D) / p(H2|D) = [p(H1) p(D|H1)] / [p(H2) p(D|H2)]
So when we see the data, our odds are multiplied by the likelihood ratio p(D|H1)/p(D|H2).
If D = heads, our likelihood ratio is:
p(heads|H1) / p(heads|H2) = .95 / .5 = 1.9.
If D = tails, our likelihood ratio is:
p(tails|H1) / p(tails|H2) = .05 / .5 = 0.1.
If you prefer to measure evidence in decibels, then a result of heads is 10 log10(1.9) ≈ +2.8 dB of evidence and a result of tails is 10 log10(0.1) = −10.0 dB of evidence.
The same is true regardless of how you group the coinflips: if you get nothing but heads, that is even stronger evidence for H1 than if you get 95% heads and 5% tails. This holds because we are comparing H1 only to hypothesis H2. If we introduce hypothesis H3:
H3: “the coin will come up heads 99% of the time.”
Then we can also measure the likelihood ratio p(D|H1) / p(D|H3).
Plugging in “heads” or “tails”, we get:
p(heads|H1) / p(heads|H3) = 0.95 / 0.99 = 0.9595…
p(tails|H1) / p(tails|H3) = 0.05 / 0.01 = 5.0
So a result of heads is about −0.18 dB of evidence for H1, and a result of tails is about +7.0 dB of evidence.
If you have a uniform prior on [0, 1] for the frequency of a heads, then you can use Laplace’s Rule of Succession.
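Here is a minimal sketch of those calculations in Python (my own illustration, not part of the original comment; it uses the hypothesis probabilities above, and the last line assumes the uniform prior just mentioned):

import math

def evidence_db(likelihood_ratio):
    # Evidence in decibels: 10 * log10 of the likelihood ratio.
    return 10 * math.log10(likelihood_ratio)

# Probability of heads under each hypothesis.
h1, h2, h3 = 0.95, 0.50, 0.99

print(evidence_db(h1 / h2))              # heads, H1 vs. H2: ~ +2.8 dB
print(evidence_db((1 - h1) / (1 - h2)))  # tails, H1 vs. H2: -10.0 dB
print(evidence_db(h1 / h3))              # heads, H1 vs. H3: ~ -0.18 dB
print(evidence_db((1 - h1) / (1 - h3)))  # tails, H1 vs. H3: ~ +7.0 dB

# Laplace's Rule of Succession: with a uniform prior on the heads
# frequency, after observing h heads in n flips, the probability of
# heads on the next flip is (h + 1) / (n + 2).
def rule_of_succession(h, n):
    return (h + 1) / (n + 2)

print(rule_of_succession(95, 100))  # ~0.94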
Barkley, it looks to me like Eli derived it using the sum and product rules of probability theory.
I remember at the AGIRI workshop in DC last year, Alexei Samsonovich talked about sorting a list of English words along two dimensions—“valence” and “arousal,” indicating some component of the emotional response which words evoke.
Maybe audiences respond to speeches by summing the emotion vectors of each word in the speech, rather than parsing sentences.
Quick test: who here is excited by the prospects of anthropic quantum computing?
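A toy sketch of that word-summing model (Python; the valence/arousal numbers are invented for illustration, not Samsonovich's data):

# Toy model: score a piece of text by summing per-word (valence, arousal)
# vectors instead of parsing its sentences. Lexicon values are made up.
lexicon = {
    "anthropic": (0.3, 0.7),
    "quantum": (0.4, 0.8),
    "computing": (0.2, 0.6),
}

def emotion_vector(text):
    valence = arousal = 0.0
    for word in text.lower().split():
        v, a = lexicon.get(word, (0.0, 0.0))
        valence += v
        arousal += a
    return valence, arousal

print(emotion_vector("anthropic quantum computing"))  # roughly (0.9, 2.1)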
Eli, you said:
An enormous bolt of electricity comes out of the sky and hits something, and the Norse tribesfolk say, “Maybe a really powerful agent was angry and threw a lightning bolt.” The human brain is the most complex artifact in the known universe. If anger seems simple, it’s because we don’t see all the neural circuitry that’s implementing the emotion. (Imagine trying to explain why Saturday Night Live is funny, to an alien species with no sense of humor. But don’t feel superior; you yourself have no sense of fnord.) The complexity of anger, and indeed the complexity of intelligence, was glossed over by the humans who hypothesized Thor the thunder-agent.
I think it’s worth noting that Norse tribesfolk already knew about human beings, so whatever model of the universe they made had to include angry agents in it somewhere.
Lee B, Gray Area: what if you had a proof that 2 + 2 = 3, and, although you seem to recall having once seen a proof that 2 + 2 = 4, you can’t remember exactly how it went?
Unknown Healer:
Maybe he means that his expected value for his lifespan diverges to +infinity.
(Me too.)
I would think that SIAI is a better investment than cryonics.
IIRC, Peter de Blanc told me that any consistent utility function must have an upper bound (meaning that we must discount lives like Steve suggests). The problem disappears if your upper bound is low enough. Hopefully any realistic utility function has such a low upper bound, but it’d still be a good idea to solve the general problem.
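A small sketch of the underlying issue (my own illustration, not from the comment): with a St. Petersburg-style lottery over lifespan, an unbounded utility function gives a divergent expectation, while a bounded one converges.

import math

# Illustrative lottery (an assumption for this sketch): with probability
# 2**-n you live 2**n years, for n = 1, 2, 3, ...
def partial_expected_utility(utility, terms):
    return sum(2.0 ** -n * utility(2.0 ** n) for n in range(1, terms + 1))

unbounded = lambda years: years                      # utility = lifespan
bounded = lambda years: 1 - math.exp(-years / 50.0)  # bounded above by 1

for terms in (10, 20, 40):
    print(terms,
          partial_expected_utility(unbounded, terms),  # grows without bound
          partial_expected_utility(bounded, terms))    # converges below 1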
Nick, please see my blog (just click on my name). I have a post about this.
bjk, see General Relativity.
Caledonian, in reply to the first half of your post: some of evolution’s designs are quite impressive, yes. They took billions of years to produce. Just wait until we’ve had a billion years to design stuff—then you’ll be really impressed.
Also, your taunting is not useful. Stop it.
Taka, if you don’t draw conclusions from simplified models, then you can’t make any decisions ever.
What is the difference between moral terminal values and terminal values in general? At first glance, the former considers other beings, whereas the latter may only consider oneself—can someone make this more precise?
Huh? Considering only oneself is less general than considering everything.
Certainly. But can you give a succinct way of distinguishing moral terminal values from other terminal values?
No. What other sorts of terminal values did you have in mind?
Josh, I would say that making oneself happy is a morality, and so is causing pain to others. It sure isn’t our morality. If you could find a short definition of our morality, I would be totally amazed.
Benoit, you assert that our use of real numbers leads to confusion and paradox. Please point to that confusion and paradox.
Also, how would your proposed number system represent pi and e? Or do you think we don’t need pi and e?
Geremiah: it’s worth understanding a problem before proposing a solution.
Recovering Irrationalist said:
I wouldn’t pick an omnipotent but equally ignorant me to be my best possible genie.
Right. It’s silly to wish for a genie with the same beliefs as yourself, because the system consisting of you and an unsafe genie is already such a genie.
Eli, you said:
In the superintelligent domain, as you say, violence is not an ontological category and there is no firm line between persuading someone with a bad argument and reprogramming their brain with nanomachines. In our world there is a firm line, however.
I don’t think there is such a firm line. I think argument shades smoothly into cult brainwashing techniques.
You should ask the greatest mathematician of the ancient world to work on FAI theory. If he solves the analogous problem, then when he explains his solution to you over the Chronophone, it’ll come out on your end as a design for an AI.