No, not absurd. I was worried that we would never get to the point where we actually “enjoy life” as human beings.
Don’t trust reasoning. (Or, at least, don’t trust the reasoning of people who disagree with me.)
No, that’s not what I wanted to argue. I wrote in the post that we should continue to use our best methods and try to solve friendly AI. What I said is that we should be careful and discount some of the implied utility.
Take, for example, the use of Bayes’ theorem. I am not saying that we shouldn’t use it; that would be crazy. What I am saying is that we should be careful in how we use such methods.
If, for example, you use probability theory to update on informal arguments or anecdotal evidence, you are still using your intuition to assign weight to the evidence. Using math and numeric probability estimates might make you unjustifiably confident of your results, because you mistakenly believe that you no longer rely on your intuition.
I am not saying that we shouldn’t use math to refine our intuition. What I am saying is that we can still be wrong by many orders of magnitude as long as we are applying our heuristics in an informal setting rather than evaluating data supplied by experimentation.
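A minimal sketch of how that can happen (my own toy numbers, not anything from the post): when an informal estimate is the product of several intuitively judged factors, a modest bias in each factor compounds multiplicatively, so the final figure can be off by orders of magnitude even though no single judgment looks unreasonable.

```python
# Toy illustration (hypothetical numbers): a quantity estimated as a
# product of five intuitively judged factors.
true_factors = [0.5, 0.2, 0.1, 0.05, 0.01]

def estimate(factors, bias):
    """Multiply the factors, each mis-judged by a constant multiplicative bias."""
    result = 1.0
    for f in factors:
        result *= f * bias
    return result

true_value = estimate(true_factors, bias=1.0)
optimistic = estimate(true_factors, bias=3.0)  # each factor judged 3x too high

# A mere 3x bias per factor compounds to 3**5 = 243, i.e. the final
# estimate is more than two orders of magnitude too high.
print(true_value)
print(optimistic / true_value)
```

Data from experimentation would pin down the individual factors; intuition applied informally rarely does.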
Take this example. Julia Galef wrote:

But then I thought about Bayes’ rule and realized I was wrong — even a convincing-sounding “yes” gives you some new information. In this case, H = “He thinks I’m pretty” and E = “He gave a convincing-sounding ‘yes’ to my question.” And I think it’s safe to assume that it’s easier to sound convincing if you believe what you’re saying than if you don’t, which means that P(E | H) > P(E | not-H). So a proper Bayesian reasoner encountering E should increase her credence in H.
But by how much should a proper Bayesian reasoner increase her credence in H? Bayes’ rule only tells us by how much given the inputs. And those inputs — the prior P(H) and the likelihoods P(E | H) and P(E | not-H) — are often filled in by our intuition.