Is it equivalent to state that learning about the origins of your priors “screens off” the priors themselves?
Oscar_Cunningham
Scott Aaronson has a nice post about the differences between gravity and electromagnetism. It seems his thoughts were running along the same lines as yours when he wrote it; he asks almost all the same questions. http://www.scottaaronson.com/blog/?p=244
I imagine by declaring that they would power up the LHC iff the Green party won, thus forcing everyone who would vote Blue to come down with a fever on election day.
Surely that only works if the probability of winning a case depends only on the skill of the lawyers, and not on the actual facts of the cases. I imagine a lawyer with no training at all could unravel your plan and make it clear that your hobos had nothing to back up their case.
Also, being English myself, it hadn’t dawned on me that the loser-pays rule doesn’t apply everywhere. Having no such system at all seems really stupid.
It also occurs to me that hiring expensive lawyers under loser-pays is like trying to fix a futarchy: you lose nothing if you succeed, but you stand to lose a lot if you fail.
Being a science and maths geek, I’ve tended to dismiss a lot of philosophy as bullshit, and have only recently begun to realise that (some of) what I’ve dismissed is actually valid and interesting. Of course, one place where this effect is incredibly strong is when a parent is arguing with a child, i.e. the “teenagers think they know everything” syndrome.
If someone starts a post with “Rationality is wrong...” or similar, I’m much more likely to downvote it than if they start it with “I’ve got these scenarios where standard rationality techniques seem not to work...” To this extent at least the presentation of the ideas matters as much as the content. So I hope that these rules will cause people to present their ideas more cautiously, while still posting experimentally. If you are thinking of posting something controversial, it might be worth seeking advice from the other users who you think will be interested.
Basically, I think downvotes should work as an “utter stupidity” filter.
“A rat isn’t exactly seeking an optimum level of food, it’s seeking an optimum ratio of ventromedial to ventrolateral hypothalamic stimulation, or, in rat terms, a nice, well-fed feeling.”
So if I move my hand away from a hot pan, am I actually seeking to: “move my hand away from a hot pan” or
“avoid touching the pan” or
“avoid being burnt” or
“avoid pain receptors in my hand being activated” or
“avoid neural signals in my brain that correspond to pain” or
“avoid the feeling of pain”?
Someone needs to do some buck-stopping or else the master-slave model will turn into a master-slave1-slave2-slave3… model. Although come to think of it, that might be more correct. (EDIT: Note to self, line spacing is weird, I’m off to look in the wiki)
My parents are both vegetarian, and have been since I was born. They brought me up to be a vegetarian. I’m still a vegetarian. Clearly I’m on shaky ground, since my beliefs weren’t formed from evidence, but purely from nurture.
Interestingly, my parents became vegetarian because they perceived the way animals were farmed to be cruel (although they also stopped eating non-farmed animals such as fish). My rationalisation for not eating meat, however, is that the killing of animals is wrong (generalising from the belief that killing humans is worse than mistreating them). Since eating meat is not necessary to live, it must therefore be as bad as hunting for fun, which is much more widely disapproved of. (I’m not a vegan, and I often eat sweets containing gelatine. If asked to explain this, I would rationalise that eating these things causes the death of many fewer animals than actually eating, say, steak.)
But having read all of Eliezer’s posts, I now realise that I could have come up with that rationalisation even if eating meat were not wrong, and that I’m now in just as bad a position as a religious believer. I want a crisis of faith, but I have a problem… I don’t know where to go back to. There’s no objective basis for morality. I don’t know what kind of evidence I should condition on (I don’t know what would be different about the world if eating meat were good instead of bad). If religious people realise they have no evidence, they should go back to their priors. Because god has a tiny prior, they should immediately stop believing. I don’t know exactly what the prior on “killing animals is wrong” is, but I think it has a reasonable size (certainly larger than that for god), and I feel more justified in being vegetarian because of this. What should I do now?
Footnote: I probably don’t have to say this, but I don’t want arguments for or against vegetarianism, simply advice on how one should challenge one’s own moral beliefs. I’ve used “eating meat” and “killing animals” interchangeably in my post, because I think that they are morally equivalent due to supply and demand.
That’s an excellent point, and one I may not have spotted otherwise. Thank you.
Well, clearly we can assert anything we want, so the quote becomes:
That which is without evidence can be dismissed without evidence.
And we notice that evidence doesn’t change depending on whether you’re considering something for belief or dismissal, so the quote becomes:
That which is without evidence can be dismissed.
So Hitchens is really telling us that prior probabilities tend to be small, which is true since there are almost always many possible hypotheses that the probability mass is split between.
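To make the point concrete, here’s a minimal sketch (the number of hypotheses is made up for illustration) of why splitting probability mass among many competing hypotheses leaves each with a small prior:

```python
# If the total probability mass (1.0) must be shared among many mutually
# exclusive hypotheses, no single hypothesis can start out with a large prior.
num_hypotheses = 1000
prior_each = 1.0 / num_hypotheses  # uniform split, for simplicity

print(prior_each)  # 0.001
```

So, absent evidence that singles one hypothesis out, dismissing it costs you almost nothing.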
I think that anthropics is a useless distraction, but until I’ve worked out why it’s a useless distraction it still gets in the way of everything.
Could someone explain why this has been voted up so much? I didn’t find it particularly funny, or to have any non-trivial insight.
Anyone care to post evidence either way?
What’s the point of this? Surely there are more direct ways of doing a survey of how many users we have? Or are you just trying to encourage participation?
Agreed, nothing is lost by posting something in the open thread first, and then posting an expanded version if it generates interest. Personally, I’d like to see the idea expanded.
The problem being that we often find ourselves doing things for reasons other than the ones we think we do. Robin Hanson will tell you that.
The problem I can see with this idea is that the AI will extrapolate from its knowledge about the red wire to deduce things about the rest of the universe. Maybe it calculates that the laws of physics must work differently around the wire, so it builds a free-energy circuit around the wire. But the circuit behaves differently than expected, touches the red wire, and the AI dies.
Why have you split your claim between genders? Are these values naturally different between genders, or are the differences learned? In a society with large gender differences such as ours (or at least mine), it’s hard to separate the differences in values due to gender (if there even are any) from the learned behaviour of members of the different sexes.
Fair enough. But those particular differences?
In the same way, it’s hopeless to try to assign probabilities to events and do a Bayesian update on everything. But you can still take advice from theorems like “Conservation of expected evidence” and the like. Formalisations might not be good for specifics, but they’re good for telling you if you’re going wrong in some more general manner.
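Conservation of Expected Evidence is easy to check numerically: before observing a piece of evidence, your expected posterior must equal your current prior. A small sketch, with all probabilities made up for illustration:

```python
# Conservation of Expected Evidence: E[P(H|E)] over outcomes of E equals P(H).
prior_h = 0.3            # P(H)
p_e_given_h = 0.8        # P(E | H)
p_e_given_not_h = 0.2    # P(E | not H)

# Total probability of seeing the evidence.
p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)

# Bayesian updates for each possible observation.
post_if_e = p_e_given_h * prior_h / p_e               # P(H | E)
post_if_not_e = (1 - p_e_given_h) * prior_h / (1 - p_e)  # P(H | not E)

# Average the posteriors, weighted by how likely each observation is.
expected_posterior = post_if_e * p_e + post_if_not_e * (1 - p_e)

print(abs(expected_posterior - prior_h) < 1e-12)  # True
```

This is the sense in which the theorem is useful as a sanity check: if you expect to believe H more strongly after the experiment regardless of its outcome, you’ve gone wrong somewhere.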