harry carries around a small boulder as a ring. the transfiguration could be Finite Incantatem'd before battle. although quirrell did say that most magic battles are actually ambushes.
“And,” her voice said, “if you want to break school rules or something, you can ask me about it, I promise I won’t just say no.”
perhaps eliezer is not outlining her faults but “fixing” them. by the end of ch75, hermione seems to have experienced a crisis of faith and become morally more like harry.
i’m involved with a startup. there’s so much well-intentioned bullshit, and it’s the founders who harm themselves more than they harm any user or investor.
i watched the video and felt something was wrong; then i read your article. you dissected it mercilessly and nailed it precisely.
precision is hard. you know, until i started studying math, i didn’t even know what “be precise” really meant.
awesome post, eliezer. you sound like quirrel.
indeed.
if we decouple the cost of caching into “was true but is false” and “was never true”, it may be that one dominates the other in likelihood. so maybe, the most efficient solution to the “cached thought” problem is not rethinking things, but ignoring most things by default. this, however, has the opportunity cost of false negatives.
i’ve personally found that i am very dependent on cached thoughts when learning/doing something new (not necessarily bad). like breadth over depth. what i do is try to force each cached thought to have a contradictory, or at least very different, twin.
e.g. though i have never coded in it, if i hear “C++”, i’ll (try to) think both “not worth it, too unsafe and error-prone” and “so worth it, speed and libraries”. whenever i don’t have enough data for a strong opinion, i’m ok with caching thoughts, as long as i know they are cached and i try to cache “contradictory twins” together.
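the caching analogy can be made literal. here’s a minimal python sketch of the “was true but is false” failure mode (the `facts` dict, `cached_opinion`, and the C++ opinion strings are all just hypothetical illustrations, not from any source):

```python
from functools import lru_cache

# hypothetical belief store standing in for "the world"
facts = {"C++": "unsafe, error-prone"}

@lru_cache(maxsize=None)
def cached_opinion(topic):
    # computed once on first lookup, then served from cache forever
    return facts[topic]

was = cached_opinion("C++")        # accurate at the moment of caching
facts["C++"] = "safer now, has smart pointers"
now = cached_opinion("C++")        # the world changed; the cache did not
assert was == now                  # "was true but is false", and nothing flags it
```

the cache never signals staleness on its own; you have to deliberately invalidate (rethink), which is exactly the cost being weighed above.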
src?
i like what you said about fiction perceived as distant reality. “a long time ago in a galaxy far, far away”.
have you succeeded in chaining these “one-inference-steps”?
that is, have you found you can take people with different beliefs / less domain knowledge, in casual conversation, and quickly explain things one inference at a time? i’ve found that i can only pull off a few of those, even if they follow and are delightfully surprised by each one, before i start sounding too weird.
i think by ‘emergence’ you just mean ‘implication’
60mph?
“we irrationally find present costs more salient than future costs”
Present Bias is not always irrational!
it can be rationalized (as in “find a rational cause”, not “make up an excuse”) as hedging against uncertainty. the future is never certain. our predictions about the future aren’t even probable. if you save your money instead of spending it, you might lose it all to madoff. if you don’t use that giftcard to some restaurant, your tastes might change and it won’t be worth anything.
in fact, Geometric Discounting maximizes average (undiscounted) utility if, at every moment in time, there is some probability that you will transition to a state where you won’t ever be able to get more utility. i think of it as the Apocalypse. then the discount is less about preference and more about an uncertain future.
even better, let’s say you know THAT there is some “Apocalypse probability”, but not WHAT it is. put a beta distribution on it, a natural prior for probabilities. then every day you wake up (i.e. the Coin Of Fate lands heads), it becomes a little more likely that the daily apocalypse is itself unlikely (think how improbable 365 heads in a row is from a fair coin; you’d be a fool not to lower your estimate of the odds of tails). update by bayes and you get laplace’s rule of succession, and Hyperbolically Discounted reward. it’s like the Anthropic Principle.
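the claim checks out in a few lines of python (a sketch; the function names and the Beta(1,1) choice are mine — the closed form uses the standard beta integral E[(1−p)^t] = B(a, b+t)/B(a, b)):

```python
def discount_known(p, t):
    # known daily apocalypse probability p: geometric discounting, (1-p)^t
    return (1 - p) ** t

def discount_beta(t, a=1.0, b=1.0):
    # unknown p with a Beta(a, b) prior: E[(1-p)^t] = B(a, b+t) / B(a, b),
    # which telescopes into the product below
    v = 1.0
    for k in range(t):
        v *= (b + k) / (a + b + k)
    return v

# with the uniform prior Beta(1, 1) this is exactly 1 / (1 + t):
# hyperbolic, not geometric, discounting
for t in (1, 5, 20):
    assert abs(discount_beta(t) - 1 / (1 + t)) < 1e-12

def p_survive_tomorrow(d, a=1.0, b=1.0):
    # laplace's rule of succession: after surviving d days,
    # posterior chance of surviving one more is (b + d) / (a + b + d)
    return (b + d) / (a + b + d)

assert p_survive_tomorrow(0) == 0.5            # total ignorance: coin flip
assert abs(p_survive_tomorrow(364) - 365 / 366) < 1e-12  # a year in: quite safe
```

so the longer you survive, the smaller your implied per-day discount, which is the signature of hyperbolic (not geometric) discounting.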
i had to put in the math there to say that present bias can be rational and logical, and this can be shown formally and precisely. but really, it comes from common sense. just because a behavioral economist tells you he’ll give you money tomorrow (and you know he’s telling the truth, since unlike in psychology, economics journals won’t accept deceptive experiments), doesn’t mean you’ll get the money (the world changes, e.g. he forgets or errs in mailing the check), and it doesn’t mean you’ll want the money (you change, e.g. you win the lottery). shit happens. people change.
having said all that, it’s safe to say that most of present bias is irrational. this is obvious from the frequent feelings of frustration with our present problems and anger against our past self for not solving them. at least, for me.
it’s just that i’ve been smelling this Fetish lately for hating on heuristics, biases, and intuition. but really, these things work well much of the time for many tasks. that’s often the first thing said in informed discussions, but i think people get caught up and forget it (not saying lukeprog did; i’m just making a big deal about one word he used).
(it’s like Lazy Evaluation. haskell is often fast despite it, not because of it. but sometimes you really didn’t need to compute something, and since everything is like a generator, you save big on computation.)
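the generator comparison can be made concrete in python rather than haskell (an illustrative sketch; `expensive_items` is a made-up stand-in for a costly computation):

```python
from itertools import count, islice

def expensive_items():
    # a lazy, effectively infinite stream: nothing runs until a value is demanded
    for n in count(1):
        yield n * n  # stand-in for an expensive per-item computation

# strict evaluation would have to build the whole (infinite) list up front;
# laziness computes only the five items actually consumed
first_five = list(islice(expensive_items(), 5))
print(first_five)  # [1, 4, 9, 16, 25]
```

that’s the upside the comment describes: when you end up not needing most of the work, deferring it until demanded saves the computation entirely.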
anyway, great post! (i stopped reading halfway through, because of the silliness of reading the internets to procrastinate my chores, and finished afterwards :) i need to keep rereading it and thinking about it until i figure out a way to remember and implement these things in my own mind.
ps check out “Strotz Meets Allais: Diminishing Impatience and the Certainty Effect”
even if poison were cheap, every fight has a risk. better to neither fight nor flee.
what? no. maybe only strong “compassionate”/”nurturing” females can keep groups of hundreds together without fragmentation.
you can only horcrux matter, not “minds”.
tl;dr
Roots of Empathy says caring for babies nurtures empathy.
right: Worst Argument In The World.
as 5-HTP is metabolized to serotonin and then to melatonin, i wonder how much of the effect comes from the melatonin itself.
not me. there was consent and the capacity for consent, so the kiss was wistful at worst.