I’m not sure what to make of it, but one could run the motivating example backwards:
this time Daniel has been thinking about how his brain is bad at numbers and decides to do a quick sanity check.
He pictures himself walking along the beach after the oil spill, and encountering a group of people cleaning birds as fast as they can.
“He pictures himself helping the people, wading deep in all that sticky oil, and imagines how long he’d endure that. He quickly arrives at the conclusion that he doesn’t really care that much about the birds, and would rather get away from that mess. His estimate of how much rescuing 1000 birds is worth to him is quite low.”
What can we derive from this if we shut up and calculate? If his value for rescuing 1000 birds is $10, then 1 million birds still come out to $10,000. But it could now be zero, if not negative (he’d feel he should be paid for saving the birds). Does that mean that, if we extrapolate, he should strive to eradicate all birds? Surely not.
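The extrapolation here is just multiplication, which makes the failure mode easy to see. A minimal sketch (the dollar figures are the hypothetical ones from this example, not anyone’s actual values):

```python
def extrapolate_linearly(value_per_batch, batch_size, total):
    """Scale a care-o-meter reading for one batch up to the full population."""
    return value_per_batch * (total / batch_size)

# Daniel's hypothetical reading: rescuing 1000 birds feels worth $10.
print(extrapolate_linearly(10, 1_000, 1_000_000))   # -> 10000.0, i.e. $10,000

# But if the imagined mess drives the reading to zero or below, the same
# multiplication dutifully outputs zero or negative value for all birds:
print(extrapolate_linearly(-5, 1_000, 1_000_000))   # -> -5000.0
```

The multiplication is only as meaningful as the reading fed into it; a context-dependent reading makes the extrapolated total context-dependent too.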
It appears to mean that our care-o-meter plus system-2 multiplication gives meaningless answers.
Our empathy towards other beings depends in large part on socialization and context. Taking it out of its ancestral environment is bound to cause problems that I fear individuals can’t solve. But maybe societies can.
That sounds like a failure of the thought experiment to me. When I run the bird thought experiment, it’s implicitly assumed that there is no transportation cost into or out of the thought experiment, and the negative aesthetic cost of imagining myself in the mess is filtered out. The goal is to generate a thought experiment that helps you identify the “intrinsic” value of something small (not really what I mean, but I’m short on time right now; I hope you can see what I’m pointing at), and obviously mine aren’t going to work for everyone.
(As a matter of fact, my actual “bird death” thought experiment is different from the one described above, and my actual value is not $3, and my actual cost per minute is nowhere near $1, but I digress.)
If this particular thought experiment grates on you, you may consider other thought experiments, like considering whether you would prefer your society to produce an extra Bic lighter or an extra bird-cleaning on the margin, and so on.
That sounds like a failure of the thought experiment to me.
You didn’t give details on how to set up the thought experiment (or how not to). I took it to mean ‘your spontaneous valuation when imagining the situation’ followed by an objective ‘multiplication’. Now, my reaction wasn’t one of aversion, but I tried to think of possible reactions and what would follow from them.
The goal is to generate a thought experiment that helps you identify the “intrinsic” value of something small. But the ‘intrinsic’ value appears to depend heavily on the setup of the thought experiment. And if humans nonlinearly value small things more than large or many things, one can hack the valuation by constraining the thought experiment to only small things.
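One way to make the ‘hack’ concrete: suppose the spontaneous care-o-meter reading grows only logarithmically with the number of birds (a common model of scope insensitivity; the functional form and the scale factor here are assumptions for illustration). Then anchoring the thought experiment on a small batch inflates the linear extrapolation enormously compared to a direct reading on the large number:

```python
import math

def felt_value(n_birds, scale=3.0):
    """Hypothetical care-o-meter reading: logarithmic in the number of birds
    (illustrating scope insensitivity; not a measured curve)."""
    return scale * math.log10(n_birds)

# Anchor on a small batch of 10 birds, then multiply up linearly:
per_bird_small = felt_value(10) / 10
extrapolated = per_bird_small * 1_000_000    # about 300000

# Versus reading the care-o-meter directly at one million birds:
direct = felt_value(1_000_000)               # 18.0
```

Which of the two numbers counts as the ‘intrinsic’ value is exactly what the setup of the thought experiment decides.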
Nothing wrong with mind hacks per se; I have read your productivity post. But I don’t think they help in establishing ‘intrinsic’ value. For personal self-modification (motivation), they seem to work nicely.