I agree that the framing effect is more important than the reference-dependence of sense-data encoding. However, the loss of sense-data is not always just “adjusting for irrelevant background”, and is not always throwing away something we would later have decided is “irrelevant to our goals.”
When I first read the post, I thought you were going to say something along the lines of:
“Evolution has optimized us to strip away the irrelevant features when it comes to vision, since it’s been vital for our survival. But evolution hasn’t done that for things like abstract value, since there’s been no selection pressure for that. It’s bad that our judgments in cases like the K&T examples don’t work more like vision, but that’s how it goes”.
Indeed, saying “let’s make the problem worse” and then bringing up vision feels a bit weird. After all, vision seems like a case where our brain does things exactly right—it ignores the “framing effects” caused by changed lighting conditions and leaves invariant the things that actually matter.
I wrote a response here.
It would be nice to have an illuminating (no pun intended) example of a case where the adjustment to the ambient level of sense-data affects what people think they want. Without it, the whole section seems to detract from your point.
I wrote a response here.
But I’m not raising a puzzle about how people think they want things even when they are behavioristic machines. I’m raising a puzzle about how we can be said to actually want things even when we are behavioristic machines that, for example, exhibit framing effects and can’t use neurons to encode value for the objective intensities of stimuli.