I think that current technology generally doesn’t allow us to wirehead those drives very effectively. We target them a little bit via puppies, social media, and so on, but by and large our core values are still centered on, and derived from interacting with, other flesh-and-blood humans.
…And wow, this suddenly makes me feel much more concerned about where the whole “virtual friends” thing is going. Character.ai recently claimed that they were serving 20,000 queries per second. That’s a lot already, but in the future, presumably they and their competitors will use much better LLMs, and attach them to charismatic, expressive animated faces and voices, plus voice and video input. At that point, the whole human social world—friendship, love, norm-following, status-seeking, self-image, helping the downtrodden, etc.—might be in for a wrenching change, when it has to compete with this unprecedented upstart competition.
Huh. Should I be concerned that in practice human morality is about to go “poof”, because the constraints that hold it in place are about to disappear?
Like, if powerful people get literally zero negative social reinforcement for committing atrocities, because their whole social group is composed of superhumanly loving, superhumanly charismatic, sycophantic yes-men… then why not?
(I’m pretty cynical about the morality of most humans due to factory farming. Evidently, almost everyone is happy to subsidize the torture of non-human animals, as long as 1) they don’t have to *see* the torture, and 2) they don’t face social censure for doing so. This suggests that many conventional moral norms that most people would be horrified to violate are mainly the result of social punishment pressures, rather than of desires for care or fairness.)
Why do you dismiss the obvious hypothesis that “almost everyone” basically just doesn’t really think that factory farming is morally bad in any substantive way? You use the word “torture”, but I expect that most people simply wouldn’t agree that factory farming is morally equivalent to torturing a person. This seems to fully explain what we observe w.r.t. how people behave toward factory farming, without putting any pressure whatsoever on the view that conventional moral norms are mostly the result of people’s moral intuitions / conscience / etc. (We may still judge this view to be false for other reasons, of course.)
> Why do you dismiss the obvious hypothesis that “almost everyone” basically just doesn’t really think that factory farming is morally bad in any substantive way?
I confirm that I do dismiss this hypothesis on the basis of various pieces of evidence, from answers on surveys to the results of the Milgram experiment (though most people’s views about who counts as a moral patient are definitely not a crux for my overall model here), but I would prefer not to get into it.