Thanks, & thanks for putting in your own perspective here. I sympathize with that too; fwiw Vladimir_Nesov’s answer would have satisfied me, because I am sufficiently familiar with what the terms mean. But for someone new to those terms, they are just unexplained jargon, with links to lots of lengthy but difficult-to-understand writing. (I agree with Richard’s comment nearby.) Like, I don’t think Vladimir did anything wrong by giving a jargon-heavy, links-heavy answer instead of saying something like “It may be hard to construct a utility function that supports the latter but rejects the former, but if instead of utility maximization we are doing something like utility-maximization-subject-to-deontological-constraints, it’s easy: just have a constraint that you shouldn’t harm sentient beings. This constraint doesn’t require you to produce more sentient beings, or squeeze existing ones into optimized shapes.” But I predict that this blowup wouldn’t have happened if he had instead said that.
I may be misinterpreting things of course, wading in here thinking I can grok what either side was thinking. Open to being corrected!
To be clear, I super appreciate you stepping in and trying to see where people were coming from (I think ideally I’d have been doing a better job with that in the first place, but it was kinda hard to do so from inside the conversation).
I found Richard’s explanation about what-was-up-with-Vlad’s comment to be helpful.