So, I think I can say things within the frame given here (I guess they’d mostly be similar to ESRogs – the main difference being that I’d object harder to “indulgence” as an appropriate word for what’s going on. The whole point, in my worldview, is for people to be able to have good experiences. “Indulgence” conjures to my mind a sense of “you’re taking a break from the thing you’re supposed to be doing.” The experiencing-of-good-things is an important part of The Good.)
But my inner Zvi is screaming at the entire framing of this [fake edit: so was my inner Qiaochu, although for different reasons, and he’s already spoken for himself].
I will note that goodness is a fairly confusing concept, and is one of the places where I think it’s good to catch up on some sequences if you haven’t already.
Eliezer’s Mere Goodness collection has a lot of theoretical background. (Much of this is oriented more towards “how do you make sure an AI is ‘good’” as opposed to “what does ‘good’ mean as a human?” A lot of it is probably stuff you already know and I’m not sure which is which)
Nate Soares’ Replacing Guilt series is more directly tailored as a response to the sort of question in the OP. A good post to start with, to see if the series is a good fit for you, is “Should” considered harmful.
(fyi, I kept running into Effective Altruist types who still seemed to have an unhealthy relationship with guilt, so I registered the domain doingguiltbetter.com to redirect to Nate’s series)