Hey, thanks for the thoughts!

I think there may be confusions about the things we’re pointing to, and it seems useful to try to clear those up first.
First off, by akrasia I’m pointing at situations where you look back and think, “Drat! In that instant, I could have done this other thing that would have better served my goals.”
(It may actually be the case that you were satisfying some other, hidden part of yourself during that time. I’m not implying that instances of additional counterfactual productivity always indicate realistic places to improve, merely that they appear to.)
Or something like that. I’m happy to try to zero in on a better working definition if you’d like to propose something different. I suspect the examples of akrasia we each have in mind may not be the same.
Secondly, I want to stress that I don’t think a forcing / brute-force approach towards productivity is optimal. Often, internal debugging of the Internal Double Crux and Focusing flavor seems like the right way to debug internal aversions as well as move forward.
The sort of “dropping your obligations” and sourcing internal motivation that Nate Soares writes about in his Replacing Guilt series is, I think, very important.
I also think, though, that this model sometimes isn’t at the right level of granularity to deal with smaller problems like forgetting to get things done or getting sucked into distraction-loops. For this, I think the above algorithm can be useful.
Models of akrasia that personify it or model it as a malicious agent don’t feel tractable or intuitive to me. Given the statement above, I want to once again stress that I don’t think this is a solution to all problems related to getting things done. I do think that the reductionist framing and considering the potential dangers of reification are important, though. Hope that tempers any impressions you have of optimism on my part.
So here’s what happened until now:

I read the OP and thought I understood really well what you were trying to say. I wrote some criticism.
You read the criticism and thought that I failed to understand you. You answered by pointing to this apparent misunderstanding and reiterating your view from the OP in a slightly different way.
I read your answer, and it suggested to me that you had understood around 0% of my original comment. I am now backtracking to point out how much my criticism was not about what you thought it was about.
> I think there may be confusions about the things that we’re pointing to and it seems useful to try and clear those up first.
When not specified, I assume standard definitions like “a lack of self-control or the state of acting against one’s better judgment” (from Wikipedia), and your “want-want vs want” or “being unable to do what you ‘want’ to do” seems to capture it also. I don’t think that this was a problem in our communication. Your clarifications above are exactly what I’d have expected after reading just the OP.
> Often, the sort of internal debugging of an Internal Double Crux and Focusing flavor seems like the right way to debug internal aversions as well as move forward.
I feel like you’re answering by pulling from your database of “smart things to say about akrasia”, instead of actually reading what I’m trying to say. This is frustrating. The database obviously includes CFAR stuff, like “Double Crux > willpower” and “reductionism + use tools to solve problems”, and Nate’s “Replacing Guilt”. But I know those sources too, I value them and I’m not arguing about these points at all.
> Models of akrasia that personify it or model it as a malicious agent
This is a strawman pulled from the naive approach, and not at all what I was saying.
“living, coherent thing” → this doesn’t mean “person”
“which knows stuff ‘you’ don’t and is sometimes actively undermining ‘you’” → this doesn’t mean “malicious”
> I do think that the reductionist framing and considering the potential dangers of reification are important, though.
And once again, you are rebutting a strawman of what I said. “Model-based reinforcement learning on goals that you are not consciously aware of” is something that can be usefully treated in a reductionist way; it is not a reification but the best short description I could think of for what’s most likely out there in reality.
Uh, I’m sorry, I’m tired now; we can continue trying to understand each other’s points tomorrow. So far, I am losing hope.
Sounds good, I’ll try to follow up more later. You’re right that I wasn’t engaging with your points. There were a few things I thought I had explained poorly in the original post, and I wanted to make sure those weren’t the points of disagreement.
This conversation didn’t move forward, which I think is unfortunate.
I think SquirrelInHell agrees with something like the view described on sinceriously.fyi, especially false faces. If you read that, you may see where the objection was coming from.