So… is there any specific thing that you think cannot be modeled as elegantly as IFS models it, using simple reinforcement learning of when to do a thing or feel a certain way? (There's a concrete sketch of what I mean by that at the end of this comment.)
Without such examples, I don’t think your article has really done anything to improve the case for IFS’s validity as an actually-reductionist model of human behavior. That is, when you say:
> the “passive” version sounds to me like it’s just a description of how the “agenty” version is implemented.
you’re kind of making my point. If we’re doing reductionism, then “how it’s implemented” is actually pretty important! That sounds like a feature of the passive model, not a bug.
In other words, on at least the “IFS as a reductionist model” side, it looks like you have used an awful lot of words to basically concede my point. ;-)
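To be concrete about what I mean by that framing, here's a minimal sketch of "reinforcement learning of when to do a thing or feel a certain way": a bare tabular learner in Python. Everything in it (the contexts, responses, and rewards) is made up for illustration; none of the specifics come from your article or from IFS.

```python
import random
from collections import defaultdict

# Toy model: a "part" as nothing but a learned context -> response
# mapping, trained by simple reinforcement. All contexts, responses,
# and rewards below are hypothetical, chosen only for illustration.

CONTEXTS = ["criticism", "deadline", "praise"]
RESPONSES = ["feel_anxious", "work_harder", "feel_proud", "withdraw"]

ALPHA = 0.1    # learning rate
EPSILON = 0.1  # exploration rate

# Q[context][response] = learned value of that response in that context
Q = defaultdict(lambda: defaultdict(float))

def choose_response(context):
    """Mostly pick the best-known response; occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(RESPONSES)
    return max(RESPONSES, key=lambda r: Q[context][r])

def update(context, response, reward):
    """Tabular update: nudge the stored value toward the observed reward."""
    Q[context][response] += ALPHA * (reward - Q[context][response])

def reward_for(context, response):
    """Made-up environment: withdrawing after criticism once brought
    short-term relief, so it gets reinforced; no homunculus needed."""
    if context == "criticism" and response == "withdraw":
        return 1.0
    if context == "deadline" and response == "work_harder":
        return 1.0
    if context == "praise" and response == "feel_proud":
        return 1.0
    return 0.0

for _ in range(5000):
    context = random.choice(CONTEXTS)
    response = choose_response(context)
    update(context, response, reward_for(context, response))

# The trained system reliably "withdraws when criticized": behavior you
# could *describe* as an agenty protector, but which is implemented as a
# passive learned mapping.
print({c: max(RESPONSES, key=lambda r: Q[c][r]) for c in CONTEXTS})
```

The point being: "a part that protects me by withdrawing" and "a reinforced context-to-response mapping" are two descriptions of the same learned table. The agenty description is the gloss; the passive one is the implementation.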
> In other words, on at least the “IFS as a reductionist model” side, it looks like you have used an awful lot of words to basically concede my point. ;-)
Well… yes? That’s what I said in the opening, that I think we mostly agree on the reductionist thing but are just choosing to emphasize things somewhat differently and have mild disagreements on what’s a useful framing. :-)
Eh, I think that dissolving agency is actually pretty darn important, in a variety of ways, both practical and theoretical. But I think your attempts to salvage the idea of agency are tied to your seeing it as essential to the “positive intention” frame, which you treat as part of IFS’s main appeal and value.
So, I’ve written a separate comment to address that, and the other practical-side arguments of the article.
> Eh, I think that dissolving agency is actually pretty darn important, in a variety of ways, both practical and theoretical.
I certainly agree! Have you looked at some of the later posts that I’ve been referencing in my comments, say the one on neural Turing machines? (I know you read the UtEB one.) Dissolving agency has been one of my reasons for writing them, too.