In other words, on at least the “IFS as a reductionist model” side, it looks like you have used an awful lot of words to basically concede my point. ;-)
Well… yes? That’s what I said in the opening, that I think we mostly agree on the reductionist thing but are just choosing to emphasize things somewhat differently and have mild disagreements on what’s a useful framing. :-)
Eh, I think that dissolving agency is actually pretty darn important, in a variety of ways, both practical and theoretical. But I think your attempts to salvage the idea of agency are tied to you seeing it as essential to the “positive intention” frame you see as part of IFS’ main appeal or value.
So, I’ve written a separate comment to address that, and the other practical-side arguments of the article.
Eh, I think that dissolving agency is actually pretty darn important, in a variety of ways, both practical and theoretical.
I certainly agree! Have you looked at some of the later posts that I’ve been referencing in my comments, say the one on neural Turing machines? (I know you read the UtEB one.) Dissolving agency has been one of my reasons for writing them, too.