[edit: retracted due to communication difficulty]
I agree they rarely do that and are not driven to it. I knew you meant that, but I'm not playing along with your use of the word, because it seems to me an obvious rules-lawyering of what investigation is (the pattern that is, in my opinion confusingly, called motte-and-bailey). If you were willing to use words the same way as everyone else, you could point to the concept involved here much more easily. For example, I would have straightforwardly agreed if you had simply said "they do not consistently seek to investigate, especially not towards verbalizing or discovering new concepts". But the overclaim is "they do not investigate". They obviously do, under every interpretation I can see for your word use: they do sometimes seek out new concepts, in brief flashes, when pushed to do so fairly hard, and if you believe they do not, that's a bad sign about your understanding. But they also obviously are not driven to it, or grown around it, in the way a human is, so I don't disagree with your main point, only with word uses like this.
Just FYI, instead of doing this silently: this comment thread is pretty close to making me decide to just ban you from commenting on my posts.
...I will update to be less harsh rather than being banned, then. Surprised I was even close to that; apologies. In retrospect, I can see why my frustration would put me near that threshold.
I don't think I mind harshness, though maybe I'm wrong. E.g. your response to me here https://www.lesswrong.com/posts/zmtqmwetKH4nrxXcE/which-side-of-the-ai-safety-community-are-you-in?commentId=hjvF8kTQeJnjirXo3 seems to me comparably harsh, and I probably disagree with a bunch of it, but it seems contentful and helpful, and thus socially positive/cooperative. I think my issue with this thread is that it seems to me you're aggressively missing the point, or not trying to get the point, or something; or just talking about something really off-topic, even if superficially on-topic, in a way I don't want to engage with.
[ETA: like, maybe I'm "overclaiming" (mainly just by being not maximally precise) if we look at some isolated phrases, but I think there's a coherent interpretation of those phrases in context (one that ought to be plausible to you) that is actually relevant to what I'm discussing in the post; and I think that interpretation is correct, and you could disagree with that and say so; but instead you're talking about something else.]
[ETA: and like, yeah, it's harder to describe the ways in which LLMs are not minds than to describe the ways in which they do perform as well as or better than human minds. Sometimes important things are hard to describe. I think some allowance should be made for this situation.]