Just FYI, rather than doing this silently: this comment thread is pretty close to making me decide to just ban you from commenting on my posts.
...I will update to be less harsh rather than being banned, then. Surprised I was even close to that; apologies. In retrospect, I can see why my frustration would put me near that threshold.
I don’t think I mind harshness, though maybe I’m wrong. E.g. your response to me here https://www.lesswrong.com/posts/zmtqmwetKH4nrxXcE/which-side-of-the-ai-safety-community-are-you-in?commentId=hjvF8kTQeJnjirXo3 seems to me comparably harsh, and I probably disagree a bunch with it, but it seems contentful and helpful, and thus socially positive/cooperative, etc. I think my issue with this thread is that it seems to me you’re aggressively missing the point / not trying to get the point, or something. Or just talking about something really off-topic, even if superficially on-topic, in a way I don’t want to engage with. IDK.
[ETA: like, maybe I’m “overclaiming”—mainly just by not being maximally precise—if we look at some isolated phrases, but I think there’s a coherent and [ought to be plausible to you] interpretation of those phrases in context that is actually relevant to what I’m discussing in the post; and I think that interpretation is correct, and you could disagree with that and say so; but instead you’re talking about something else.]
[ETA: and like, yeah, it’s harder to describe the ways in which LLMs are not minds than to describe the ways in which they do perform as well as or better than human minds. Sometimes important things are hard to describe. I think some allowance should be made for this situation.]