This essay seems like it’s trying to address two different audiences: LW, and the people who get mind-hacked by AIs. That’s to its detriment, IMO.
E.g. the questions in the Corollary FAQ don’t sound like the questions you’d expect from someone who’s been mind-hacked by AI. Like, why expect someone in a sycophancy doom loop to ask whether it’s OK to use AI for translation? Also, texts produced by sycophancy doom loops look pretty different from AI-translated texts. Both resemble low-quality LLM-assisted posts, yes. But you’re addressing people who think they’ve awoken ChatGPT, not low-quality posters who use LLM assistance.
Agree that that’s a challenge with the post.
If you’re curious about the reason, it’s mostly empirical: I do in fact see a lot of LLM-y stuff in the course of editing for people, so the FAQ is in some sense genuine (though of course aggregated). I admit it’s a combination of actual questions I get (in my own words) and questions I think I should get.
My intended audience for the post is something like: “someone who is maybe very slightly mind-hacked and has been using AI as a makeshift peer for certain topics, and is thus in an emotionally vulnerable state when actually putting their ideas in front of knowledgeable humans”. The “awakening” stuff is there for two reasons:
1. Many such people do in fact seem to buy into a weaker version of the “awakened Nova” doom loop, due to (I think) a combination of enthusiasm/mind-blownness (they are behind the median LW user here but ahead of much of the rest of the world) and a pre-existing open-minded affinity for AI.
2. It serves as a good example of a more obviously problematic point on what I take to be basically the same gradient as “check out my collaborative recursive evolution algorithm research with ChatGPT”, which lets me gently suggest that people doing the latter are in a similar trap to the (pretty obviously mistaken) people falling for the former.
Whether I succeed, I do not know, but those are the reasons! If I have succeeded, then a teenager with nobody to talk to about this stuff, who excitedly posted their LLM-assisted ideas and had them rejected, would upon reading this be more likely to stick around LW to learn more and integrate slowly into the existing community.
Then I’d lean away from the “this is for people who’ve awoken ChatGPT” framing. E.g. change your title to something like “So you think LLMs make you smarter”.
Considered it when originally drafting, but nah, I think we’ll just have to agree to disagree here. For what it’s worth, if you actually browse the rejected posts themselves, a high enough fraction are a little awaken-y (but not obviously full crackpot) that I don’t think the title is misleading even given my aims. It is all a little fuzzy, too: my hope is to achieve a certain kind of nudge, but the way I decided to do that involves sharing information that is disproportionately framed around “awakening” situations, for creative reasons not totally clear to me. Like, my intuition says “the post you want to write for this purpose is [X]” and I’m left to guess why. I do respect the opinion that it doesn’t really work, but I don’t currently share it.
Fair enough. If/when you get any empirical data on how well this post works, writing it up would be pretty valuable and would likely resolve any remaining disagreements we have.