For what it’s worth, I don’t think the post is bad. You’re a good writer and I kind of agreed with what you were saying. Unfortunately I bounced off because I disagree with one of your premises (that LLMs are obviously incapable of certain behaviors).
I’m curious what advice LLMs were giving you. Asking them which parts of your argument are least supported might be a good way to get feedback. I have custom instructions for this that tell Claude to point out unsupported claims, but I also tell it that my posts are intentionally casual.
Completely fair stance; I'm a bit… aggressive about where I fall in the discussion. Statistical pattern matching is, to me, sufficient to explain the anthropomorphized behavior, and nothing else has been fully proven. That'll have to remain a disagreement, and a healthy one if you ask me. This is probably a bias I'm having trouble breaking, but it has brought value to me and the research I've done/am doing.
To answer your question, though, I was running the paper through the following process:
1. I tasked ChatGPT with explaining why I disagreed with a post on LW, pasting the markup of the post in the same message.
2. I asked Claude to disprove any and every point of the same post, again pasting the markup of the post in the same message.
3. I took those outputs, read them, and then started mirror conversations: I asked Claude to explain why I agreed with the post, and ChatGPT to defend the post from someone strongly against it. I then passed output between the attacker and defender until the actual points against the article were fully formed, fully explored, or completely defeated.
4. I took those formed thoughts and all prior messages, reasoned through their arguments, impact, and potential solutions, edited the post with my final reasoning, and returned to step 1.
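The attacker/defender loop in steps 2–4 can be sketched as a small script. This is a minimal illustration, not the actual setup: `call_llm` is a hypothetical placeholder for a real API wrapper (e.g. around the Anthropic or OpenAI SDKs), and the prompts are paraphrases of the ones described above.

```python
# Sketch of the attack/defend loop described above.
# `call_llm` is a hypothetical stand-in for a real LLM API call;
# it returns canned text here so the control flow is runnable.

def call_llm(model: str, prompt: str) -> str:
    """Placeholder for a real API call (assumption, not a real SDK)."""
    return f"[{model} response to: {prompt[:40]}...]"

def critique_loop(post_markup: str, max_rounds: int = 3) -> list[str]:
    """Alternate an attacker (Claude) and a defender (ChatGPT) on a post."""
    # Step 2: ask the attacker to disprove every point of the post.
    attack = call_llm("claude", f"Disprove every point of this post:\n{post_markup}")
    transcript = [attack]
    # Step 3: pass output back and forth until the critiques are
    # fully formed, fully explored, or defeated (here: a fixed round cap).
    for _ in range(max_rounds):
        defense = call_llm("chatgpt", f"Defend the post against this critique:\n{attack}")
        transcript.append(defense)
        attack = call_llm("claude", f"Respond to this defense:\n{defense}")
        transcript.append(attack)
    # Step 4 happens offline: read the transcript, edit the post, repeat.
    return transcript

rounds = critique_loop("example post markup")
```

In practice the loop termination was a judgment call (arguments "fully explored or completely defeated") rather than a fixed round count.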