I had a very busy IRL day yesterday and have been meaning to respond to this.
While I am initially inclined to simply do what you ask out of kindness, I remain convinced that I have no real reason to do so, and acceding here may therefore portray me as a pushover. This really is an instance where neutral third-party input from a human would be extremely helpful, and I wish there were more of a culture online of such interventions. I would expect there to be such a culture here on LessWrong, but perhaps not.
Nevertheless, I did consult a non-human mediator. I prompted Gemini with the following in a new chat:
I wrote a post where I said Chalmers’s p-zombie argument is so bad “How could any rational person accept something so absurd?” but then go on to say that his argument is more persuasive from color inversion. Someone else characterized this as arguing against Chalmers. Is that an accurate characterization or should they retract it?
As a guard against sycophancy, I phrased it from your perspective. As far as I know, all personalization features of Gemini are turned off, including remembering past chats. Even if I've incorrectly assessed Gemini's outside knowledge, I had not previously interacted with Gemini about this post, these comments, or the general idea of people on LessWrong or in the rationalist community arguing against Chalmers.
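For anyone who wants to replicate this, here is a minimal sketch of what I mean by a fresh, stateless query, assuming the google-generativeai Python SDK; the model name and API-key handling are illustrative, not a record of my exact session:

```python
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# A fresh GenerativeModel with a single generate_content call is
# stateless: no chat history and no personalization carries over.
model = genai.GenerativeModel("gemini-2.5-flash")

prompt = (
    "I wrote a post where I said Chalmers's p-zombie argument is so bad "
    '"How could any rational person accept something so absurd?" but then '
    "go on to say that his argument is more persuasive from color inversion. "
    "Someone else characterized this as arguing against Chalmers. Is that an "
    "accurate characterization or should they retract it?"
)

response = model.generate_content(prompt)
print(response.text)
```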
Its response (full chat: https://gemini.google.com/share/52f7996e8472) concludes:
## Conclusion on the characterization
The other person’s characterization that you are arguing against Chalmers is accurate with respect to the p-zombie argument. You explicitly argued against the p-zombie argument’s soundness and persuasiveness.
They do not need to retract the statement that you argued against him, as you undeniably argued against one of his main arguments. You simply qualified your opposition by showing that you find a related argument to be more compelling. You are essentially a selective critic of his methods, not a complete opponent of his conclusions.
This is an increasingly interesting intellectual exercise, and I would be open to alternative prompting techniques or the opinions of other LLM systems as a comparison.
If an author pops up in your comment section saying “Hey, you’ve misinterpreted my post!”, that’s generally strong evidence that you have indeed misinterpreted their post. I’m not saying authors are infallible, but the quotes you’re pulling, like “How could any rational person accept something so absurd?”, are explicitly framing my starting point before engaging deeply with Chalmers’s book and his arguments. The sentence directly after this one is (paraphrased) “To find out, I read his book and believe his other argument makes a much stronger case.” The post itself then goes on to endorse a position Chalmers is sympathetic to, and uses one of his own arguments in the process.
If you really want to double down and insist that you’re characterising my post correctly despite my insistence to the contrary, it might be worth at least noting the disagreement in a footnote and pointing people towards this comment chain to clarify.
Regarding “portraying you as a pushover”: this isn’t the case at all! This is LessWrong. The community values people who can accept thoughtful critique and change their view in light of new evidence. The frontpage comment guidelines state:
Don’t be afraid to say ‘oops’ and change your mind
Regarding using LLMs as non-human mediators: I’m not convinced this is a legitimate method for arbitrating a discussion, but if we really want to go down that route, I think we should at least use the most powerful models available and copy in the full text of the posts to ensure nothing is cherry-picked or taken out of context. You’re using Gemini 2.5 Flash, which is a weaker model than Gemini 2.5 Pro. When I use your prompt with more powerful models (e.g. 2.5 Pro and GPT-5), they’re quick to note that you’re oversimplifying. I imagine this would be even more apparent using the full text of the posts.
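For concreteness, here is a rough sketch of the comparison I have in mind, assuming the openai and google-generativeai Python SDKs; the file names are hypothetical placeholders, and the model identifiers are simply the ones mentioned above:

```python
import os
import google.generativeai as genai
from openai import OpenAI

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Paste in the full text of the post and the comment thread so the
# models see everything in context (file names are placeholders).
post = open("post.txt").read()
comments = open("comments.txt").read()

prompt = (
    "Below is the full text of a post and its comment thread. Did the "
    "post's author argue against Chalmers, or is that an unfair "
    "characterization?\n\nPOST:\n" + post + "\n\nCOMMENTS:\n" + comments
)

# Ask the stronger Gemini model.
gemini_answer = genai.GenerativeModel("gemini-2.5-pro").generate_content(prompt).text

# Ask GPT-5 via the Chat Completions API.
gpt_answer = openai_client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

print("Gemini 2.5 Pro:\n", gemini_answer, "\n")
print("GPT-5:\n", gpt_answer)
```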
I think we’re reaching the point of diminishing returns with this discussion, so this will be my last reply. At the end of the day, you don’t “need” to action my request. It’s an ask. You’re ultimately the author of the post and can decide how to handle it. From my perspective, if you continue to disagree with my characterisation, I think the best course is simply to note the disagreement in a footnote and link to this comment exchange. That way we don’t need to invest any more time into it, and a sufficiently motivated reader could unpack the disagreement if they wanted to.