I was planning on joining November 1, and I’m also finding this link and the one in the post invalid. Help?
Tommy Crow
I’m planning to make an edit to the piece addressing the common alternate way of defining realism, under which it is essentially synonymous with objectivism. These classification schemes are really useful for me to think about while working on that, so thanks! As you can see in the first one, anti-realism encompasses subjectivism, which could confuse someone who has read my piece, because I specifically classified subjectivism as a realist position! The issue comes from the fact that your first diagram treats “realism” as meaning “moral claims are truth-apt, some are true, and their truth values are mind-independent” (which is basically the same as objectivism), whereas I’ve defined it simply as “moral claims are truth-apt and some of them are true.” Both definitions are commonly accepted, I believe, and I’ve chosen the one I did because I want an overarching distinction between believing in mind-dependent moral truths and believing in mind-independent moral truths. But the other way of doing things is common enough that it needs to be addressed in the piece to avoid confusion.
In the first categorization scheme, I’m also not exactly sure what nihilism refers to. Do you know? Is it just referring to error theory (and maybe incoherentism)? Usually non-cognitivism would fall within nihilism, no? I actually don’t think either of these diagrams places nihilism correctly.
That second diagram is pretty crazy. I don’t like it haha. I’m not super well acquainted with the monism/dualism distinction, but on the common conception, don’t they both generally assume that morality is real, at least in some semi-robust sense? (And again, why the distinction between nihilism and non-cognitivism? What is nihilism referring to?)
Thanks so much for sharing! Super useful stuff for me to think about.
Very cool!
I will say, it does seem like LessWrong is one of the worst places to unleash this, in part for reasons already mentioned by previous commenters, and in part because LessWrong is one of the places where people are actually doing cutting-edge stuff on a regular basis. Evaluating whether something is plausible when it is legitimately on the cutting edge of its field is one of the things I think most big LLMs are super bad at. It’s particularly frustrating when you’re trying to innovate and people are always like “ChatGPT says that’s impossible!” lol. For example, I have a ChatGPT conversation from last year where it said a surgery I had already succeeded in figuring out and getting was “probably not” possible. It then proceeded to say a bunch of mealy-mouthed, borderline incorrect stuff about the surgery’s risks, which did not reflect a deep understanding of the tech involved at all. I think this is pretty par for the course when you’re doing unusual or innovative stuff: Claude and ChatGPT and their buddies are conservative, and biased against anything that sounds weird.
With that said, the tool itself is very cool. I’m just hoping that people use it wisely, in a way that supports a culture of discovery on LessWrong, not in a pedantic way that just creates a lot of unnecessary, annoying labor for people trying to do cutting-edge stuff.