Very cool!
I will say, it does seem like LessWrong is one of the worst places to unleash this. In part for reasons already mentioned by previous commenters, and in part because LessWrong is one of the places where people are actually doing cutting-edge stuff on a regular basis. Evaluating whether something is plausible when it is legitimately on the cutting edge of its field is one of the things I think most big LLMs are super bad at. It’s particularly frustrating when you’re trying to innovate, and people are always like “ChatGPT says that’s impossible!” lol. For example, I have a ChatGPT conversation from last year where it said a surgery I had already succeeded in figuring out and getting was “probably not” possible. It then proceeded to say a bunch of mealy-mouthed, borderline incorrect stuff about the surgery’s risks, which did not reflect a deep understanding of the tech involved at all. I think this is pretty par for the course when you’re doing unusual/innovative stuff—Claude and ChatGPT and their buddies are conservative, and biased against stuff that sounds weird.
With that said, the tool itself is very cool. I’m just hoping that people use it wisely, in a way that supports a culture of discovery on LessWrong, and not in a pedantic way that just creates a lot of unnecessary, annoying labor for people trying to do cutting-edge stuff.