Open source the prompts and repo pls
It could be used for low-integrity stuff, so nope, sorry.
If it is a concern that your tool might be symmetric between truth and bullshit, then you should probably not have made the tool in the first place.
Wasn’t Scott’s point specifically about rhetorical techniques? I think if you apply it broadly to “tools”, and especially if your standard for symmetry is met by “could be used for” (as opposed to “is just as useful for”), then you’re at risk of ruling out almost every useful tool.
(I don’t know how this thing works, but it’s entirely possible that a) the chatbot employs virtuous, asymmetric argumentative techniques, AND b) the code used to create it could easily be repurposed to create a chatbot that employs unvirtuous, symmetric techniques.)
The concern is that the tool can easily be tweaked to be symmetric.
I disagree with this approach on principle (and with LessWrong’s general bias toward keeping things closed source), but I don’t think we can resolve that disagreement quickly right now, sorry.