one big problem with using LMs too much imo is that they are dumb and catastrophically wrong about things a lot, but they are very pleasant to talk to, project confidence and knowledgeability, and reply to messages faster than 99.99% of people. the pleasantness, confidence, and speed are much easier to notice than the subtle falsehoods, and they reinforce a reflex of asking the model more and more. it's very analogous to how twitter soundbites vs long form writing eroded epistemics.
hotter take: how smart one finds current LMs is probably correlated with how much one is swayed by good vibes from their interlocutor as opposed to the substance of the argument (ofc conditional on the model actually giving good vibes, which varies from person to person; I personally never liked chatgpt vibes until I wrote a big system prompt)
Up for sharing your system prompt?
it’s kind of haphazard and I have no reason to believe I’m better at prompting than anyone else. the broad strokes, sketched concretely below, are that I tell it to:
use lowercase
not use emojis
be concise; explain at a bird’s-eye level
not sugarcoat things
not be too professional/formal; use some IRC/twitter slang without overdoing it
speak as if it’s a conversation over a dinner table between two close friends who are also technical experts
not dumb things down, but also not use unnecessary jargon
I’ve also been trying to get it to use CS/ML analogies when they would make things clearer, much the same way people on LW do, but it’s been hard to get the model to do this in a natural, non-cringe way. rn it overdoes it and makes lots of very forced, uninsightful analogies despite my attempts to explain what I’m after
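for concreteness, here’s a minimal sketch of what wiring directives like these into a system prompt could look like with the openai python client. the prompt wording, model name, and example question are all placeholders, not the actual prompt:

```python
# rough sketch only: directives like the list above baked into a system message.
# prompt text and model name are placeholders, not the real thing.
from openai import OpenAI

SYSTEM_PROMPT = """\
write in lowercase. no emojis. be concise; explain at a bird's-eye level first.
don't sugarcoat. keep it informal, a little irc/twitter slang without overdoing it.
talk like two close friends who are also technical experts over dinner.
don't dumb things down, but don't use unnecessary jargon either.
use CS/ML analogies only when they genuinely clarify; skip forced ones.
"""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask(question: str) -> str:
    # one-shot chat completion with the persona carried in the system message
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content


print(ask("why does adam converge faster than sgd early in training?"))
```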