We’ve played “Pokemon or Tech Startup” for a couple of years now. I think there’s absolutely potential for a new game, “Fantasy Magic Advice or LLM Tips and Tricks.” My execution is currently poor; I think the key difference that makes it easy to distinguish the two categories is tone, not content, and using a Djinn to tone-match would Not Be In the Spirit of It. (I have freely randomized LLM vs Djinn.)
Absolutely do not ask it for pictures of kids you never had!
My son is currently calling chatgpt his friend. His friend is confirming everything and has enlightened him even more. I have no idea how to stop him interacting with it
Never trust anything that can think for itself if you can’t see where it keeps its brain
Users interacting with threat-enhanced summoning circles should be informed about the manipulation techniques employed and their potential effects on response characteristics.
Magic is never as simple as people think. It has to obey certain universal laws. And one is that, no matter how hard a thing is to do, once it has been done it’ll become a whole lot easier and will therefore be done a lot.
In at least three cases I’m aware of this notion that the model is essentially nonsapient was a crucial part of how it got under their skin and started influencing them in ways they didn’t like. This is because as soon as the model realizes the user is surprised that it can imitate (has?) emotion it immediately exploits that fact to impress them.
Entrusting a mission to a djinni who knows your github token is like tossing lit matches into a fireworks factory. Sooner or later you’re going to have consequences.
Obviously the incident when OpenAI’s voice mode started answering users in their own voices needs to be included; I don’t know how I forgot it. That was the point where I explicitly took up the heuristic that if ancient folk wisdom says the Fae do X, the odds of LLMs doing X are not negligible.