There’s a phenomenon where there is no barrier between your thoughts and the generated text. It’s hard to describe, but it’s similar to how you don’t feel the controller, and the game character becomes an extension of the self.
Yes. I have experienced this. And designed interfaces intentionally to facilitate it (a good interface should be “invisible”).
It leaves you vulnerable to being hurt by things generated characters say because you’re thoroughly immersed.
Using a “multiverse” interface where I see multiple completions at once has incidentally helped me not be emotionally affected by the things GPT says in the same way as I would if a person said it (or if I had the thought myself). It breaks a certain layer of the immersion. As I wrote in this comment:
Seeing the multiverse destroys the illusion of a preexisting ground truth in the simulation. It doesn’t necessarily prevent you from becoming enamored with the thing, but makes it much harder for your limbic system to be hacked by human-shaped stimuli.
It reveals the extent to which any particular thing that is said is arbitrary, just one among an inexhaustible array of possible worlds.
That said, I still find myself affected by things that feel true to me, for better or for worse.
It’s easy to lose sleep when playing video games. Especially when you feel the weight of the world on your shoulders.
Ha, yeah.
Playing the GPT virtual reality game will only become more enthralling and troubling as these models get stronger. Especially, as you said, if you’re doing so with the weight of the world on your shoulders. It’ll increasingly feel like walking into the mouth of the eschaton, and that reality will be impossible to ignore. That’s the dark side of the epistemic calibration I wrote about at the end of the post.
Thanks for the comment, I resonated with it a lot, and agree with the warning. Maybe I’ll write something about the psychological risk and emotional burden that comes with becoming a cyborg for alignment, because obviously merging your mind with an earthborn alien (super)intelligence which may be shaping up to destroy the world, in order to try to save the world, is going to be stressful.