If the moral lesson here is not to render counterfactuals, because it’s too painful to do so, then I sympathize. But if the moral lesson is not to do this because it is a dangerous new emotional exploit spawned by a cursed technology that mankind was not meant to know, then I wonder if you might be overstating the novelty some.
People have been rendering their counterfactuals for thousands of years. Before we had ChatGPT to draw the pictures for us, we would just draw the pictures ourselves, or ask another human to draw them for us. Or we would render them in words and let our imaginations draw the pictures. The fidelity is lower, but the feeling is the same. Even cartoon stick figures can make people weep.
I think interacting with the cutting edge of a new technology sometimes makes things seem newer than they are. And LLMs do add an element of creepy, uncanny computer noise to the dream. But ruminating on what-could-have-been has always been a painful, self-flagellating thing to do.
I hope you find peace.
It’s the level of detail that’s the real risk. Sora or Veo could generate motion video and audio, bringing even more false life into the counterfactual. People get emotionally attached to characters in movies; imagine trying not to form attachments to interactive videos of your own counterfactual children who call you “Mom” or “Dad”. Your dead friend or relative could talk to you from beyond the grave in an emotionally believable way.
In the past, that’s the kind of thing only the ultra-rich could have had someone fabricate for them, and it would have come with at least some checks and balances. Now kids in elementary school can necromance their dead parent or whatever.
Realistically, I think it will become “normal” to have your counterfactual worlds easily accessible in this way, and the new generations will simply adapt and develop internal safeguards against being exploited by it, much like we learn to deal with realistic dreams. I honestly don’t know about the rest of us encountering it later in adulthood.
Ouch! I just imagine those crazy people who want to post on Less Wrong about discovering the true nature of consciousness… but in my imagination, they are also teenagers, and their dead parents (impersonated by ChatGPT) keep telling them: “you are going to be a great scientist, you just have to believe in yourself”.
Thank you.
The issue, as I see it, isn’t just the ability to do so. Theoretically, I could have taken the picture of the two of us, gone to a human artist, and gotten a portrait with kids commissioned. I would have been exceedingly unlikely to go down that route, or even to explore alternatives like using old-fashioned Photoshop for that purpose. But the option of uploading a single image and a short prompt, and having it take a mere few seconds to conjure exactly what I’d envisioned? That temptation was too alluring to resist.
Ease of access makes all the difference; a quantitative change can become qualitative. Honey was a delicacy to be savored, but now we can get something as sweet or sweeter in minutes, delivered to the comfort of our homes.
I try not to make sweeping declarations here: I’m a technophile, and I think the ability to conjure arbitrary, high-quality images is amazing. A small subset of that capability can cause immense pain, or at least it did to me. The rest of the time, I’m still marveling at my ability to illustrate ideas I’d never have hoped to see with my own two eyes. Even as painful as this was, a small part of me appreciates the ability to have seen how things might have gone differently.
Zvi’s post “Levels of Friction” is relevant here. In his terms, AI moves the friction level of rendering your counterfactuals from level 2 (expensive and annoying) to level 1 (simple and easy).