For those readers who hope to make use of AI romantic companions, I do also have some warnings:
You should know in a rough sense how the AI works and the ways in which it’s not a human.
For most current LLMs, a very important point is that they have no memory other than the text they read in a context window. When generating each token, they “re-read” everything in the context window before predicting. None of their internal calculations are preserved from one token to the next: everything is forgotten, and the entire context window is re-read again.
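To make the statelessness concrete, here's a minimal sketch of how chat "memory" typically works from the application side (the function and variable names are illustrative, not any particular API): the only memory is the transcript you re-send every turn.

```python
# Sketch: a stateless LLM's "memory" is just the transcript re-sent each turn.
# Nothing persists between calls; if a line is dropped from `history`,
# the model has effectively forgotten it.

def build_prompt(history, user_message):
    """Concatenate the whole conversation so far into one prompt string."""
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"user: {user_message}")
    return "\n".join(lines)

history = [
    ("user", "My name is Ada."),
    ("assistant", "Nice to meet you, Ada!"),
]

# Every turn, everything gets re-read from scratch -- that's the only "memory".
prompt = build_prompt(history, "What's my name?")
print(prompt)
```

The model only "remembers" your name here because the earlier line is physically present in the prompt it re-reads.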
LLMs can be quite dumb, not always in the ways a human would expect. Some of this is to do with the wacky way we force them to generate text, see above.
A human might think about you even when they’re not actively talking to you, just going about their day. Of course, most of the time they aren’t thinking about you at all; their personality is continually developing and changing based on the events of their lives. LLMs don’t go about their day or have an independent existence at all, really; they’re just there to respond to prompts.
In the future, some of these facts may change: the AIs may become more human-like, or at least more agent-like. You should know all such details about your AI companion of choice.
Not your weights, not your AI GF/BF
What hot new startups can give, hot new startups can take away. If you’re going to have an emotional attachment to one of these things, it’s only prudent to make sure your ability to run it is independent of the whims and financial fortunes of some random company. Download the weights, and keep an up-to-date local copy of all the context the AI uses as its “memory”.
See point 1, knowing roughly how the thing works is helpful for this.
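Since the "memory" is just text (see point 1), backing it up can be as simple as writing the transcript to disk. Here's a hedged sketch, with hypothetical file and function names:

```python
# Sketch: keep a local, versioned copy of the AI's "memory"
# (its chat history / context), independent of any company's servers.
import json
from pathlib import Path

def backup_context(history, path):
    """Write the full chat history to a local JSON file."""
    Path(path).write_text(json.dumps(history, indent=2))

def restore_context(path):
    """Read the chat history back from disk."""
    return json.loads(Path(path).read_text())

history = [
    {"role": "user", "content": "Remember: our anniversary is March 3rd."},
    {"role": "assistant", "content": "Noted! March 3rd it is."},
]
backup_context(history, "companion_memory.json")
```

The weights themselves are a separate (much larger) download, but the principle is the same: a copy on your own disk can't be taken away by a pivot or a shutdown.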
A backup you haven’t tested isn’t a backup.
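Testing means actually restoring and checking you get back exactly what you saved. A minimal round-trip check, continuing the sketch above (names still hypothetical):

```python
# Sketch: a backup only counts once a restore round-trips exactly.
import json
from pathlib import Path

def backup_context(history, path):
    Path(path).write_text(json.dumps(history, indent=2))

def verify_backup(history, path):
    """Restore from disk and compare against the live copy."""
    restored = json.loads(Path(path).read_text())
    return restored == history

history = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello!"},
]
backup_context(history, "memory_backup.json")
ok = verify_backup(history, "memory_backup.json")
print("backup verified:", ok)
```

For model weights, the analogous test is actually loading them and generating a few tokens, not just checking that the files exist.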