Until recently, I worked at You.com (the first company to provide AI-powered web search, and the first to provide web deep research — in both cases by several months before any competitor). We were also the first company to provide a memories mechanism, again by a few months — we came up with the idea, built, tuned, and deployed it, and then a few months later an almost-identical feature appeared in ChatGPT.
In our version, there was no way to clear all the memories just by asking the AI to do it — you had to actually go into the settings UI. Which looks, well, a lot like the OpenAI one. One minor but key difference: we made the individual memories text-editable. As well as deleting a memory, you could edit it: expand, correct, rephrase, or delete parts of it. Or indeed substitute an entirely different memory in its place.
Most of the tuning work to make this mechanism work well was defining for the LLM what sorts of things to remember and what not to remember, plus what level of detail to summarize things at (and experimenting and testing how well this worked in practice). For example, at least in a US context, most users regard medical information about themselves as extremely sensitive, so we did a bunch of work to minimize the tendency of the system to spontaneously memorize medical facts about users.
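To make the shape of that tuning concrete, here is a minimal sketch (not our actual prompt or code) of the kind of extraction instructions and post-filtering involved. The prompt wording, the category lists, the `SENSITIVE_TERMS` set, and the `filter_sensitive` helper are all illustrative assumptions, not a description of the deployed system.

```python
# Minimal sketch of an LLM memory-extraction setup: a prompt that spells out
# what to remember vs. not remember, plus a crude keyword post-filter as a
# second line of defense. All names and wording here are hypothetical.

MEMORY_EXTRACTION_PROMPT = """\
From the conversation below, extract at most 3 short facts worth remembering
about the user for future sessions.

Remember: stable preferences (tone, formats, tools), ongoing projects,
professional context the user volunteers.

Do NOT remember: medical or health information, financial details,
anything the user asks to keep private, one-off task details.

Write each memory as a single declarative sentence.
"""

# Obvious sensitive terms to screen out of whatever the extractor returns.
SENSITIVE_TERMS = {"diagnosis", "medication", "prescription", "symptom", "therapy"}


def filter_sensitive(memories: list[str]) -> list[str]:
    """Drop any extracted memory that mentions an obviously sensitive term."""
    return [
        m for m in memories
        if not any(term in m.lower() for term in SENSITIVE_TERMS)
    ]


if __name__ == "__main__":
    # `candidate_memories` stands in for whatever the extraction LLM returned.
    candidate_memories = [
        "The user prefers concise answers with code examples.",
        "The user mentioned a new medication for migraines.",  # should be dropped
    ]
    print(filter_sensitive(candidate_memories))
```

In practice the interesting work is in the prompt and the evaluation loop, not the keyword filter; the filter is only there to catch the extractor's occasional misses.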
Based on the experience described in this post, I would prefer a system like You.com's, where the AI doesn't get a chance to deceive users into retaining memories. I would prefer even more that scheming be solved in the model itself.