The fact that this is sanctioned and encouraged by OpenAI has a massive effect beyond it just being possible.
If it is merely possible, but you have to be smart enough to figure it out yourself, and motivated enough to do it step by step, you are also likely to have the time to wonder why this isn’t offered, and the smarts to wonder whether it is wise, and will either stop or take extreme precautions. The fact that this is being offered gives the illusion, especially if you are not tech-savvy, that someone smart has thought this through and it is safe and fine, even though it intuitively does not sound like it. This means far, far, far more people using it, giving it data, giving it rights, giving it power, making their businesses dependent on it and screaming if the service is interrupted, getting the AI personalised to them to a degree that makes it even more likeable.
It feels surreal. I remember thinking “hey, it would be neat if I could give it some specific papers as context for my questions, and if it could remember all our conversations to learn the technical style of explanation that I like… but this also comes with obvious risks: I’d get my errors mirrored back to me, it would open angles of attack, it would carry a huge privacy-leak risk, and it would make the AI more powerful, so I see why it hasn’t been done yet, that makes sense”, and then, weeks later, this is just… there. Like someone had the same idea, and then just… didn’t think about safety at all?