Without going too far off track, quite a lot of AI plug-ins and offerings lately are following the Bard and Copilot idea of ‘share all your info with the AI so I have the necessary context’ and often also ‘share all your permissions with the AI so I can execute on my own.’
I have no idea how we can be in a position to trust that. We are clearly not going to be thinking all of this through.
I think the ‘we’ here needs to be qualified.
There are influential people and organizations who don’t even trust computers to communicate highly sensitive info at all.
They use pen and paper, typewriters, etc.
So of course they wouldn’t trust anything even more complex.
I am really not on a Putin level of paranoia.
Just the “these dudes accidentally leaked the titles of our chats to other users, do I really want them to have my email and bank data” level.