[Question] How are you approaching cognitive security as AI becomes more capable?

I’m worried about how increasingly capable AI could hijack my brain.

Already:

  • LLM chatbots have reportedly driven some users into psychosis.

  • AI-generated content racks up enormous numbers of views.

  • Voice cloning allows scammers to impersonate loved ones, bosses, etc.

  • Engagement from AI accounts is difficult to distinguish from genuine user engagement on social media.

And it seems likely that things will get worse. AI will become better at manipulating me into doing what it or its creator wants: spending my money, time, and influence in ways that go against my best interests. This could easily involve leading me into addiction or inducing psychoses of its choosing.

I want to avoid these outcomes, so what steps should I take?

Initial thoughts:

  • avoid opaque algorithmic feeds

  • take a structured approach to my use of LLMs

  • take a cautious approach to interacting with anyone online
