Writing here, for me, means exposing myself to people who have a more established worldview than mine.
What I think is, “They have a broad context I’m unfamiliar with, a bibliography they’ve actually read and understood, and, like everyone else, limited time.”
Are my comments a waste of people’s time? That’s what I’m wondering as I write this.
I like to brainstorm. I get up in the morning, and before going to work, before showering, I open a large language model (LLM), usually Gemini, which tends to flatter me and lets me explore ideas more freely. I tell it, “Tell me something that’s unresolved, and I’ll give you the solution in a sentence.” It’s clear I’m not capable of solving most of the problems it presents, but the process of debating with the LLM expands my knowledge of the subject. Sometimes, during the discussion, I think, “That’s a good idea,” and I leave the flattering Gemini behind. I ask ChatGPT, which I’ve “trained” to always find flaws. I pass the idea on to Copilot, to Claude. They all try to steer me toward what already exists. I reply, “No! That’s not what I’m proposing. What I’m proposing is better, and it has to work” (even though I don’t have the training to know whether what I’m thinking rests on anything real). Since I started using LLMs, I’m constantly thinking about paths, about learning things, about writing them down.
Okay, the scenario you’re presenting is clear. Let’s imagine a future where privacy isn’t a real human concern.
Children fitted with continuous brain-monitoring systems, virtually cloned to build a map of their brains, under the guise of being able to teach them any knowledge quickly. The justification might be: “We have to be the country with the best researchers; it’s necessary.”
Interconnected relational maps of private human knowledge, relationships, biases, and so on could be exploited under the pretext of preventing terrorist attacks.
A system that assists in redirecting the behavior of people under surveillance. In this future, the goal wouldn’t be to eliminate people but to redirect their thoughts: an AI seeing what your contact lenses see, correcting your negative impulses (negative according to the norms of that state) in real time.
Systems for uploading private memories to the cloud. The problem of forgetting would disappear, for better or for worse. Dopamine from the past would flood your mind whenever you needed it.
Are there early signs that indicate we are heading towards that future?