It is somewhat alarming that many participants here appear to accept the notion that we should cede political decision-making to an AGI. I had assumed it was a widely held view that such a course of action should be avoided, yet it appears I may be in the minority.
MattJ
Someone used the metaphor of Plato's cave to describe LLMs. The LLM is sitting in cave 2, unable to see the shadows on the wall, able only to hear the voices of the people in cave 1 talking about the shadows.
The problem is that we in cave 1 are not only talking about the shadows but also telling fictional stories, and it is very difficult for someone in cave 2 to tell the difference between fiction and reality.
If we want to give a future AGI the responsibility to make important decisions, I think it is necessary that it occupy a space in cave 1 rather than just being a statistical word predictor in cave 2. It must be more like us.
Running simulations of other people's preferences is what is usually called "empathy", so I will use that word here.
Having empathy for someone, or an intuition about what they feel, is a motivational force to do good in most humans, but it can also be used to get better at deceiving and taking advantage of others. Perhaps high-functioning psychopaths work in this way.
Building an AI that knows what we think and feel, but that has no moral motivation, would just lead to a world of superintelligent psychopaths.
P.S. I see now that kibber is making the exact same point.
Consciousness is a red herring. We don't even know whether human beings are conscious. You may have a strong belief that you yourself are a conscious being, but how can you know whether other people are? Do you have a way to test it?
A superintelligent, misaligned AI poses an existential risk to humanity quite independently of whether it is conscious. Consciousness is an interesting philosophical topic, but it has no relevance to anything in the real world.
I don't think the response to Covid should give us reason to be optimistic about our effectiveness at dealing with the threat from AI. Quite the opposite. Many of the measures taken were known from the start to be useless, like masks, while others were ineffective or harmful, like shutting down schools or vaccinating young people who were not at risk of dying from Covid.
Everything can be explained by the incentives our politicians face: they want to be seen to take important questions seriously while not upsetting their donors in the pharma industry.
I can easily imagine something similar happening if voters become concerned about AI: some ineffective legislation dictated by Big Tech.