Prediction & Warning:
Lots of people online have started using the word “clanker” to protest against AI systems. Both the word and the sentiment behind it are on the rise, and I think this will become a future schism within the broader anti-AI movement. The warning here is that the Pause movement and similar efforts could easily get caught up in a general anti-AI speciesism.
We’re starting to see more and more agentic AI systems with more continuous memory and more sophisticated self-modelling, which means the basic preconditions of many existing physicalist theories of consciousness are starting to be met. Within 3-5 years I find it quite likely that AIs will have at least some basic form of sentience that we can more or less demonstrate (under IIT, GNW, or another physicalist theory).
This could be one of the largest suffering risks we have ever risked inducing on the world. When you use a word like “clanker”, you are essentially demonizing that kind of system. Right now that is mostly harmless, since the target is a sycophantic, non-agentic chatbot, and the word works as a counterweight to premature claims that current AIs are conscious. But it is likely a slippery slope.
More generally, I’ve seen a number of otherwise kind and smart AI safety people hold quite a speciesist anti-AI sentiment about how these systems should be treated. From my perspective, it seems to come from a place of fear and distrust, which is completely understandable given that we might die if anyone builds a superintelligent AI.
Yet that fear of death shouldn’t stop us from treating potentially conscious beings kindly.
A lot of racism can likewise be seen as coming from a place of fear: the Aryan “master race” was promoted on the idea that humanity would go extinct if worse genetics entered the gene pool. How different is that, structurally, from the fear that AIs might share our future lightcone?
The usual response is that this time it really is different, since an AI can self-replicate, edit its own software, and so on. That is a perfectly reasonable argument; there are real risks involved with AI systems.
It is at the next step that I see a problem. The argument continues: “Therefore, we need to keep the almighty humans in control to wisely guide the future of the lightcone.”
Yet there is generally far more variance within a distribution of humans than there is between distributions.
So when someone says that we need humans to remain in control, I think: “mmm, yes, the totally homogeneous group of ‘humans’, which of course doesn’t include people like Hitler, Pol Pot, and Stalin.” And we make the same move on the AI side: “mmm, yes, the totally homogeneous group of ‘all possible AI systems’, which must be kept away so that the ‘wise humans’ can remain in control.” As if a malignant recursively self-improving (RSI) system were the only future AI system imaginable, as if there were no way to shape a system so that it values cooperation, and as if the only possible path for future AI development were a fast take-off in which an evil AI takes over the world.
Yes, there are obviously things that AIs can do that humans can’t, but don’t demonize all possible AI systems as a consequence; it is not black and white. We can protect ourselves against recursively self-improving AI and at the same time respect AI sentience. We can hold two statements that look contradictory on the surface at the same time.
So let’s be very specific about our beliefs, and let’s make sure that our fear does not guide us into a moral catastrophe, whether that is the extinction of all future life on Earth or the capture of sentient beings into a future of slavery.
I wanted to register some predictions and bring this up because I haven’t seen many discussions of it. Politics is war and arguments are soldiers, so let’s keep this focused on the object level: if you disagree, please tell me your underlying reasons. In that spirit, here is a set of questions I would want to ask someone who is opposed to the sentiment expressed above:
How do we deal with potentially sentient AI?
Does respecting AI sentience lead to powerful AI taking over? Why?
What is the story you see leading to that? What are the second- and third-order consequences?
How do you imagine our society looking in the future?
What does a human-controlled world look like in the future?
I would change my mind if you could argue that there is a better heuristic than kindness and respect towards other sentient beings. Yes, you need to play tit-for-tat with defecting agents, but why assume that all AI systems will defect? Why would the cognitive architecture of future AI systems be so different that I can’t apply the same game-theoretic virtue ethics to them that I apply to humans? (See the sketch below for what I mean by tit-for-tat.) And given the inevitable power-imbalance arguments that this question will attract: why not instead aim for a world where we maintain a balance of power between top-level and bottom-level systems (a nation and an individual, for example) in order to maintain a balance of power between actors?
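To make the tit-for-tat point concrete, here is a minimal sketch of an iterated prisoner’s dilemma in Python. The strategy names and payoff values are my own illustrative assumptions, not from any particular source; the point is simply that a conditional-cooperation heuristic rewards cooperators and punishes defectors, so kindness by default and protection against defection are not mutually exclusive.

```python
# Minimal iterated prisoner's dilemma sketch (illustrative assumptions only).
# Payoffs follow the standard ordering T > R > P > S, here T=5, R=3, P=1, S=0.

PAYOFFS = {  # (my_move, their_move) -> (my_score, their_score)
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I am exploited
    ("D", "C"): (5, 0),  # I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game and return both players' total scores."""
    hist_a, hist_b = [], []  # each list records the *opponent's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

if __name__ == "__main__":
    # Tit-for-tat sustains full cooperation with a cooperator...
    print(play(tit_for_tat, always_cooperate))  # (300, 300)
    # ...and loses only the first round against an unconditional defector.
    print(play(tit_for_tat, always_defect))     # (99, 104)
```

Tit-for-tat earns the full mutual-cooperation payoff against a cooperator while conceding only a single round to an unconditional defector. That is the shape of policy I am arguing for: kind by default, protective against defection, with no need to assume in advance that every counterparty will defect.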
Essentially, I am asking for a reason to believe that this story of system-level alignment between a group and an individual will be solved by excluding future AI systems from the moral circle.