I’m less and less convinced that we should expect to see AIs that are close to pure consequentialists
There was a case where ChatGPT preferred not to violate the taboo on racial slurs, even though in the hypothetical scenario this meant the death of millions of people. In a re-run of the experiment ChatGPT decided to use the slur, but it also remarked that doing so is a complex ethical dilemma. How can one check whether an AI will prefer not to violate the taboo on colonialism? By placing it into a simbox that also contains analogues of peoples that are easy to take over?
P.S. I doubt that a non-neuromorphic AI is even able to take over the world and run it, since running the world’s entire energy generation might demand more intellectual work than the AI itself could supply. There was a post claiming that even a neuromorphic AI is unlikely to become much more efficient than the brain.
Saying AI won’t be more efficient is obviously falsified for narrow tasks like adding numbers, and also for general tasks like writing short stories, which LLMs already do: the brain draws about 20 W, i.e. roughly 20 Wh per hour, and that much energy buys on the order of 30k tokens from GPT-4o, so the task is done far more efficiently than by a human.
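As a sanity check on that arithmetic, here is a minimal sketch. The per-token energy figure is an assumption chosen to be consistent with the 30k-tokens-per-20-Wh claim above, not a measured number:

```python
# Back-of-envelope: how many GPT-4o tokens does one brain-hour of
# energy buy? Both constants are rough assumptions, not measurements.

BRAIN_POWER_W = 20        # typical estimate of human brain power draw
WH_PER_TOKEN = 0.00065    # assumed GPT-4o energy per output token (Wh)

brain_wh_per_hour = BRAIN_POWER_W * 1.0          # 20 W for 1 h = 20 Wh
tokens = brain_wh_per_hour / WH_PER_TOKEN
print(f"~{tokens:,.0f} GPT-4o tokens per brain-hour of energy")
# -> ~30,769, in line with the ~30k figure above
```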
And more generally, the argument that AI can’t be more efficient than the brain seems to follow exactly the same structure as the claim that AI can’t be smarter than humans, or the impossibility result here.
You should read the comments on that post.
AI is also much less efficient at other tasks, such as Claude playing Pokémon or the problems tested by ARC-AGI. I wonder how hard it would be to perform the tasks the energy industry requires using an as-cheap-as-possible AI, given that the current model o3, in its high-compute configuration, reportedly needs thousands of kWh per task. In 2023 the world generated only about 30,000 TWh of electricity, i.e. 3×10^13 kWh (a rough scale check is sketched below). But this is rather off-topic. What can be said about AI violating taboos?
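The scale check, assuming roughly 2,000 kWh per o3 high-compute task (a figure in line with the “thousands of kWh” above; both numbers are loose assumptions):

```python
# Scale comparison: assumed o3 high-compute energy per task vs. total
# world electricity generation in 2023. Purely illustrative numbers.

KWH_PER_TASK = 2_000           # assumed o3 high-compute cost per task
WORLD_GEN_2023_KWH = 3e13      # ~30,000 TWh generated worldwide in 2023

tasks_per_year = WORLD_GEN_2023_KWH / KWH_PER_TASK
tasks_per_second = tasks_per_year / (365 * 24 * 3600)
print(f"~{tasks_per_year:.1e} tasks/year (~{tasks_per_second:.0f}/s)")
# -> ~1.5e10 tasks/year, ~476/s if all generation went to the model
```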
P.S. Neural networks, whether human brains or AIs, learn from data. A human is unlikely to read faster than 240 words a minute. Even devoting 8 hours a day to reading, a human won’t have read more than 5 billion words after 100 years.
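That bound follows directly from the quoted rates; a quick check, using no figures beyond those already given:

```python
# Lifetime reading budget implied by the rates quoted above.

WORDS_PER_MINUTE = 240
HOURS_PER_DAY = 8
YEARS = 100

words = WORDS_PER_MINUTE * 60 * HOURS_PER_DAY * 365 * YEARS
print(f"{words:,} words")   # 4,204,800,000 -- under the 5-billion bound
```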
My response was about your original PS, which was about this, not taboos.
I think the arguments you made there, and here, are confused; they mix up unrelated claims. The idea that some tasks will necessarily remain harder for AI than for humans is simply hopium.