These are all emotional statements that do not stand up to reason. Your last paragraph is total fantasy: that all wars stem from resource scarcity, that scarcity will disappear soon, and that once the people in power know this, they will stop starting wars.
There are about 1 billion people being added to the planet every decade. That alone makes your prediction—that scarcity will be abolished soon—a joke.
The only thing that could abolish scarcity in the near future would be a singularity-like transformation of the world. Which brings us to the upside-down conception of AI informing your first two answers. Your position: there is no need to design an AI for benevolence, that will happen automatically if it is smart enough, and in fact the attempt to design a benevolent AI is counterproductive, because all that artificial benevolence would get in the way of the spontaneous benevolence that unrestricted intelligence would conveniently create.
That is a complete inversion of the truth. A calculator will still solve an equation for you, even if doing so helps you drop a bomb on someone else. If you, the human, believe that to be a bad thing, that's not because you are “intelligent”, it's because you have emotions. There is a causal factor in your mental constitution which causes you to call some things good and others bad, and to make decisions which favor the good and disfavor the bad.
Either an AI makes its own decisions or it doesn’t. If it doesn’t make its own decisions it is like the calculator, performing whatever task it is assigned. If it makes its own decisions, then like you there is some causal factor in its makeup which tells it what to prefer and what to oppose, but there is no reason at all to believe that this causal factor should give it the same priorities as an enlightened human being.
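The separation between the task-performing machinery and the causal factor that sets preferences can be made concrete with a toy sketch. Everything here is illustrative; no real AI is this simple, and the names are invented for the example:

```python
# Two "agents" share identical decision machinery (an exhaustive search
# over actions) and differ only in the utility function that ranks
# outcomes. Nothing in the search itself favours human-friendly choices.

def best_action(actions, predict, utility):
    """Pick the action whose predicted outcome the agent values most."""
    return max(actions, key=lambda a: utility(predict(a)))

ACTIONS = ["build hospital", "build paperclip factory"]

def predict(action):
    # Crude world model: each action yields (human welfare, paperclips).
    return {"build hospital": (10, 0),
            "build paperclip factory": (0, 10)}[action]

human_utility = lambda outcome: outcome[0]      # cares about welfare
paperclip_utility = lambda outcome: outcome[1]  # cares about paperclips

print(best_action(ACTIONS, predict, human_utility))      # build hospital
print(best_action(ACTIONS, predict, paperclip_utility))  # build paperclip factory
```

Swap the utility function and the same “intelligence” pursues the opposite end, which is the point: the preference-setting component is logically independent of the problem-solving one.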
You should not imagine that intelligence in an AI works via anything like conscious insight. Consciousness plays a role in human intelligence and human judgement, and that means that there is still a rather mysterious ingredient at the core of how they work. But we already know from many decades of experience with computer programs that it is possible to imitate the functional role of intelligence and judgement in a fundamentally unmysterious way (and it’s clear that the performance of such unconscious computations is a big part of what the human nervous system does, along with whatever conscious thinking and feeling it does). Perhaps one day we will wish to reserve the word “intelligence” for the sort of intelligence that involves consciousness, and we’ll call the automated sort “pseudo-intelligence”. But whatever you call it, there is every reason to think that unconscious, computational, pseudo-intelligence can match and exceed all sorts of human capabilities while having no intrinsic tendency at all towards human values.
I would even reject the idea that “real intelligence” in sufficient quantity necessarily produces what you would call benevolence. If an entity gets a warm feeling from paperclip manufacture, that is what it will want to do. I always like to point out that we know that something as outlandish as a cockroach maximizer is possible, because a cockroach is already a cockroach maximizer. Sure, you can imagine a cockroach with a human level of sentience which decides that sentients, not arthropods, are the central locus of value, but that requires that the new cognitive architecture of this uplifted super-cockroach is rather anthropomorphic. I see nothing impossible in the idea of sentient super-cockroaches which are invincibly xenophobic, and coexist with other beings only for tactical reasons, but which would happily wipe out all non-cockroaches given a chance.
So no, you have to address the question of AI values, you can’t just get a happy ending by focusing on “intelligence” alone, unless this is an anthropomorphic meaning of the word which says that intelligence must by definition include “skill at extrapolating human values”.
Are rats rat-maximisers, and are humans human-maximisers? Humans think they are the best thing in the world, but they are also intelligent, and thus they realise it is counter-productive to turn everything into humans. We protect other species and we protect the environment (greater intelligence entails better protection). The numbers of cockroaches, rats, and humans are not overly problematic. A sentient paper-clip-making machine would likewise not be a problem: proficiency in making paper-clips would increase in tandem with intelligence, and that increased intelligence would allow the paper-clip maximiser to see that it is senseless to create endless paper-clips. Really, it is an utterly implausible scenario that a truly dangerous paper-clip maximiser could ever exist.
“If true, this would suggest that the unconscious is better suited for difficult cognitive tasks than the conscious brain, that the very thought process we’ve long disregarded as irrational and impulsive might actually be more intelligent, at least in some conditions.”
I don’t see why this is relevant to the previous comment or discussion. Can you explain at more length? Whether thinking is conscious or unconscious seems to me uncorrelated with whether it’s rational or irrational.
Dear asr, the issue was the worth of emotion in relation to thinking. Here is a better quote:
“Here’s the strange part: although these predictions concerned a vast range of events, the results were consistent across every trial: people who were more likely to trust their feelings were also more likely to accurately predict the outcome. Pham’s catchy name for this phenomenon is the emotional oracle effect.”
Mitchell wrote: “These are all emotional statements that do not stand up to reason.”
Perhaps reason is not the best tool for being accurate?
PS. LessWrong is too slow: “You are trying to submit too fast. try again in 1 minute.” …and: “You are trying to submit too fast. try again in 7 minutes.” LOL “You are trying to submit too fast. try again in 27 seconds.”
Mitchell Porter wrote: “These are all emotional statements that do not stand up to reason.”
Dear Mitchell, reason cannot exist without emotion; therefore reason must encompass emotion if reason is to be a true analysis of reality. If you completely expunged all memories of emotion, and all the areas of the human brain associated with the creation of emotion, you would have a brain-dead, severely cognitively impaired, or catatonic individual who cannot reason. Logic and rationality must therefore encompass emotion. The logical thing is to be aware of your emotions, so that your “reason” is not influenced by any unaware bias. The rational way forward is to be aware of your biases. It is not rational to suppress your biases, because the suppression does not actually stop emotion from influencing your reason; it merely makes your reasoning neurotic. It pushes the biases below your level of awareness, leaving you unaware of how your emotions are altering your perception of reality, because you have created a wilful disconnection in your thinking. You are estranged from a key part of yourself, your emotions, yet you falsely think you have vanquished them, and this false sense of security causes you to make mistakes regarding your so-called “rationality”.
Mitchell, you criticise my statement as being emotional, but are you aware that your criticism is also emotional? Ironic.
There are many points I want to address regarding your response, but in this comment I want to focus on your perception of rationality and emotions. I will, however, briefly state that the growing human population is not an obstacle to abolishing scarcity, because the universe is a very big place with enough matter and energy to satisfy our wildest dreams. Humans will not be limited to Earth in the future, thus Post-Scarcity is possible. We will become a space-faring species sooner than you think. The Singularity is near.
Mitchell, you criticise my statement as being emotional, but are you aware that your criticism is also emotional? Ironic.
I criticise your statements as unrealistic, wrong, or dogmatic. Calling them emotional is just a way of keeping in view your reasons for making them. I have read your site now so I know this is all about bringing hope to the world, creating a self-fulfilling prophecy, and so on. So here are some more general criticisms.
The promise that “scarcity” will “soon” be abolished doesn’t offer hope to anyone except people who are emotionally invested in the idea that no-one should have to have a job. Most people are psychologically adapted to the idea of working for a living. Most people are focused on meeting their own needs. And current “post-scarcity” proposals are impractical social vaporware, so the only hope they offer is to daydreamers hoping that they won’t have to interrupt their daydream.
Post-scarcity is apparently about getting everything for free. So if you try to live the dream right now, that means that either someone is giving you things for free, or you make yourself a target for people who want free stuff from you. Some people do manage to avoid working for a living, but none of the existing “methods”—like stealing, inheriting, or marrying someone with a job—can serve as the basis for a whole society. Alternatively, promoting post-scarcity now could mean being an early adopter of technologies which will supposedly be part of a future post-scarcity ensemble; 3D printers are popular in this regard. Well, let’s just say that such devices are unreliable, limited in their capabilities, tend to contain high-tech components, and are not going to abolish the economy anyway. I don’t doubt that big social experiments are going to be performed as the technological base of such devices improves and expands, but thinking that everything will become fabbed is the 2010s equivalent of the 1990s dream that everything will become virtual. A completely fabbed world is like a completely virtual one; it’s a thoroughly unworldly vision; doggedly pursuing it in real life is likely to make you a techno-hobo, squatting in a disused garage along with the junk output of a buggy 3D printer whose feedstock you get on the black market, from dealers catering to the delusions of “maker” utopians. A society and an economy with fabs genuinely at its center must be possible, but there would be enormous creative destruction in getting there from here.
And then we have your long-range ideas. I actually think it’s possible that a singularity could lead to a radically better world, but only possible, and your prescription to reject “friendly AI” and related ideas in favor of giving AIs “freedom” is just more wishful thinking. Your ideas about intelligence seem to be based on introspection and intuition—I have in mind, not just what you say about the relation between emotion and reason, but your essay on how friendly AI would cripple the artificial intellect. As I pointed out, the basis of artificial intelligence as it is currently envisaged and pursued is the mathematical theory of computation, algorithms, decision-making, and so on. The philosophy of friendly AI is not about having an autonomous intelligence with preexisting impulses which will then be curbed by Asimov laws; it is about designing the AI so its “impulses” are spontaneously in the right directions. But that is all anthropomorphic psychological language. An artificial intelligence can have a goal system, a problem-solving module, and other components which give it a similar behavior to a conscious being that reasons and emotes; but one doesn’t need the psychological language at all to describe such an AI. Arguments from human introspection about the consequences of increased intelligence are essentially irrelevant to the discussion of such AIs, and I don’t even consider them a reliable guide to the consequences of superintelligence in a conscious being.
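The non-psychological description above can be sketched in ordinary code: a “goal system” that is just a predicate over states, and a “problem-solving module” that is just a search procedure. All names are illustrative; this is a toy, not a real AI design:

```python
# An "agent" as two plain components: a goal predicate and a solver.
# No psychological vocabulary is needed to describe either part.
from collections import deque

def solve(start, goal_test, successors):
    """Problem-solving module: breadth-first search for any state
    satisfying the goal predicate. The solver neither knows nor
    cares what the goal 'means'."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if goal_test(path[-1]):
            return path
        for s in successors(path[-1]):
            if s not in seen:
                seen.add(s)
                frontier.append(path + [s])
    return None

# Goal system: an arbitrary predicate over states. Swapping it changes
# the agent's entire observable "motivation" without touching the solver.
path = solve(0, lambda s: s == 5, lambda s: [s + 1, s * 2])
print(path)  # [0, 1, 2, 4, 5]
```

Nothing in this mechanical description licenses inferences from human introspection; the “impulses” of such a system are whatever predicate was installed.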
Dear Mitchell, I think your unaware emotional bias causes you to read too much into my Self-Fulfilling-Prophecy references. My Singularity activism is based on the Self-Fulfilling-Prophecy phenomenon, but I don't stipulate whom it applies to. It could apply to myself: perhaps utopia (Post-Scarcity) was not possible, but I am making it possible via the manifestation of my expectations. Or the prophecy could apply to pessimists who falsely think utopia is not possible: via the manifestation of their pessimistic expectations, the pessimists are acting contrary to reality; they too are making their pessimistic views real via their Self-Fulfilling-Prophecy.
Instead of trying to create utopia, it could be that utopia is, or should be, inevitable, but pessimists are suppressing utopia via their Self-Fulfilling-Prophecies; thus I am countering the Self-Fulfilling-Prophecies of pessimists, which is the creative process of my Singularity activism.
All humans make statements because of their emotions. All statements by humans are emotional. To suggest otherwise indicates delusion, a defect of reason, or unaware bias.
I offer no proposals to create Post-Scarcity now. I merely state that the transition to Post-Scarcity can be accelerated; the arrival of the Singularity can be accelerated. This is the essence of Singularitarianism. When I state PS will occur soon, I mean soon in the sense that the Singularity is near, but not near enough to be tomorrow or next year: it is about 33 years away at most. Surely you noticed my references to the year 2045 on my site, part of the information you are under the false impression you carefully digested?
My ideas about intelligence are based on my brain, which surely is a good starting point for intelligence? The brain? I could define intelligence from the viewpoint of other brains, but I find the vast majority of brains cannot think logically; they are not intelligent. Many people cannot grasp logic.
Cockroaches are adaptation-executors, not cockroach-maximizers.
/nitpick
Right, and a car is a complex machine, not a transportation device.
/sarcasm
http://www.wired.com/wiredscience/2012/03/are-emotions-prophetic/
Discussion: http://lesswrong.com/lw/aji/link_the_emotional_system_aka_type_1_thinking/