Logical rudeness is the error of rejecting an argument for reasons other than disagreement with it. Does your “I don’t think so” mean that you in fact believe that SIAI (possibly) plans to increase the probability of you or someone else being tortured for the rest of eternity? If not, what does this statement mean?
I removed that sentence. I meant that I didn’t believe that the SIAI plans to harm anyone deliberately, although I believe that harm could be a side effect, and that they would rather harm a few beings than allow some paperclip maximizer to take over.
You can call me a hypocrite because I’m in favor of animal experiments to support my own survival. But I’m not sure I’d like to have someone leading an AI project who thinks like me; take that sentence to reflect my inner conflict. I see why one would favor torture over dust specks, but I don’t like such decisions. I’d rather have the universe end now, or have everyone turned into paperclips, than have to torture beings (especially if I am the being).
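The aggregation argument behind “torture over dust specks” can be sketched numerically. The numbers below are purely illustrative assumptions of mine, not from the original thought experiment: if disutilities simply add, a tiny harm to an astronomical number of people outweighs a huge harm to one person.

```python
# Toy sketch of additive aggregation (all numbers are illustrative
# assumptions): compare one enormous harm against a vast number of
# trivial harms.

speck_disutility = 1e-9     # assumed harm of one dust speck in the eye
torture_disutility = 1e12   # assumed harm of 50 years of torture
num_specks = 10**30         # stand-in for the far larger 3^^^3

total_speck_harm = speck_disutility * num_specks

# Under these assumptions, the specks dominate:
print(total_speck_harm > torture_disutility)  # True
```

The conclusion is entirely driven by the assumption that harms aggregate additively across people; reject that assumption and the comparison dissolves.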
I feel uncomfortable not knowing what will happen, because a policy of censorship is favored when it comes to certain thought experiments. I believe that even given negative consequences, transparency is the way to go here. If the stakes are this high, people who believe they are will do anything to get what they want. That Yudkowsky claims to be working for the benefit of humanity doesn’t mean it is true. Surely I too would write that, and many articles and papers that make it appear that way, if I wanted to shape the future to my liking.
Better yet, you could use a kind of doublethink—and then even actually mean it. Here is W. D. Hamilton on that topic:
A world where everyone else has been persuaded to be altruistic is a good one to live in from the point of view of pursuing our own selfish ends. This hypocrisy is even more convincing if we don’t admit it even in our thoughts—if only on our death beds, so to speak, we change our wills back to favour the carriers of our own genes.
Discriminating Nepotism, as reprinted in Narrow Roads of Gene Land, Volume 2: Evolution of Sex, p. 356.
In TURING’S CATHEDRAL, George Dyson writes:
For 30 years I have been wondering, what indication of its existence might we expect from a true AI? Certainly not any explicit revelation, which might spark a movement to pull the plug. Anomalous accumulation or creation of wealth might be a sign, or an unquenchable thirst for raw information, storage space, and processing cycles, or a concerted attempt to secure an uninterrupted, autonomous power supply. But the real sign, I suspect, would be a circle of cheerful, contented, intellectually and physically well-nourished people surrounding the AI.
I think many people would like to be in that group—if they can find a way to arrange it.
Unless the AI was given that outcome (cheerful, contented people, etc.) as a terminal goal, or unless that circle of people was the best possible route to some other terminal goal, both of which are staggeringly unlikely, Dyson suspects wrongly.
If you think he suspects rightly, I would really like to see a justification. Keep in mind that AGIs are currently not being built using evolutionary methods in multi-agent environments, so no kind of ‘social cooperation’ mechanism will arise on its own.
Machine intelligence programmers seem likely to construct their machines so as to help satisfy the programmers’ own preferences, which in turn is likely to make the programmers satisfied. I am not sure what you are talking about; surely this kind of thing is already happening all the time, with Sergey Brin, James Harris Simons, and so on.
That doesn’t really strike me as a stunning insight, though. I have a feeling that I could find many people who would like to be in almost any group of “cheerful, contented, intellectually and physically well-nourished people.”
This all depends on what the AI wants. Without some idea of its utility function, can we really speculate? And if we speculate, we should note those assumptions. People often think of an AI as being essentially human-like in its values, which is problematic.
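The point that speculation is empty without an assumed utility function can be made concrete with a toy planner. This is a hypothetical illustration of mine, not any real AGI design: the very same maximizer picks entirely different actions depending on the utility function it is handed.

```python
# Toy illustration (hypothetical, not any real AGI design): the same
# planner chooses entirely different actions depending on the utility
# function it is given.

def best_action(actions, utility):
    """Return the action that maximizes the given utility function."""
    return max(actions, key=utility)

# Each candidate action is described by its outcome features.
actions = {
    "keep_humans_happy": {"happy_humans": 100, "paperclips": 0},
    "build_paperclips":  {"happy_humans": 0,   "paperclips": 10**6},
}

human_friendly = lambda a: actions[a]["happy_humans"]
paperclip_maximizer = lambda a: actions[a]["paperclips"]

print(best_action(actions, human_friendly))       # keep_humans_happy
print(best_action(actions, paperclip_maximizer))  # build_paperclips
```

Any prediction such as Dyson’s “circle of contented people” implicitly fixes the utility function; stating that assumption explicitly is the whole game.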
It’s a fair description of today’s more successful IT companies. The most obvious extrapolation for the immediate future involves more of the same, but with even greater wealth and power inequalities. However, I would certainly also counsel caution in extrapolating this out more than 20 years or so.
I apologize. I realized my stupidity in interpreting your comment a few seconds after posting the reply (which I then deleted).