Just a random thought: aren’t corporations superhuman goal-driven agents, AIs, albeit using human intelligence as one of their raw inputs? They seem like examples of systems we have created that have come to control our behaviour, with both positive and negative effects. Does this tell us anything about how powerful electronic AIs could be harnessed safely?
You seem to be operating on the assumption that corporations have been harnessed safely. This doesn’t look true to me. Even mildly superhuman entities with significantly superhuman resources but no chance of recursively self-improving are already running away with nearly all of the power, resources, and control of the future.
Corporations optimize profit. Governments optimize (among other things) the monopoly of force. Both of these goals exhibit some degree of positive feedback, which explains how these two kinds of entities have developed some superhuman characteristics despite their very flawed structures.
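To make the positive-feedback point concrete, here is a toy model (not from the original comment; all numbers and names are illustrative) contrasting an agent whose output can be reinvested into pursuing its own goal with one whose capability is fixed. The feedback case compounds, which is the mechanism being gestured at for both profit and the monopoly of force.

```python
# Toy illustration of positive feedback in goal pursuit. All parameters
# (initial capital, yield, reinvestment rate) are made up for the sketch.

def compounding(capital, yield_rate, reinvest_rate, steps):
    """Each step, the goal's output (profit) is partly reinvested
    into the capability that produces it -- a positive feedback loop."""
    history = [capital]
    for _ in range(steps):
        profit = capital * yield_rate
        capital += profit * reinvest_rate  # the feedback step
        history.append(capital)
    return history

# No feedback: capability grows by a fixed increment per step (linear).
fixed = [100 + 10 * t for t in range(11)]

# Feedback: 20% yield, 80% reinvested, so capital multiplies by 1.16 per step.
fed_back = compounding(100, yield_rate=0.2, reinvest_rate=0.8, steps=10)

print(f"after 10 steps: fixed={fixed[-1]:.0f}, feedback={fed_back[-1]:.0f}")
```

Even with modest per-step rates, the feedback trajectory overtakes the linear one and the gap widens without bound, which is why entities built around such goals accumulate outsized power relative to their structural flaws.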
Since corporations and governments are now superhuman, it seems likely that one of them will be the first to develop AI. Since they are clearly not 100% friendly, it is likely that they will not have the necessary motivation to do the significantly harder task of developing friendly AI.
Therefore, I believe that one task important to saving the world is to make corporations and governments more Friendly. That means engaging in politics; specifically, in meta-politics, that is, the politics of reforming corporations and governments. On the government side, that means things like reforming election systems and campaign finance rules; on the corporate side, that means things like union regulations and standards of corporate governance and transparency. In both cases, I’m acutely aware that there’s a gap between “more democratic” and “more Friendly”, but I think the former is the best we can do.
Note: the foregoing is an argument that politics, and in particular election reform, is important in achieving a Friendly singularity. Before constructing this argument and independently of the singularity, I believed that these things were important. So you can discount these arguments appropriately as possible rationalizations. I believe, though, that appropriate discounting does not mean ignoring me without giving a counterargument.
Second note: yes, politics is the mind-killer. But the universe has no obligation to ensure that the road to saving the world does not run through any mind-killing swamps. I believe that here, the ban on mind-killing subjects is not appropriate.
I agree with your main points, but it’s worth noting that corporations and governments don’t really have goals—people who control them have goals. Corporations are supposed to maximize shareholder value, but their actual behavior reflects the personal goals of executives, major shareholders, etc. See, for example, “Dividends and Expropriation,” American Economic Review 91: 54–78. So one key question is how to align the interests of those who actually control corporations and governments with those they are supposed to represent.
Yes. And obviously corporations and governments have multiple levels on which they’re irrational and don’t effectively optimize any goals at all. I was skating over that stuff to make a point, but thanks for pointing it out, and thanks for the good citation.
They seem like examples of systems that we have created that have come to control our behaviour, with both positive and negative effects? Does this tell us anything about how powerful electronic AIs could be harnessed safely?
Not any more so than any other goal-driven systems we have created that have come to influence our behavior, which includes… every social and organizational system we’ve ever created (albeit with some following quite poorly defined goals). Corporations don’t seem like a special example.
So the initial goals mostly define the future actions of a goal-driven organization. That sounds like an obvious statement, but every such system has had humans embedded in it, so human goals are bound to either leak into or be built into the organization’s goals, producing something with human flaws but superhuman abilities. A corporation, a church, a fraternity, a union, or a government all have self-preservation built in from the start, and that is the root cause of their eventual corruption: they sacrifice other goals, such as ethics, because without self-preservation the other goals cannot be accomplished.

Humans have a biological imperative for self-preservation, refined over billions of years. But why does an AI have to have a sense of self-preservation at all? A human understands what he is, has intelligence, and wants to continue his life. Those (nearly) always go together in humans, but one doesn’t follow from the others. Especially given the digital, replicable nature of the technology, self-preservation needn’t even be a priority for a self-aware non-biological entity.
One difference is that corporations have little chance of leading to an Intelligence Explosion or FOOMing (except by creating AI themselves).