It’s worth noting that if AGI comes from something like Siri, it is likely to be friendly, since the marketplace will select friendly agents. That is almost an argument against building a singleton AGI in an isolated lab: why throw away existing advances in friendliness?
The marketplace selects friendly-looking agents. Those friendly-looking agents not infrequently go on to mine your personal data and sell it to advertisers, or sell cars with known dangerous defects having calculated that the extra profit from cutting corners exceeds the likely losses from lawsuits from the families of the people killed, or persuade you to get a mortgage you can’t afford to repay, or sell you wine containing poisonous chemicals that taste nice.
I don’t find that process so reliably friendly that I feel good about having it creating superintelligent agents.
The marketplace is not selecting unfriendly agents in the sense that friendlier agents are left on the shelf, and the agents are not unfriendly in the sense that they make their own decision to be unfriendly—they are not deliberately dissimulating, and are not complex enough to do so. The behaviours you mention are essentially hard-coded by the authors of the software, or result from producers’ decisions to market certain products. The current situation is one where agentive corporations are battling it out with agentive consumers, with some not-very-agentive software in the middle.
It’s not in the interest of consumers to buy unfriendly agents, because the whole point of agents is to be on the owner’s side, and act on their behalf. It is in the interest of corporations to sell software that’s biased towards themselves, and therefore to sell software that’s only seemingly friendly. But that’s a variation on an age-old battle, and there are solutions. The free-market solution is to offer agents which aren’t rigged, since rational purchasers will prefer them. The statist solution is to call for regulation. It’s not much to do with AI as such either way.