Last night I found myself thinking, “Well, suppose there’s no Singularity coming any time soon. The FAI project will still have gotten a bunch of nerds working together on a project aimed at the benefit of all humanity — including formalizing a lot of ethics — who might otherwise have been working on weapons, wireheading, or something else awful. That’s gotta be a good thing, right?”
Then I realized this sounds like rationalization.
Which got me to thinking about what my concerns are about this stuff.
My biggest AI risk worries right now are more immediate than paperclip optimizers. They’re wealth optimizers, profit optimizers; probably extrapolations of current HFT systems. The goal of such a system isn’t even to make its owners happy — just to make them rich — and it certainly doesn’t care about anyone else. It may not even have beliefs about humans, just about flows of capital and information.
Even assuming that such systems believe that crashing the economy would be bad for their owners, I expect that for the vast majority of living and potential humans, world dominance by such systems would constitute a Bad Ending.
It does not seem to me that bringing about such a Bad Ending would require self-modifying emergent AI, or any exotic technology such as computronium; just the continuation of current trends.
Current HFT systems have little to do with AI. They are basically statistical models of a very narrow slice of reality (specifically, the dynamics of market microstructure) that can forecast those dynamics to some extent.
I contend that those that exist are already a problem.
How? Because they took some money off other speculators? Because some of them went bankrupt?
Most likely because there have been some alarming failures of automated traders, such as the 2010 “Flash Crash” or the April 2013 flash crash caused by a Twitter hoax. From a layman’s perspective, it seems like all the regular problems of speculation with the added benefit of trades taking place faster than any human regulator could react. So far there hasn’t been any serious damage, but it’s not clear to me whether that’s a point in the traders’ favor or just blind luck.
Of course, this isn’t a Friendliness issue so much as a competence one, and I’m fairly sure there isn’t much of an existential risk involved in these programs undergoing an intelligence explosion. So it might not be what the other posters here were thinking of.
Speculators are good for a market—they smooth out price fluctuations and give fundamentals traders better prices. And when they screw up the effect is usually to give money to other people, as with the flash crash. So I don’t see the problem.
You’ll have fun reading Accelerando. The solar system gets gentrified by what are essentially HFTs on steroids, driving up rents on the prime real estate closest to the sun, and thus richest in energy.
Accepting for the sake of comity that the endpoint of those trends is indeed an Ending, are there historical events that you would similarly class as an Ending, or would this Ending be in a class by itself?
One could argue that China’s inward-turning, burn-the-boats collapse around 1500 was a result of similar wealth concentration? Though I don’t know the history in any detail.
I’d compare it to some of the hypothetical sociopolitical risk scenarios in Bostrom’s “Existential Risks”. Bostrom specifically mentions a “misguided world government” (driven by “a fundamentalist religious or ecological movement”) and a “repressive totalitarian global regime” (driven by “mistaken religious or ethical convictions”), but doesn’t mention scenarios driven by business or financial forces.
I’m sorry… this appears to be my evening for just not being able to communicate questions clearly. What I meant by “historical events” is events that actually have occurred in our real history, as distinct from counterfactuals.
Oh. Well, no.