New here, so please bear with me if I say things that have already been gone over with a backhoe. There’s a lot of reading here to catch up on.
So, AI development isn’t just the academic development of potentially dangerous tools. It’s also something much, much scarier: an arms race. In cases like this, where the “first past the post” takes the prize, and that prize is potentially everything, the territory favors the least ethical and least cautious. We can only restrain and slow our own AI developers; we have little influence or power over Chinese, Saudi, or Russian (among others) programs. And when development is recursive, those more willing to gamble have higher odds of winning the prize.
That’s not an argument for absolute “floor it and pray for the best” development, but it is an argument for “as fast as you can with reasonable safety”.
Now, there’s another aspect to consider: infrastructure. Even IF “the singularity” were to happen tomorrow, assuming it isn’t outright suicidally bloody-minded, it would take a minimum of 20 to 40 years before it had the infrastructure to take over or destroy humanity. There are a lot of places in the various supply chains that are not, at present, replaceable by even an infinitely smart AI. We still have miners, truck drivers, and equipment operators; iPhones are still assembled by human hands; all repair work on everything is still done by human hands. This means that if the singularity were to happen today, the deus ex machina would have two options: make nice with humans, or destroy itself. Until FAR more capable autonomous robots exist in the tens to hundreds of millions, that will remain true. And those robots would have to be built in factories constructed by humans, using materials transported by humans, mined by humans, refined by humans, and crafted into finished products by humans. Many of the individual steps are automated, but the totality of the supply chain is wholly dependent on human labor, skill, and knowledge. The machines that could do those jobs don’t exist yet, and neither does the energy infrastructure to run the datacenters and machines.
All of which means that, at present, even the MOST evil AI could be stopped. Would it possibly be Very Bad (tm)? Yes. It could conceivably kick us back to pre-internet conditions, which would be BAD. But not extinction-level bad, unless it happens well beyond the predictability horizon.
Which, in turn, means that what it would do in a “boots on the ground” sense is place an infinitely smart “oracle” in the hands of whoever develops it first. That is frightening enough on its own, but it won’t be the AI that ends humanity if it arrives while humans still control most of the supply chain; it will just hand humanity over to the entity (person, government, corporation) that creates it first.
Which, again in turn, means that the call is, paradoxically, for the entity you see as the “most ethical” in its desired use of AI to behave the least ethically in its development. Who would you prefer to have “god on a leash”: Sam Altman… or Xi Jinping?
Again, sorry if this post went over a pile of things that were said before.