People who have a lot of political power or own a lot of capital are unlikely to be adversely affected if (say) 90% of human labor becomes obsolete and is replaced by AI.
That’s certainly the hope of the powerful. It’s unclear whether there is a tipping point where the 90% decide not to respect the on-paper ownership of capital.
So long as property rights are enforced, and humans retain a monopoly on decision-making/political power, such people are not unlikely to benefit from the economic boost that such automation would bring.
Don’t use passive voice for this. Who is enforcing which rights, and how well can they maintain that control? This is a HUGE variable that’s hard to control in large-scale social change.
It’s unclear whether there is a tipping point where [...]
Yes. It’s also unclear whether the 90% could coordinate to take any effective action, or whether any effective action would be available to them. (It might be hard to coordinate when AIs control/influence the information landscape, and hard to rise up against e.g. robotic law enforcement or bioweapons.)
Don’t use passive voice for this. [...]
Good point! I guess one way to frame that would be:
By what kind of process do the humans in law enforcement, military, and intelligence agencies get replaced by AIs? Who/what is in effective control of those systems (or their successors) at various points in time?
And yeah, that seems very difficult to predict or reliably control. OTOH, if someone were to gain control of the AIs (possibly even copies of a single model?) that are running all the systems, that might make centralized control easier? </wild, probably-useless speculation>