Just wanting to get a contrarian view down here. My biggest complaint is that this scenario, like many others, seems to hinge on a core assumption that ‘with sufficient intelligence, all problems are shallow’. I disagree and believe that ‘dilemmas’ exist, where no matter how smart you are, all available options are bad.
I think this section: “Yet that seems like it could be a dramatic underestimate. Plants and insects often have “doubling times” of far less than a year—sometimes just weeks! Perhaps eventually the robots would be so sophisticated, so intricately manufactured and well-designed, that the robot economy could double in a few weeks (again assuming available raw materials).” is underemphasized. I assert that there is a point where material and energy constraints will absolutely drive outcomes. I further assert that these constraints may prevent the achievement of superintelligence, and that they will prevent the worst takeover scenarios.
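To make the resource point concrete, here is a toy back-of-the-envelope sketch (all numbers are my own and purely illustrative, not drawn from the scenario): even at a doubling time of a few weeks, a finite raw-material budget ends the exponential phase after a couple dozen doublings.

```python
# Toy model: exponential doubling of a "robot economy" against a hard
# raw-material ceiling. All numbers are illustrative assumptions, not forecasts.

def constrained_growth(initial, doubling_time_weeks, material_cap, weeks):
    """Grow exponentially each week, but never past the material budget."""
    size = initial
    trajectory = [size]
    weekly_factor = 2 ** (1 / doubling_time_weeks)
    for _ in range(weeks):
        size = min(size * weekly_factor, material_cap)  # hard physical ceiling
        trajectory.append(size)
    return trajectory

# Even a millionfold material budget is only ~20 doublings: with a 4-week
# doubling time, the exponential phase is over in about 80 weeks.
path = constrained_growth(initial=1.0, doubling_time_weeks=4, material_cap=1e6, weeks=120)
print("weeks to hit the cap:", next(i for i, s in enumerate(path) if s >= 1e6))
```

The point is not the specific numbers but the shape of the curve: the material constraint, not the intelligence, sets the endgame.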
I assert that for much of industry, further efficiency gains are capped by physics, not engineering. I further assert that an ASI will not overcome these physics limits by doing things like inventing teleportation in 2031.
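As a trivial example of the kind of ceiling I mean (my own illustration, just textbook thermodynamics): no heat engine can beat the Carnot bound, no matter how superintelligent the engineer.

```python
# Carnot limit: a physics ceiling that engineering can approach but never pass.

def carnot_efficiency(t_hot_kelvin: float, t_cold_kelvin: float) -> float:
    """Maximum possible efficiency of any heat engine between two reservoirs."""
    return 1.0 - t_cold_kelvin / t_hot_kelvin

# A turbine running at 1700 K and rejecting heat at 300 K can never exceed
# ~82% efficiency. Better engineering narrows the gap to the ceiling, but no
# amount of intelligence moves the ceiling itself.
print(f"Carnot bound: {carnot_efficiency(1700, 300):.0%}")
```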
I assert that capital substitution for labor has predictable economic effects, specifically wealth concentration. If labor income collapses, so does broad consumer demand, and the resulting social and economic dynamics of concentration might choke off investment of resources into further AI development.
We know today that when models are fed their own outputs, collapse happens. This technical problem is not solved. When we hit the outer limits of data availability and synthetic data is required, this may not be a shallow problem. If it turns out that data generation is constrained by the need to make new observations of the natural world, that could impose substantial constraints in terms of time.
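A minimal sketch of the mechanism (a standard toy Gaussian demo of my own, not any real training pipeline): each generation refits a distribution to a finite sample of the previous generation's outputs, so estimation error compounds and the original tails are never recovered.

```python
# Toy model-collapse demo: recursively fit a Gaussian to samples drawn from
# the previous generation's fit. Parameters are illustrative assumptions.
import random
import statistics

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100)]  # "real world" data

for gen in range(1, 101):
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # finite-sample MLE, biased slightly low
    # The next generation trains only on the previous model's outputs.
    samples = [random.gauss(mu, sigma) for _ in range(100)]
    if gen % 20 == 0:
        print(f"gen {gen:3d}: sigma = {sigma:.3f}")
# sigma performs a biased random walk and typically decays well below the true
# value of 1.0; nothing inside the loop can re-anchor the model to reality.
```

Fresh real-world observations are the only thing that breaks the loop, and collecting those takes calendar time no matter how fast the model thinks.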
I don’t see industrial robots replacing ‘skilled workers who have to travel to customer sites’ in the next ten years.
I think white-collar workers will probably continue to exist as ‘blamecatchers’ for the robots in even the most optimistic scenarios. OpenBrain may make a perfect accountant-bot, but it will absolutely find a way to put liability for the inevitable errors in individual cases onto someone other than the company. This role might expand.
In the happy scenario, the outcome is global democracy. But authoritarianism is making a political comeback, and I think a ‘humans win’ scenario is more likely than not to mean ‘a specific group of humans wins, and the rest of the humans are a peasantry which the elites ignore, exploit, keep as pets, or eliminate’.
Technology has swung back and forth between the bronze spear (requires long trade routes, centralized production, and professional soldiers organized into units) and the iron sword (requires an angry person with a hammer and a fire). The dawn of the Iron Age precipitated the Bronze Age Collapse. The invention of the gun overthrew feudal Europe. Modern technology, particularly AI technology, seems to favor long supply chains and tight organization, which is usually associated with tyranny.
Assuming that in the next ten years, there is a massive biological weapon deployment that kills a lot of people...I would be most surprised if an AI did it deliberately, without human direction, to kill humans. I would be slightly less surprised if a particular human ‘peasant’ or otherwise unknown actor did it. I would be utterly unsurprised if the responsible party is tied to a nation-state.
Overall, I am more worried about an effort to censor AI to prevent ‘regular people’ from encountering ‘infohazards and dangerous knowledge’ succeeding than I am about it failing. The oligarchy that controls the censor’s pen will absolutely use these tools to impose an inescapable despotism.