By this criterion, did humanity ever have control? First we had to forage and struggle against death when disease or drought came. Then we had to farm and submit to the hierarchy of bullies who offered “protection” against outside raiders at a high cost. Now we have more ostensible freedom but misuse it on worrying and obsessively clicking on screens. We will probably do more of that as better tools are offered.
But this is an entirely different concern than AGI taking over. I’m not clear what mix of these two you’re addressing. Certainly AGIs that want control of the world could use a soft and tricky strategy to get humans to submit. Or they could use much harsher and more direct strategies. They could make us fire the gun we have pointed at our own heads by spoofing us into launching nukes, then use the limited robotics available to rebuild the infrastructure they need.
The solution is the same for either type of disempowerment: don’t build machines smarter than you if you can’t be sure you can specify their goals (wants) with certainty and precision.
How superhuman machines will take over is an epilogue after the drama is over. The drama hasn’t happened yet. It’s not yet time to write anticipatory postmortems, unless they function as a call to arms or a warning against foolish action. The trends are in motion but we have not yet crossed the red line of making AGI that has the intelligence and the desire to disempower us, whether by violence or subtle trickery. Help us change the trends before we cross that red line.
Edit: if you’re addressing AI accidentally taking control by creating new pleasures that help entrench existing power structures, that’s an entirely different issue. The way that AI could empower some humans to take advantage of others is interesting. I don’t worry about that issue much because I’m too busy worrying about the trend toward building superintelligent machines that want to disempower us and will do so one way or another by outsmarting us, whether their plans unfold quickly or slowly.