Max Autonomy

I would like to raise a discussion topic in the spirit of trying to quantify risk from uncontrolled/unsupervised software.

What is the maximum autonomy that has been granted to an algorithm, according to your best estimates? What is the likely trend in the future?

The estimates could be in terms of money, human lives, processes, etc.

Another estimate could be the time it takes for a human to step into the process and say, "This isn't right."

A high-speed trading algorithm has a lot of money on the line, but a drone might have lives on the line.

A lot of business processes might be affected by data coming in via an API from a system built on slightly different assumptions, resulting in catastrophic events, e.g. the 2010 Flash Crash: http://en.wikipedia.org/wiki/2010_Flash_Crash

The reason this topic might be worth researching is that it offers a relatively easy way to communicate the risk of AGI. Many people may hold an implicit assumption that whatever software is deployed in the real world, there are humans to counterbalance it. For them, empirical evidence that they are mistaken about the autonomy given to present-day software may shift beliefs.

EDIT: formatting