Building a civilisation-scale OODA loop for the problem of AGI

You can break down our civilisation's reaction to the problem of AGI into a massive decentralised OODA loop. Each part of the OODA loop is not one person but an aggregate of many people and organisations.

My current major worry is that we do not have a robust loop.

Observe: There are a few major observations that inform our current work: AGI systems might be more powerful than humans and able to make themselves more powerful still via recursive self-improvement (RSI), and we can't control our current reinforcement learning systems.

Orient: This is the philosophy and AI strategy work of FHI and others.

Decide: This is primarily done inside the big foundations and, soon, the governments.

Act: This is OpenAI's work on AI safety for RL, or the instituting of AI policy.

What I think we are not doing is investing much money in the observation phase. There is a fair amount of observation of the RL work, but we have little observation of how much more powerful AGI systems will be, or how much more powerful they could be made via RSI.

One example of the observations we could make would be to try to estimate how much speeding up human cognitive work would speed up science. We could look at science from a systemic perspective and see how long the various steps take. The steps might be:

  1. Gathering data

  2. Analysing data

  3. Thinking about the analysis

Each of these will have a human component and a non-human component (e.g. instruments collecting the data, or computational analysis of it).

If we could get better observations of how much time each component takes, we could estimate how quickly the whole process could be sped up.

Similar observations might be made for programming, especially programming of machine learning systems.

I will try to write a longer post at some point, fleshing things out more. But I would be interested in other people's ideas on how we could improve the OODA loop for AGI.