A couple notes:
I expect future AI to be closed-loop
Closed-loop AI is more dangerous than open-loop AI
That said, closed-loops allow the possibility of homeostasis, which open-loop AI does not
I agree that homeostatic processes, specifically negative-feedback loops, are why ~everything in the universe stays in balance. If positive feedback weren’t checked, there wouldn’t be anything interesting in the world.
AI is moving towards agents. Agents are, by their nature, homeostatic processes, at least while they are pursuing a goal.
Even if we can’t align open-loop systems like LLMs, maybe we can align closed-loop systems by preventing runaway positive feedback loops.
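The contrast between the two feedback regimes can be made concrete with a toy simulation (a hypothetical illustration, not anything from the notes above): the same gain, applied as negative feedback, pulls a state variable toward a set point, while applied as positive feedback it compounds the error without bound.

```python
# Toy one-dimensional feedback loop (illustrative sketch, not a model of any
# real AI system). Negative feedback opposes the error -> homeostasis;
# positive feedback amplifies the error -> runaway.

def simulate(x0, gain, setpoint=0.0, steps=50, feedback="negative"):
    """Iterate a simple feedback update and return the final state."""
    x = x0
    for _ in range(steps):
        error = x - setpoint
        if feedback == "negative":
            x -= gain * error  # correction opposes the error
        else:
            x += gain * error  # correction reinforces the error
    return x

# Same starting state and gain in both regimes.
stable = simulate(10.0, gain=0.2, feedback="negative")
unstable = simulate(10.0, gain=0.2, feedback="positive")

print(f"negative feedback after 50 steps: {stable:.6f}")   # settles near 0
print(f"positive feedback after 50 steps: {unstable:.1f}") # explodes
```

The "alignment" framing in the note maps onto the stability condition here: as long as the update opposes the error, the loop is self-correcting regardless of where it starts.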