Alignment idea: Myopic AI is probably much safer than non-myopic AI. But it can't accomplish complicated tasks or anything that requires long-term planning. Would it be possible to create a separate AI that can solve only long-term problems and not act on short timescales?
Then use both together? That way we could inspect each long-term plan without the risk of it leading to short-term consequences. And we can iterate on the myopic solutions, or ask the long-term AI about the consequences. There are still risks we might not understand, like johnswentworth's gunpowder example. And the approach is complicated, which also makes it harder to get right.
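To make the proposed split concrete, here is a minimal sketch of the loop it implies: a planner that only reasons and never acts, an executor that only acts on single short steps, and a human inspection point in between. All names (`LongTermPlanner`, `MyopicExecutor`, `Plan`) are hypothetical placeholders, not an existing system.

```python
from dataclasses import dataclass, field


@dataclass
class Plan:
    """A long-horizon goal decomposed into short, inspectable steps."""
    goal: str
    steps: list[str] = field(default_factory=list)


class LongTermPlanner:
    """Reasons over long horizons but never acts; it only emits plans."""

    def propose(self, goal: str) -> Plan:
        # Placeholder: a real system would produce a decomposition here.
        return Plan(goal=goal, steps=[f"first short step toward: {goal}"])

    def predict_consequences(self, step: str) -> str:
        # Placeholder: ask the long-term AI about a step's downstream effects.
        return f"predicted long-term effect of '{step}'"


class MyopicExecutor:
    """Acts only on a single short-horizon step; holds no long-term objective."""

    def execute(self, step: str) -> None:
        print(f"executing: {step}")


def run(goal: str) -> None:
    planner = LongTermPlanner()
    executor = MyopicExecutor()

    plan = planner.propose(goal)
    for step in plan.steps:
        # Human inspection point: review each step and its predicted
        # consequences before anything is actually done in the world.
        report = planner.predict_consequences(step)
        if input(f"{step}\n{report}\napprove? [y/N] ").lower() != "y":
            continue  # reject and iterate on the plan instead of acting
        executor.execute(step)


if __name__ == "__main__":
    run("example long-term goal")
```

The design choice this sketch highlights: the planner's outputs are inert artifacts (plans and predictions) that humans can inspect at leisure, and only the myopic executor ever touches the world, one approved short step at a time.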
Also: this is a bit like how the human brain works, with System 1 and System 2.