This is actually pretty cool! It feels like it's doing the type of reasoning that might result in a critical insight, and may even be one itself. It's toward the upper tail of the distribution of research I've read by people I'm not already familiar with.
I think there are big challenges to this solving AGI alignment, including that this restriction probably bounds the AI's power a lot. Still, it feels like a neat idea, and I hope you continue to explore the space of possible solutions.