The short answer is that this is just an intuition for a possible solution to the AI safety problem, and I’m currently working on formalising it. I’ve received valuable feedback that will help me move forward, so I’m glad I shared the raw ideas—though I probably should have emphasised that more. Thanks!