It may seem unreasonable within the current paradigm, but I think it’s necessary to reach if we get strong superintelligence. You need a system that can’t be made to destroy the entire system, if you want the whole system to remain undestroyed indefinitely.
You’re right that I didn’t explain why each framework fails to plausibly scale to very strong models. Maybe that’s worth its own post, because there are a lot of them, and each has limits you need to go a bit into the weeds to see.