Yes, definitely. The main regions of interest to us are from par-human up. Returns are almost certainly not consistent across scales. But what really matters for x-risk is whether they are positive or negative around current or near-future ML models: can existing models, or the AGIs we create in the next few decades, self-improve to superintelligence or not?
I’m curious what you think about my post expressing scepticism about the relevance of recursive self-improvement to the deep learning paradigm.