Let’s assume you or anyone else really did have a proposed path to AGI/ASI that would be in some important senses safer than our current path. Who is the entity for whom this would or would not be a “viable course?”
A new startup created specifically for the task. Examples: one, two.
Like, imagine that we actually did discover a non-DL AGI-complete architecture with strong safety guarantees, such that even MIRI would get behind it. Do you really expect that the project would then fail at the “getting funded”/”hiring personnel” stages?
tailcalled’s argument is the sole true reason: we don’t know of any neurosymbolic architecture that’s meaningfully safer than DL. (The people in the examples above are just adding to the AI-risk problem.) That said, I think the lack of alignment research going into it is a big mistake, mainly because the undertaking seems too intimidating/challenging to pursue, and because of the streetlighting effect.
Do you really expect that the project would then fail at the “getting funded”/”hiring personnel” stages?
Not at all, I’d expect them to get funded and get people. Plausibly quite well, or at least I hope so!
But when I think about paths by which such a company shapes how we reach AGI, I find it hard to see how that happens unless something (regulation, hitting walls in R&D, etc.) either slows the incumbents down or else causes them to adopt the new methods themselves. Both of which are possible! I’d just hope anyone seriously considering pursuing such a venture has thought through what success actually looks like.
“Independently develop AGI through different methods before the big labs get there through current methods” is a very heavy lift that’s downstream of but otherwise almost unrelated to “Could this proposal work if pursued and developed enough?”
I think, “Get far enough fast enough to show it can work, show it would be safer, and show it would only lead to modest delays, then find points of leverage to get the leaders in capabilities to use it, maybe by getting acquired at seed or series A” is a strategy not enough companies go for (probably because VCs don’t think it’s as good for their returns).