It would be even better if you could attach rough probabilities to both theses. Right now my sense is that I probably disagree significantly, but it's hard to say by how much. For the record, my credence in the weak thesis depends a ton on how some details are formalized (e.g. how much non-DL is allowed, and whether it has to be one monolithic network or not). For the strong thesis, I'm at <15%, and would need to think more to figure out how low I'd go. If you just think the strong thesis is more plausible than most other people do, at say 50%, that's not a huge difference, whereas if you're at something like 95%, that seems really wild to me.
I intentionally use the word “superintelligence” here because “AGI” and “human-level intelligence” have become rather loaded terms whose definitions are frequently a point of contention.
Is that the main reason to focus on ASI, or does the ASI vs. AGI distinction also have a big impact on whether you believe these theses? E.g. do you still think they’re true for AGI instead of ASI, if you get to judge what counts as “AGI” (meaning something more like human-level intelligence than an AI that can quickly design nanotech)?