I have no AI expertise and learned a whole lot from this post, very well written, thank you!
One thing that surprised me, as a layperson, was the seemingly sharp distinction between early human-level TAI and more superhuman AI. I have been expecting the gap between these to be extremely small. Not because of anything to do with self-improvement, but because human-level reasoning would seem to be already superhuman in a number of ways when one system operates many OOMs faster, with many OOMs more working memory and background knowledge, than a human.
I get that there would still be many highly impactful actions and plans out of reach for an early TAI compared to later AI systems; that makes sense. But I think it is a big deal if even early TAI has all possible intellectual skills, at close to the highest level a human could learn from analyzing all available data, and can execute on all of them in parallel with a subjective hour of thinking per second.
Am I completely off base? If so, is there a simple explanation why?