In parallel, I think a lot of work defaults toward 'fully general agent AI' because it is an easy and natural target, not because it is the best one. If people knew what other kinds of interfaces to build for, that would actually draw some energy away from investing in long-horizon planning and drop-in replacements for everything as soon as possible.
I think the issue is that automating humans away accounts for a very large portion of AI's value, and automating 90% of a task captures essentially none of that value, because of the long tail:
https://www.lesswrong.com/posts/Nbcs5Fe2cxQuzje4K/value-of-the-long-tail
So unfortunately, I think making humans irrelevant is just more valuable than keeping humans relevant.