Seb is explicitly talking about AGI and not ASI. It’s right there in the tweet.
Most people in policy and governance are not talking about what happens after an intelligence explosion. There are many voices in AI policy and governance, and plenty of them say dumb things; for example, I expect someone has claimed the next generation of AIs will cause huge unemployment. Comparative advantage is indeed a reasonable thing to bring up in response to that conversation.
Stop assuming that everything anyone says about AI must clearly be a response to Yudkowsky.
Since he says he vibe coded the app, I can't be sure what he contributed and what was added by the AI. However, the app does include a section on ASI (I included a screenshot above). Even the section on AGI includes 1000x AI super research, so I guess this is quite capable AGI. In the immediate future, I do expect humans to remain relevant because a few cognitive niches will remain where AIs don't do so well, but that isn't quite the same as comparative advantage. I don't really see how humans could be paid for a job that AI can do just as well: AI is so much cheaper, and we are not so compute-bottlenecked that we couldn't get more AIs spun up. Currently humans have an absolute advantage at a shrinking number of tasks, which keeps us employed for the moment. But maybe that's a different post.
This post is not really about Yudkowsky. I had it in drafts and published it slightly rushed when I saw Yudkowsky's post on the same topic come out yesterday; I made only minor edits after reading his post, so this one basically came out minutes later. Most of these ideas were not directly influenced by it, but these arguments have been around in the discussion for a while, which is why I assume they seem similar.