> Since humans are consequentialist-type intelligences, we should expect them to be ruthless, and we should prevent them from gaining too much power, lest they destroy everything we hold dear.
This is a very weird sentence to me. If we want to know about human behavior, we can just observe past and present humans. We shouldn’t take one fact about human brain architecture in isolation, and ignore everything else we know about human brains and human psychology and human history.
In particular, if we want to know whether “absolute power corrupts absolutely”, we should obviously start by looking at the historical record of humans with absolute power. (No opinion.)
> Developing compute-intensive, imitation-learning-based AI should be considered closer to human-brain augmentation than ASI capability development
I’m not sure what this paragraph is getting at. My best guess is that you’re interested in the AI pause / stop vs AI acceleration debate, and suggesting that if LLMs are not a path to “ASI”, then that’s a reason not to pause LLM progress?
If so: (1) I generally stay out of that debate because I don’t expect it to make much difference regardless. (2) I don’t like taking sides in a generic way, as opposed to talking about specific proposals with their own particular suites of intended and unintended consequences. (3) …But if I had to pick a side, it would be the “pause” side. My opinion is in fact that LLMs are not a path to “ASI” (as I define it), but: (A) I don’t hold that opinion with 100% confidence; (B) there are legitimate LLM x-risk worries even without “ASI”; (C) there are legitimate LLM worries short of x-risk; and (D) like you said, there are various indirect ways that I’d expect the (small, indirect, marginal) effect of LLM-centric “pause” efforts to push ASI later rather than earlier, including via LLM-assisted coding & research, the relentless ramp-up of global compute, etc.