Some unstated implications of this post, taking it as true for a moment:
Since humans are consequentialist-type intelligences, we should expect them to be ruthless, and we should prevent them from gaining too much power, lest they destroy everything we hold dear. (One may retort that most humans share our values; but since value formation is so fragile, they would likely end up with values incompatible with ours once they start optimizing for them in earnest.)
Developing compute-intensive, imitation-learning-based AI should be considered closer to human-brain augmentation than to ASI capability development, since it will all be “pointless” until people figure out how to develop consequentialist thinking. (One may retort that imitation-learning-based AI might make it easier to develop consequentialist-minded AI, serving as a base for it. But to the extent that consequentialist thinking is so much more powerful than imitation-based learning, and likely to be developed through an entirely different path than LLMs, that first factor should mostly be a rounding error. Imitation-learning-based AI might still speed up scientific discovery more broadly, bringing forward the date when consequentialist AI is invented, but this does not differentially speed up that particular technology, except perhaps insofar as the users of such AI might be predisposed to use future LLMs differentially for this purpose.)
Since humans are consequentialist-type intelligences, we should expect them to be ruthless, and we should prevent them from gaining too much power, lest they destroy everything we hold dear.
This is a very weird sentence to me. If we want to know about human behavior, we can just observe past and present humans. We shouldn’t take one fact about human brain architecture in isolation, and ignore everything else we know about human brains and human psychology and human history.
In particular, if we want to know whether “absolute power corrupts absolutely”, we should obviously start by looking at the historical record of humans with absolute power. (No opinion.)
Developing compute-intensive, imitation-learning-based AI should be considered closer to human-brain augmentation than ASI capability development
I’m not sure what this paragraph is getting at. My best guess is that you’re interested in the AI pause / stop vs AI acceleration debate, and suggesting that if LLMs are not a path to “ASI”, then that’s a reason not to pause LLM progress?
If so: (1) I generally stay out of that debate because I don’t expect it to make much difference regardless, (2) I don’t like taking sides in a generic way rather than talking about specific proposals with their own particular suites of intended and unintended consequences, (3) …but if I had to pick a side, it would be the “pause” side, because, while my opinion is in fact that LLMs are not a path to “ASI” (as I define it), OTOH (A) I don’t hold that opinion with 100% confidence, and (B) there are legitimate LLM x-risk worries even without “ASI”, and (C) there are legitimate LLM worries short of x-risk, and (D) like you said, there are various indirect ways that I’d expect the (small, indirect, marginal) effect of LLM-centric “pause” efforts to push ASI later rather than earlier, including via LLM-assisted coding & research, the relentless ramp-up of global compute, etc.