He appears to be arguing against a strategy while simultaneously criticizing the people pursuing it; but I appreciate that he seems to do this in ways that aren’t purely negative, also mentioning times things have gone relatively well (specifically, his updating on evidence that folks here aren’t uniquely correct), even if that isn’t enough to make the rest of his points not a criticism.
I entirely agree with his criticism of the strategy he’s criticizing. I do think there are more obviously tenable approaches than the “just build it yourself lol” approach or the “just don’t let anyone build it lol” approach, such as “just figure out why things suck as quickly as possible, by making progress on thousand-year-old open questions in philosophy that science has some grip on but has not resolved”. To be clear, I’m not highly optimistic, but it seems quite plausible that the most promising path is simply rushing to do the actual research: figuring out how to make constructive, friendly coordination more possible, or even actually happen reliably, especially between highly different beings like humans and AIs, and especially given the real world we actually have now, where things suck and such coordination doesn’t happen.
Specifically: institutions are dying, and have been for a while, and the people who think they’re going to set up new institutions mostly don’t seem competent enough to pull it off. My impression is that institutions would be dying even without anyone specifically wanting to kill them, though that also seems to be happening. Solving this looks like traditional politics or economics or the like, approached from a perspective of something like “human flourishing, e.g. one’s own”.
And specifically: it seems worth figuring out how to technically ensure that the network of pressures which keeps humanity very vaguely sane also integrates with AIs, in a way that keeps them in touch with us and inclined to help us keep up, participating in and actualizing our various individual and group/cultural preferences in society as things get crazier.