Thanks for the counterexample! I hadn’t heard that usage of ‘agentic’ before, but you’re right: Milgram uses ‘agentic’ to mean something like ‘obedient’ or ‘deferential’. While googling it, I came across this recent paper, which seems to misread Milgram’s usage of ‘agentic’ and ‘agency’ in precisely this way: https://pmc.ncbi.nlm.nih.gov/articles/PMC11263708/
Jackson Hurley
As for (ii), I would distinguish between massive economic costs in the form of (a) missed opportunities and (b) damage to the existing economy. There are many cases of (a), and fewer, though still many, of (b). Nobody is proposing taking away today’s LLMs.
Since (ii) and (iii) have many examples mentioned in the post, it seems (i) is the crux.
Each part of (i) sounds like a self-fulfilling prophecy: ultimately, it amounts to the proposition that, because AI safety arguments have not yet won decisively, they cannot win. But building scientific consensus is an ongoing process of convincing people that the dangers are real and that action is possible, and posts like Katja’s are part of building it.
As for political salience and public awareness of the risks, both are already fairly high and rising fast. Many people are taking action to raise awareness and salience, and to a large extent this is taking care of itself as capabilities improve and the dangers become obvious (see e.g. Mythos/Glasswing).