I’m disappointed that they don’t make any mention of extinction risk.
Agree, but I wonder if extinction risk is just too vague, at the moment, for something like this. Absent a fast takeoff scenario, AI doom probably does look something like the gradual and unchecked increase of autonomy mentioned in the call, and I’m not sure there’s enough evidence of a looming fast takeoff for it to be taken seriously.
I think the stakes are high enough that experts should firmly state, as Eliezer does, that we should back off well before fast takeoff even seems like a possibility. But I see why that may be less persuasive to outsiders.
Agree; I’m strongly in favor of using a term like “disempowerment risk” over “extinction risk” when communicating with laypeople. The latter detracts from the more important question of preventing a loss of control and emphasizes what happens afterward, which is far more speculative (and often invites the common “sci-fi scenario” criticism).
Of course, it doesn’t sound as flashy, but I think saying “we shouldn’t build a machine that takes control of our entire future” is sufficiently attention-grabbing.