I mean, if you’re counting “the world” as opposed to the neurotic demographic I’m discussing, then obviously capabilities have advanced more than the MIRI outlook would like. But the relevant people basically never cared about that in the first place, so it’s kind of beside my point.
Thanks for the reply.
I guess I’m unclear on which people you consider the relevant neurotic demographic, and since I see “agent foundations” as a pointer to a set of concepts it would be very good to develop further, I find myself confused by your use of the phrase “agent foundations era”.
For a worldview check: I am currently much more concerned about the risks of “advancing capabilities” than about missed opportunities, so we may be coming at this from different perspectives. I’m also getting a somewhat hostile, soldier-mindset vibe from you; my apologies if I am misreading you. Unfortunately, I am in the position of thinking that people promoting the advancement of AI capabilities are indeed promoting increased global catastrophic risk, which I oppose. So if I am falling into the soldier mindset myself, I am likewise sorry.