This is a good comment, and I think it describes some of what is going on. I also feel concerned about some of those dynamics, though I do have a high p(doom) (and roughly 13-year timelines, which I think is maybe on the longer side these days, so I'm not sure where I fall in your ontology).
I disagree a lot with the examples you list that you say are deceptive or wrong. Like, I do think capabilities are growing faster than our ability to control them, and that feels like a fine summary of the situation (though also not an amazing one).
I also personally don’t care much about “the big tent” coalition. I care about saying what I believe. I don’t want to speak on behalf of others, but I also really don’t want to downplay what I believe because other people think that will make them look bad.
Independently of my commitment not to join mutual reputation-protection alliances, my sense is that most actions taken so far by people vaguely in the LW/EA space in the public sphere and the policy sphere have been quite harmful (and e.g. involved giving huge amounts of power and legitimacy to AI capability companies), so I don't feel much responsibility to coordinate with or help the people who made that happen. I like many of those people, think they are smart, like talking to them, and sometimes learn things from them, but I don't think I owe them much in terms of coordinating our public messaging on AI, or something like that (though I do owe them not speaking on their behalf, and I do think a lot of people could do much better to speak more on behalf of themselves and less on behalf of 'the AI safety community').