ok I will moderate my tone. I was a competitive debater and irrationality makes me upset. I thought this was a safe space for high standards wrt logic, but I can modulate. Thank you for the feedback.
There is a narrow point: people were wrong about the specific prediction that "the CCP is scared of AI."
The broader point is that I perceive (and could be wrong) epistemic rot: a community dedicated to rationalism that is incapable of updating. The comments I've seen so far are by and large consistent with that intuition. Folks seem defensive, and more concerned about my interest/tone than the thing at hand. A lot of people made decisions based on (in retrospect) bad expectations about the world. Which is fine, it happens all the time. But the thing that matters isn't the old predictions; it's identifying them, understanding why and where they came from, and then updating.
If we want to talk about the narrow thing of "is China ready to pause AI?", it obviously is not entirely knowable. But the bigger issue, the one I think is more important, is whether we are capable of updating, because we need to be able to do that to actually investigate the small thing going forward.
:) thank you for saying you'll moderate your tone. It's rare that I criticize someone and they reply with "ok" and actually change what they do.
My first post on LessWrong was A better “Statement on AI Risk?”. I felt it was a very good argument for the government to fund AI alignment, and I tried really hard to convince people to turn it into an open letter.
Some people told me the problem with my idea was that asking for more AI alignment funding is the wrong strategy; the right strategy is to slow down and pause AI.
I tried to explain that when politicians reject pausing AI, they just need the easy belief of "China must not win" or "if we don't do it, someone else will." But for politicians to reject my open letter, they would need the difficult belief of being 99.999% sure there will be no AI catastrophe and 99.95% sure most experts are wrong.
But I felt my argument fell on deaf ears because the community was so dead set on pausing AI that they considered anything other than pausing AI a waste of their time. It was very frustrating.
(I was talking to people in private messages and emails)
I feel you and I are on the same team haha. Maybe we might even work together sometime.
Thanks