EU AI Policy / Mechatronics Engineer @ Independent Researcher
So basically, post hoc, ergo propter hoc (post hoc fallacy)
If winning happened after rationality (in this case, after any action you judge to be rational, under whatever definition you prefer), it does not follow that it happened because of it.
This was a great read! Personally, I feel it ended too abruptly: even without going into gruesome details, one more concluding paragraph or so in the story was needed. But overall I really enjoyed it.
I’m trying to think of ideas here. As a recap of what I think the post says:
China is still very much in the race
There is very little AI safety activity there, possibly due to limited reception of EA ideas.
^let me know if I am understanding correctly.
It seems to me that many people in AI Safety, or in other specific "cause areas", are already dissociating from EA, though not as much from LW.
I am not sure we should expect mainstream adoption of AI Safety ideas (it's not really mainstream in the West, and neither are EA or LW).
There seem to be some communication issues (e.g., the org looking for funding) that this post can help with.
To me it is very interesting to hear that there is less resistance to AI Safety ideas in China, though I don't want to fully believe that yet. Also, I'm not sure the AI Safety field is people-bottlenecked right now; it seems we currently don't really know what to do with more people.
Still, it's clear that we need a strong field in China. Perhaps less alignment-focused and more governance-focused? Though my impression from your post is that governance work is less doable there; maybe I am misunderstanding.
I might have more thoughts later on.
(For context, I have recently become involved in governance work on the EU AI Act.)
Working at a startup made me realize how rarely we can actually "reason through" things to the point where all team members agree. Often there is too little time to test every assumption, if testing them is even possible at all. Part of the CEO's role is to cut these discussions short when it becomes evident that spending more time on them is worse than proceeding despite the uncertainty.
If we had "the facts", we might find it easier to agree. But in an uncertain environment, many decisions come down to intuition (hopefully grounded in reliable experience, such as having founded a similar startup before).
To me there seem to be parallels here. In discussions I can always push back on others' intuitions, but where no reliable facts exist, I have little chance of getting far. That is not always bad: if we couldn't function well under uncertainty, we would likely be far less successful as a species.