You’ve already noted that it doesn’t really matter, but I thought I’d help fill in the blanks.
The current global regime of sovereign nation-states that we take for granted is a product of the 20th century. It’s not as though an existing sovereign Palestinian nation-state was carved up by external powers and arbitrarily handed to Jews. Rather, the disintegration of empires created openings for local nationalist movements, which built new countries around varying and competing unifying or dividing factors such as language, tribal association, and sect. Palestinians and Zionist Jews both had nationalist aspirations during this period, and for various reasons the Zionists came out on top.
The idea that “the Palestinians were there first” is not particularly meaningful or accurate, especially given the historical fact of Judea and Israel as the birthplace of Judaism and the continuous presence of Jewish communities in the region, despite the many events contributing to the creation of a Jewish diaspora.
I agree. But I was not trying to argue against the dangers of AI-directed agentic activity. The thesis is not that “alignment risk” is overblown, nor is comparing the risks the point; it’s that those risks accumulate such that the technology is guaranteed to be lethal for the average person. This matters because misalignment risk is typically accepted on the assumption that the rewards will be broadly shared. “You or your children are likely to be killed by this technology, whether it works as designed or not” is a very different story from “there is a chance this will go badly for everyone, but if it doesn’t, it will be really great for everyone.”