It’s interesting to me how chill people sometimes are about the non-extinction future AI scenarios. Like, there seem to be opinions around along the lines of “pshaw, it might ruin your little sources of ‘meaning’, Luddite, but we have always had change and as long as the machines are pretty near the mark on rewiring your brain it will make everything amazing”. Yet I would bet that even that person, if faced instead with a policy that was going to forcibly relocate them to New York City, would be quite indignant, and want a lot of guarantees about the preservation of various very specific things they care about in life, and not be just like “oh sure, NYC has higher GDP/capita than my current city, sounds good”.
I read this as a lack of engaging with the situation as real. But possibly my sense that a non-negligible number of people have this flavor of position is wrong.
This post raises a relevant objection to a common pro-AI position, using an easily understandable analogy. Katja’s argument shows that the best-case pro-AI position is not as self-evident as it may seem.
On closer inspection, I believe this does not add much to our understanding of the described people's psychology.
Although the described reactions seem accurate, the analogy seems weak and the post jumps too quickly to unflattering conclusions about the outgroup. In particular, being forcibly relocated to another location is an extremely radical action given our current social norms, so people can be expected to be indignant about it.
On the other hand, it is the norm for organizations to impose large but longer-term changes on societies without asking, such as introducing social media or the internet.