AI as a Civilizational Risk Part 6/6: What Can Be Done

Fix or destroy social media

One potentially positive development is Elon Musk’s acquisition of Twitter, which closed as this essay was being finalized. Possible benefits include cracking down on bots and revamping moderation to avoid bad AI-driven mischaracterization of public opinion. The main benefit, however, would be the potential implementation of a non-optimizing feed-ranking algorithm in the vein of TrustRank. Proper feed ranking would promote socially cohesive ideas instead of wedge issues.

Aside from Elon’s specific actions around Twitter, most social media needs to be destroyed or drastically reformed. We need to be careful around ranking algorithms: any algorithm with an “optimization nature” rather than a “contractual nature” must be viewed with suspicion. At the very least, science needs to test the effects of prolonged use of particular websites. If such use causes mental-health issues in individuals or small groups, that is a sign of unacceptable externalities. Setting up such tests requires good assessments of mental-health problems and correct ways to measure them. Yet even with today’s crude approximations, we can design social media that does not slowly drive people insane.

In addition to personal defense, there needs to be “group defense” against hostile outside optimization. This reasoning led me to research the area and develop TrustRank, which I hope becomes a core algorithm of future social media, much as PageRank is a core algorithm of current search engines. Even correctly measuring social cohesion can give decision-makers some idea of how to preserve it. Of course, this requires decision-makers who care about the nation’s well-being, the absence of which is part of the problem. We would also need remedies for governments forcing social media to use AI to de-platform people with valuable insights. Since these issues are well known, however, such remedies are more likely to emerge by default through web3 systems.
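To make the distinction between an “optimizing” and a “contractual” ranker concrete, here is a minimal sketch of a seeded trust-propagation scheme in the PageRank family. It is an illustration under my own assumptions, not the actual TrustRank algorithm: the endorsement-graph input, the `trust_scores` and `rank_feed` names, and the damping constant are all invented for the example.

```python
import numpy as np

def trust_scores(endorse, seed_trust, damping=0.85, iters=50):
    """Propagate trust from hand-vetted seed accounts over an endorsement graph.

    endorse[i][j] = 1 if account i vouches for (e.g., follows) account j.
    seed_trust    = prior trust, concentrated on the vetted seed accounts.
    This is the standard seeded-PageRank recurrence; a real system would also
    need to handle dangling nodes, sybil attacks, and negative signals.
    """
    endorse = np.asarray(endorse, dtype=float)
    out_deg = endorse.sum(axis=1, keepdims=True)
    # Row-normalize; accounts that endorse no one simply pass on no trust.
    transition = np.divide(endorse, out_deg,
                           out=np.zeros_like(endorse), where=out_deg > 0)
    seed = np.asarray(seed_trust, dtype=float)
    seed = seed / seed.sum()
    t = seed.copy()
    for _ in range(iters):
        t = (1 - damping) * seed + damping * (transition.T @ t)
    return t

def rank_feed(posts, trust):
    """Order a feed by author trust instead of predicted engagement."""
    return sorted(posts, key=lambda p: trust[p["author"]], reverse=True)
```

The point of the contrast is that the scores come from a fixed, auditable recurrence over explicit human endorsements, so there is no engagement objective being maximized against the user.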

However, completely replacing social media is not enough to stave off civilizational risk. We still have the background problems of social cohesion, which are not new but reappear every so often in civilizations. Elites need to improve social cohesion among themselves and the people they manage. Even if the Western order is doomed, there are suggestions that a core group of people with good internal social cohesion could “reboot” its good parts in a new form, such as a network state, and avert a further Dark Age.

Meta discourse issues

What else can be done, and why is this hard? Even if we accept the optimistic hope that civilizational risk may decrease existential risk (that collapse may provide the necessary impetus to do something about x-risk), many problems remain on the table. The main one is that, due to declining social cohesion, there is little meta-capacity to discuss these issues properly.

The culture war’s left-right dichotomy now maps onto the meta-object dichotomy. Effective altruism, mostly on the broad left and occasionally libertarian or centrist, tends to focus on just the meta-issues. Many discussions about AI’s effect on humanity concern abstract agents abstractly acting in the world against abstract human values. Occasionally this is made concrete, but the concrete examples usually involve a manufacturing or power dichotomy. EAs tend not to concretize this in political terms, partly for good reason: political terms are themselves divisive and can spur unnecessary backlash.

The right, however, tends to focus on the object level: civilizational issues, which means sounding the alarm that Western civilization is collapsing and that society’s current direction is negative. The “everything is great” counter-arguments, which roughly claim that metrics such as GDP and life expectancy have been going up and therefore everything is broadly fine, no longer cut it, because life expectancy has recently declined considerably.

The left/right and meta/object sides do not engage with each other for fear of being perceived as one another. At some point, the government will start using narrow AIs or bots to sway the public’s perception of itself, demoralize the public, or radicalize it for war. Due to left/right issues, there will be less productive dialogue than needed between the pro-regime but AI-worried left and the anti-government right. Occasionally there is some bipartisan talk about social media, but the diagnoses and proposed solutions tend to diverge. As a result, an essay like this one is hard to write, and writing it is likely to become more challenging as the culture war grows in magnitude, even as the issues become more apparent.

Math and Economics research

There is also mathematical research that needs to happen. There are many examples, but a big one is the notion of approximation of utility functions. When I compared the narrow AIs of search engines and social media, I described search engines as “approximately” closer to human “utility” without being perfectly aligned. This notion of “more aligned” is intuitive but not precise enough to discuss formally. The lack of formalization is surprising, because people have been talking about aligned versus non-aligned corporations or ideas for a long time. We need a theory that can describe and compare two utility functions in precise terms.
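The theory does not exist yet, but as a sketch of what a formal comparison could look like, here is one candidate formalization, optimizer regret: how much true utility is lost when an agent selects outcomes by a proxy utility instead of the true one. The `misalignment` function and the toy utilities below are my own illustrative assumptions, not an established definition.

```python
import numpy as np

def misalignment(u_true, u_proxy, states, top_k=1):
    """Regret-based comparison of two utility functions over sampled states.

    Returns how much true utility is lost when the proxy's favorite states
    are chosen instead of the true optimum. 0 means the proxy's top choices
    are as good, under the true utility, as the best available state.
    """
    true_vals = np.array([u_true(s) for s in states])
    proxy_vals = np.array([u_proxy(s) for s in states])
    chosen = np.argsort(proxy_vals)[-top_k:]      # states the proxy prefers
    return true_vals.max() - true_vals[chosen].mean()

# Toy example: "engagement" as a proxy for "user satisfaction".
states = np.linspace(0, 10, 1000)
satisfaction = lambda s: -(s - 3) ** 2   # true utility peaks at 3
engagement = lambda s: -(s - 4) ** 2     # proxy that peaks nearby, at 4
clickbait = lambda s: s                  # proxy that rewards extremes

print(misalignment(satisfaction, engagement, states))  # ~1: mildly misaligned
print(misalignment(satisfaction, clickbait, states))   # ~49: badly misaligned
```

Under a definition like this, “search engines are more aligned than engagement feeds” becomes a checkable quantitative claim rather than an intuition.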

We need economic research that furthers the understanding of externalities, including political externalities. COVID showed that economists and politicians do not correctly model disease-spread externalities. Airplane travel and large indoor gatherings likely carry disease-vector externalities, but the prices of these activities do not reflect them. Political decision-makers end up promoting or banning certain activities without any proper utility calculation: cruise ships are under-regulated, while small gatherings of people who already know each other are over-regulated.

Biological externalities are a good metaphor for understanding “social cohesion” externalities. Those are a more complicated problem, but economics research can approach them. Pollution metaphors can help us understand the notions of “signal pollution” and improper “behavioral modification.” Putting a price on improper “nudges” from companies may be tricky, but it can reuse some of the existing protections against “fraud” and “misleading advertising.” Of course, given that a lot of behavioral modification and signal pollution comes from the government itself, the notion of “self-regulation” is essential. If the framework of “human rights” continues to function as intended, which is a big if, then we might need to develop new rights for the digital age, such as the right of individuals to “not be nudged” too hard.
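To make the externality-pricing point concrete, here is a back-of-the-envelope Pigouvian calculation for the disease-spread case discussed above. Every number is a made-up placeholder; the structure of the calculation is the point, not the values.

```python
def expected_external_cost(p_infectious, contacts, p_transmit, cost_per_case):
    """Expected social cost one attendee imposes on others (all inputs assumed)."""
    expected_secondary_cases = p_infectious * contacts * p_transmit
    return expected_secondary_cases * cost_per_case

# A full flight vs. a small gathering of people who already meet regularly.
flight = expected_external_cost(
    p_infectious=0.01,    # chance a given passenger is infectious
    contacts=40,          # nearby passengers sharing air for hours
    p_transmit=0.05,      # per-contact transmission probability
    cost_per_case=5_000,  # medical plus productivity cost of one case
)
dinner = expected_external_cost(0.01, 6, 0.05, 5_000)

print(f"flight surcharge: ${flight:.2f}")   # $100.00 per ticket
print(f"dinner 'tax':     ${dinner:.2f}")   # $15.00 per gathering
```

Even this crude structure makes the over/under-regulation point: the activity exposing forty strangers carries roughly seven times the external cost of the small private gathering.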

Platforms and people can use economic research to evaluate the costs of advertising versus subscription business models. These problems are old, but we have to solve them now. We must understand the pricing of externalities, measures of “anti-economy,” and behavioral modification. This understanding is essential both to reducing the impact of narrow AIs and to laying the groundwork for AGI safety research.
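As a sketch of that advertising-versus-subscription evaluation (again with placeholder numbers of my own invention), the comparison can change sign once the unpriced attention externality is counted:

```python
# Crude per-user-year comparison of business models once the
# behavioral-modification externality is counted. All numbers are placeholders.

def net_social_value(revenue_per_user, externality_per_user):
    """Private revenue minus unpriced harms, per user-year."""
    return revenue_per_user - externality_per_user

ads = net_social_value(revenue_per_user=60, externality_per_user=80)
subs = net_social_value(revenue_per_user=50, externality_per_user=10)

print(f"ads:  {ads:+d}")   # -20: profitable for the platform, net-negative socially
print(f"subs: {subs:+d}")  # +40: less revenue, more social value
```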

One of the core philosophical directions for avoiding and mitigating both civilizational and existential risk is the ability to define, measure, and act on proper versus improper behavior modification. What changes to the Human Being do we allow AIs to perform without our consent? Erring on the side of caution is most prudent here. Distributing and decentralizing the power of modification away from technocapital and toward kin networks is likely safer than centralization.

All parts

P1: Historical Priors

P2: Behavioral Modification

P3: Anti-economy and Signal Pollution

P4: Bioweapons and Philosophy of Modification

P5: X-risk vs. C-risk

P6: What Can Be Done

Thanks to Elliot Olds and Mike Anderson for previewing earlier drafts of the post.

My name is Pasha Kamyshev. Pre-pandemic, I was very active in the Seattle EA community. My previous LessWrong account is agilecaveman. Follow me on my Substack or Twitter, or ask me for an invite to my work-in-progress startup: youtiki.com