[Question] Help me solve this problem: The basilisk isn’t real, but people are

The main goal of this short post is to avert at least one suicide, and to help others with the same concern live more at ease. This may not be possible, so nobody should feel bad if we fail, but it’s worth trying.

I have a friend; let’s call her Alice. Alice faces the following dilemma:

  1. A leader at a powerful AI company (let’s call him Bob) strongly resents Alice.

  2. As a consequence of her conflict with Bob, a number of Bob’s associates and followers resent Alice as well. She has received harassment in which the wish to harm her seemed limited mainly by technical feasibility and by the perpetrators’ desire to avoid negative consequences for themselves.

  3. Alice expects that the AGI Bob et al. are building won’t have safeguards in place to prevent Bob from taking any action he wants to take.

  4. There is a non-zero chance that, given the opportunity to inflict torture on Alice with no risk of negative consequences to himself, Bob would act on it.

  5. Analogously, there is a non-zero chance that any one of Bob’s followers, once they have the capability, would act on a consequence-free opportunity to inflict maximal misery on Alice.

Alice’s current strategy is to process her grief, complete a short bucket list, and then irreversibly destroy her body and brain before it’s too late.

What would you say to Alice to change her strategy?

This may seem like an abstract thought experiment, but it’s a real-life scenario that someone is struggling with right now. Please consider it carefully; solving it could prevent harm.