[Question] Rank the following based on likelihood to nullify AI risk

Rank the following based on the likelihood to nullify AI risk (whether by achieving alignment, stopping AI development, or another way)
If you think you have better solutions to AI risk than I came up with, please add them to your ranking.

[1]

  • Give EY[2] $10M

  • Give EY $100M

  • Give EY $1B

  • Give EY $10B

  • Give EY $100B

  • Give EY $1T

  • Give Organization[3] $1B

  • Give Organization $10B

  • Give Organization $100B

  • Give Organization $1T

  • Achieve widespread agreement[4] on AI risk, by 2025

  • Achieve widespread agreement on AI risk, by 2030

  • Achieve widespread agreement on AI risk, by 2040

  • Convince Major Personalities[5] that AI risk is real and achieving aligned AI is way harder than achieving AI, by 2025

  • Convince Major Personalities that AI risk is real and achieving aligned AI is way harder than achieving AI, by 2030

  • Convince Major Personalities that AI risk is real and achieving aligned AI is way harder than achieving AI, by 2040

  • Convince the top STEM students/talents[6] that AI risk is real and achieving aligned AI is way harder than achieving AI, by 2025

  • Convince the top STEM students/talents that AI risk is real and achieving aligned AI is way harder than achieving AI, by 2030

  • Convince the top STEM students/talents that AI risk is real and achieving aligned AI is way harder than achieving AI, by 2040

  • Achieve widespread agreement that P(Doom|AI) is more likely than not, by 2025

  • Achieve widespread agreement that P(Doom|AI) is more likely than not, by 2030

  • Achieve widespread agreement that P(Doom|AI) is more likely than not, by 2040

  • Convince Major Personalities that P(Doom|AI) is more likely than not, by 2025

  • Convince Major Personalities that P(Doom|AI) is more likely than not, by 2030

  • Convince Major Personalities that P(Doom|AI) is more likely than not, by 2040

  • Convince the top STEM students/talents that P(Doom|AI) is more likely than not, by 2025

  • Convince the top STEM students/talents that P(Doom|AI) is more likely than not, by 2030

  • Convince the top STEM students/talents that P(Doom|AI) is more likely than not, by 2040

  • [Solution I’m not thinking of #1]

  • [Solution I’m not thinking of #2]
    ⋮

  • [Solution I’m not thinking of #n]

Final Notes:

  • Please do NOT consider the difficulty of the solutions yet.
    Rank based on: ‘If we made that happen, would we be more likely to nullify P(Doom) than if we made something else on the list happen?’
    The difficulty/probability of success of achieving any of the solutions is a separate question that I’m not looking at YET.

  • More useful to me than your ranking alone is your reasoning behind it and the evidence you’re leaning on.
    However, if you’re feeling lazy and the choice is between writing nothing and just ranking the options, I’d rather have a ranking with no explanation than no ranking.

  • If you think I’m somehow gravely confused about something for me to even be asking this question, or if it’s totally the wrong question to be asking, do tell me, and please explain why.

  • Finally, before telling me about that brilliant option that I didn’t think of, you might want to stop to consider whether it should be posted for everyone to read… #infohazard.

    Thanks for reading!

  1. ^

    I’ve put line breaks between each category, but in your final ranking please rank them all together and against each other.
    (eg 1. Give $10B, 2. Convince Major Personalities by 2030, 3. Achieve widespread agreement by 2030...).
    If you’re feeling lazy, just ignore the solutions you think are worthless.

  2. ^

    I use EY (Eliezer Yudkowsky) as a catch-all for [someone competent AND ultra-motivated to nullify AI risk /​ P(Doom)].
    That being said, there is a reason I’m using Eliezer specifically. I believe he’d be more willing than others to be creative and unconventional, even at the cost of looking foolish or unreasonable. I trust EY is able to navigate outside the Overton Window, and, in his writing, I like his moral code.

    Feel free to swap in the name of whoever you’d give money to no-questions-asked (and say why them).

    Giving EY money is not the same as funding MIRI or another org. Organizations have to justify themselves to funders in exchange for money. Organizations have to look reasonable. I think that [Organization with an extra $100M] looks different than [EY with an extra $100M].
    (Let’s not have a debate whether it’s a good or bad idea to give anyone money with 0 conditions. Obviously, you’d at least make sure the person is sane).
    Crux-solving: Would organizations still be constrained by the ‘need to look reasonable’ to the outside world, if one gave them money no-questions-asked? Could the work they do with that money be done in secret?

  3. ^

    Organization… that seeks to nullify P(Doom).
    My list doesn’t contain ‘give Organization $10M’, because I’ve gotten the impression from reading about EA and AI alignment that money is not a bottleneck right now, but that talent is.
    That said, I do include ‘give organization $1B+’ because maybe, at those amounts, organizations are not bottlenecked on talent anymore.

  4. ^

    eg. ~“97% of AI scientists agree that AI risk is real and that achieving aligned AI is way harder than achieving AI”

  5. ^

    eg. Mark Zuckerberg, Yann LeCun, Sergey Brin, Larry Page, Bill Gates, Jack Ma, Donald Trump, Joe Biden, Barack Obama, Xi Jinping…

  6. ^

    May or may not involve convincing everyone else.
