Could the Maxipok rule have catastrophic consequences? (I argue yes.)

Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of “latent agential risks.”

And finally, here I argue that a superintelligence singleton constitutes the only mechanism that could neutralize the “threat of universal unilateralism” and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)