[Question] Outcome Terminology?

I’m writing a post about S-risks, and I need access to some clean, established terminology/background material for discussing AI-based long-term outcomes for humanity.

My current (very limited) vocabulary can be summarized with the following categories:

  1. Outcomes which are roughly maximally bad: Hyperexistential risk/S-risk/Unfriendly AI/Existential risk

  2. Outcomes which are nontrivially worse than paperclipping-equivalents but better than approximate minimization of human utility: Hyperexistential risk/S-risk/Unfriendly AI/Existential risk

  3. Outcomes which are produced by agents essentially orthogonal to human values: Paperclipping/Unfriendly AI/Existential risk

  4. Outcomes which are nontrivially better than paperclipping but worse than Friendly AI: ???

  5. Outcomes which are roughly maximally good: Friendly AI

The problems are manifold:

  • I haven’t read any discussion which specifically addresses part 1 or part 2 alone. I have read general discussion of parts 1 and 2 combined under the names “Outcomes worse than death”, “Hyperexistential risk”, “S-risk”, etc.

  • My current terminology overlaps too heavily to uniquely identify outcomes 1 and 2.

  • I have no terminology or background information for outcome 4.

I’ve done a small amount of investigation and concluded that less brainpower would be wasted by simply asking for links.