Mind Crime

Last edit: 19 Oct 2021 21:02 UTC by Multicore

Mind crime occurs when a computational process that has moral value is mistreated. For example, an advanced AI trying to predict human behavior might create simulations of humans detailed enough to be conscious observers; these simulated people would then suffer through whatever hypothetical scenarios the AI wanted to test, and be discarded afterward.

Mind crime on a large scale constitutes a risk of astronomical suffering.

Mind crime differs from other AI risks in that the AI need not affect anything outside its box for the catastrophe to occur.

The term was coined by Nick Bostrom in Superintelligence: Paths, Dangers, Strategies.

Mind crime is not the same as thoughtcrime, a term for holding beliefs that society considers unacceptable.

The Aliens have Landed!
TimFreeman, 19 May 2011 17:09 UTC, 46 points, 158 comments, 3 min read

The AI in a box boxes you
Stuart_Armstrong, 2 Feb 2010 10:10 UTC, 157 points, 390 comments, 1 min read

Is it possible to prevent the torture of ems?
NancyLebovitz, 29 Jun 2011 7:42 UTC, 14 points, 32 comments, 1 min read

Nonperson Predicates
Eliezer Yudkowsky, 27 Dec 2008 1:47 UTC, 49 points, 176 comments, 6 min read

Thoughts on Human Models
21 Feb 2019 9:10 UTC, 124 points, 32 comments, 10 min read

Superintelligence 12: Malignant failure modes
KatjaGrace, 2 Dec 2014 2:02 UTC, 15 points, 51 comments, 5 min read

A Models-centric Approach to Corrigible Alignment
Jemist, 17 Jul 2021 17:27 UTC, 2 points, 0 comments, 6 min read