For the sake of being moral/ethical, we assume there is a region in the space of complex beings beyond which we begin caring about them, and a point of complexity below which it is ok to simulate, since nothing below that level is worth caring about.
My contrived infinite torture scenario is really simple. In its effort to be ethical, the organization seeking friendly AI doesn’t do a thorough enough job of delineating this boundary. There follow uncountably many simulations of pain.