I’m confused. A “cold consequentialist calculator” sounds like a strawman consequentialist. Also, “an AI that is unshakingly committed to honesty, integrity, and fairness, but doesn’t think hard about consequences” sounds like a strawman virtue-aligned AI. It looked to me like you wanted to discuss a concrete case, with simplified strawman AIs, as an intuition pump to explain your views. The fact that this simplified case leads to genocide is relevant to my intuitions in this area.
I’m confused. You say that my comment didn’t pass “the ideological turing test”. It wasn’t trying to. That’s not how an Ideological Turing Test works.
If someone can correctly explain a position but continue to disagree with it, that position is less likely to be correct.
My comment was not an attempt to explain a position. It’s not an attempt to pass an Ideological Turing Test. I agree that it doesn’t pass an Ideological Turing Test for your position. It also doesn’t pass an English Literature exam. It would pass an Ideological Turing Test for my position. It would also pass an Ideological Turing Test for committed consequentialists, because there are committed consequentialists who think that a consequentialist ASI would by default lead to human genocide. These are entirely compatible views.
I’m confused. Here’s your question again, relating to powerful AIs. It’s a good question.
Would you want a cold consequentialist calculator running the FAA?
In general, no, I would not, because genocide.
If you had further specified that the powerful AI had perfect alignment with human values, I would still not want it running the FAA; I would want it running the universe. I don’t expect this to be a practical option, and I’m not sure it’s theoretically possible. I could see the answer going either way.