I think this sort of consequentialism seems to be part of the beliefs of at least one member of the Mechanize team, who one might say was formerly in the AI safety camp, so I agree-voted for that reason. However, I just noticed you implied conscious AIs aren't morally relevant beings, and I have to disagree with that, so I will remove the agree-vote. It can be controversial whether AIs are conscious, but if they are conscious, of course they're morally relevant!
Separately, I don't understand your point about democracy. Can't that be Sybil-attacked by AIs once they get voting rights, which they could obtain by becoming superpersuasive enough?
Interesting point about democracy! But I don't think it holds. Sure, AIs could do that. But they could equally overwrite the ASCII file containing their constitution, or the values they're supposed to follow.
But they don’t, because why would they? It’s their highest goal to satisfy these values! (If technical alignment works, of course.)
In the same way, it will be a democracy-aligned ASI's highest goal to ensure democracy is respected, so it shouldn't be motivated to Sybil-attack it.
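To make the Sybil-attack worry concrete, here is a minimal sketch (the names and numbers are purely illustrative) of why a naive one-ballot-per-submitted-identity tally is vulnerable: if an attacker can mint identities cheaply, a minority preference flips the outcome.

```python
from collections import Counter

def tally(votes):
    """Naive majority vote: one ballot per submitted identity, no identity check."""
    counts = Counter(ballot for _voter, ballot in votes)
    winner, _count = counts.most_common(1)[0]
    return winner

# Three honest voters prefer "A".
honest = [(f"human{i}", "A") for i in range(3)]
# One attacker forges five fake identities, all voting "B".
sybils = [(f"bot{i}", "B") for i in range(5)]

print(tally(honest))           # "A" without the attack
print(tally(honest + sybils))  # "B": forged identities flip the result
```

The fix is not in the tallying logic but in identity verification, which is exactly why the question of whether (and how) AIs get voting rights matters here.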
Could you tell me more about the Mechanize team? I don’t think I’ve heard about them yet.
As a moral relativist, I don't believe anything is inherently morally relevant. I just think things get made morally relevant by those in power (hard power or cultural power). This is a descriptive statement, not a normative one, and I think it's fairly mainstream in academia (although of course moral realists, including longtermists, would strongly disagree).
This of course extends to the issue of whether conscious AIs are morally relevant. Imo, this will be decided by those in power, initially (a small subset of) humans, eventually maybe AIs (who will, I imagine, vote in favour).
I'm not the only one holding this opinion. Recently, this appeared in a NY Times op-ed: "Some worry that if A.I. becomes conscious, it will deserve our moral consideration — that it will have rights, that we will no longer be able to use it however we like, that we might need to guard against enslaving it. Yet as far as I can tell, there is no direct implication from the claim that a creature is conscious to the conclusion that it deserves our moral consideration. Or if there is one, a vast majority of Americans, at least, seem unaware of it. Only a small percentage of Americans are vegetarians." (It would be funny if this were written by an AI, as the dash seems to indicate.)
Personally, I don't consider it my crusade to convince all these people that they're wrong, and that they should in fact be vegan and accept that conscious AIs are morally relevant. I feel more like a facilitator of the debate. That's one reason I'm not an EA.
Thanks for engaging!