Could you tell me more about the Mechanize team? I don’t think I’ve heard about them yet.
As a moral relativist, I don’t believe anything is inherently morally relevant. I just think things get made morally relevant by those in power (hard power or cultural power). This is a descriptive statement, not a normative one, and I think it’s fairly mainstream in academia (although of course moral realists, including longtermists, would strongly disagree).
This of course extends to the issue of whether conscious AIs are morally relevant. Imo, this will be decided by those in power, initially (a small subset of) humans, eventually maybe AIs (who will, I imagine, vote in favour).
I’m not the only one holding this opinion. Recently, this appeared in a NY Times op-ed: “Some worry that if A.I. becomes conscious, it will deserve our moral consideration — that it will have rights, that we will no longer be able to use it however we like, that we might need to guard against enslaving it. Yet as far as I can tell, there is no direct implication from the claim that a creature is conscious to the conclusion that it deserves our moral consideration. Or if there is one, a vast majority of Americans, at least, seem unaware of it. Only a small percentage of Americans are vegetarians.” (It would be funny if this were written by an AI, as the dash seems to indicate.)
Personally, I don’t consider it my crusade to convince all these people that they’re wrong and that they should in fact be vegan and accept the moral status of conscious AIs. I see myself more as a facilitator of the debate. That’s one reason I’m not an EA.
Thanks for engaging!