I think that first you should elaborate on what you mean by “the goals of humanity”. Do you mean majority opinion? In that case, one goal of humanity is to have a single world religious State, although there is disagreement on what that religion should be. Other goals of humanity include eliminating homosexuality and enforcing traditional patriarchal family structures.
Okay, I admit it—what I really think is that “goals of humanity” is a nonsensical phrase, especially when spoken by an American academic. It would be a little better to talk about values instead of goals, but not much better. The phrase still implies the unspoken belief that everyone would think like the person who speaks it, if only they were smarter.
For example, not turning the universe into paperclips is a goal of humanity.
Not really. I don’t care if that happens in the long run, and many people wouldn’t.
I hope at least you care if everyone on Earth dies painfully tomorrow. We don’t have any theory that would stop AI from doing that, and any progress toward such a theory would be on topic for the contest.
Sorry, I’m feeling a bit frustrated. It’s as if the decade of LW never happened, and people snap back out of rationality once they go off their dose of Eliezer’s writing. And the mode they snap back to is so painfully boring.
That’s not conventionally considered to be “in the long run”.
The primary reason is that we don’t have any theory about what a post-singularity AI might or might not do. Doing some pretty basic decision theory focused on the corner cases is not “progress”.
I do care about tomorrow, which is not the long run.
I don’t think we should assume that AIs will have any goals at all, and I rather suspect they will not, in the same way that humans do not, only more so.
I considered submitting an entry basically saying this, but decided that it would be pointless since obviously it would not get any prize. Human beings do not have coherent goals even individually. Much less does humanity.
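A standard way to make the second half of that claim concrete is the Condorcet cycle: even if every individual holds a perfectly transitive ranking, the majority preference of the group can be cyclic, so there is no well-defined "goal of humanity" to read off. The sketch below is purely illustrative (the voters and options are made up, not anything from this thread) and shows the cycle with three voters over three options.

```python
# Illustrative sketch (hypothetical voters and options): even when each
# individual ranking is transitive, pairwise majority vote can be cyclic,
# so "the group's preference" is not well defined. This is the Condorcet paradox.

from itertools import combinations

# Three voters, each with a coherent (transitive) ranking over options A, B, C.
rankings = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters ranks x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")

# Prints: A over B, C over A, B over C -- a cycle, even though no
# individual voter is internally inconsistent.
```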