I expect you to be making a correct and important point here, but I don’t think I get it yet. I feel confused because I don’t know what it would mean for this frame to make false predictions. I could say “Evolution selected me to have two eyeballs” and I go “Yep I have two eyeballs”? “Evolution selected for [trait with higher fitness]” and then “lots of people have trait of higher fitness” seems necessarily true?
I feel like I’m missing something.
Oh. Perhaps it’s nontrivial that humans were selected to value a lot of stuff, and (different, modern) humans still value a lot of stuff, even in today’s different environment? Is that the point?
> Perhaps it’s nontrivial that humans were selected to value a lot of stuff
I prefer the reverse story: humans are tools in the hands of the angiosperms, and they’re still doing the job these plants selected them for: defending angiosperms at all costs. If a superintelligent AI destroys 100% of humans along with 99% of life on Earth, the angiosperms will call that the seed phase and thrive in the new, empty environment they will have had us clear for them.
> Oh. Perhaps it’s nontrivial that humans were selected to value a lot of stuff, and (different, modern) humans still value a lot of stuff, even in today’s different environment? Is that the point?
Sort of, but I think it is more specific than that. As I point out in my AI pause essay:
> An anthropologist looking at humans 100,000 years ago would not have said humans are aligned to evolution, or to making as many babies as possible. They would have said we have some fairly universal tendencies, like empathy, parenting instinct, and revenge. They might have predicted these values will persist across time and cultural change, because they’re produced by ingrained biological reward systems. And they would have been right.
I take this post to be mostly negative, in that it shows that “IGF” is not a unified loss function; its content is entirely dependent on the environmental context, in ways that ML loss functions are not.
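The contrast being drawn here — a fixed ML loss versus context-dependent selection — can be sketched in a toy example (my illustration, not from the comment; the trait names and scores are made up):

```python
# A toy contrast between an ML loss function, which is the same mapping in
# every environment, and "fitness", whose value for the same trait depends
# on the environment it is evaluated in.

def mse_loss(prediction: float, target: float) -> float:
    """A fixed loss: the environment never enters the computation."""
    return (prediction - target) ** 2

def fitness(trait: str, environment: str) -> float:
    """Hypothetical scores: the same trait is scored differently per context."""
    scores = {
        ("thick_fur", "ice_age"): 1.0,
        ("thick_fur", "modern_city"): 0.2,
    }
    return scores.get((trait, environment), 0.5)

# The loss is identical wherever it is evaluated...
assert mse_loss(3.0, 1.0) == 4.0
# ...but what "IGF" selects for changes with the environment:
assert fitness("thick_fur", "ice_age") != fitness("thick_fur", "modern_city")
```

The point of the sketch is only that `mse_loss` has no environment parameter at all, while `fitness` cannot even be evaluated without one.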
> I hope the reader will grant that the burden of proof is on those who advocate for such a moratorium. We should only advocate for such heavy-handed government action if it’s clear that the benefits of doing so would significantly outweigh the costs.
I find it hard to grant a principle that would have made our responses to pandemics or global warming even slower than they already are. By the same reasoning, we would not have the Montreal Protocol, and UV levels would be a public concern.
> I expect you to be making a correct and important point here, but I don’t think I get it yet. I feel confused because I don’t know what it would mean for this frame to make false predictions. I could say “Evolution selected me to have two eyeballs” and I go “Yep I have two eyeballs”? “Evolution selected for [trait with higher fitness]” and then “lots of people have trait of higher fitness” seems necessarily true?

> I feel like I’m missing something.

> Oh. Perhaps it’s nontrivial that humans were selected to value a lot of stuff, and (different, modern) humans still value a lot of stuff, even in today’s different environment? Is that the point?
Does this comment help clarify the point?
> I prefer the reverse story: humans are tools in the hands of the angiosperms, and they’re still doing the job these plants selected them for: defending angiosperms at all costs. If a superintelligent AI destroys 100% of humans along with 99% of life on Earth, the angiosperms will call that the seed phase and thrive in the new, empty environment they will have had us clear for them.
https://m.youtube.com/watch?v=HLYPm2idSTE
> Sort of, but I think it is more specific than that. As I point out in my AI pause essay:

> I take this post to be mostly negative, in that it shows that “IGF” is not a unified loss function; its content is entirely dependent on the environmental context, in ways that ML loss functions are not.
Nitpick in there
> I find it hard to grant a principle that would have made our responses to pandemics or global warming even slower than they already are. By the same reasoning, we would not have the Montreal Protocol, and UV levels would be a public concern.
If you want to discuss the other contents of my AI pause essay, it’s probably best for you to comment over on the EA forum post, not here.
Good point. Also, I should stop clicking icons just to see what they mean.