He spells out possible reasons in the paragraph immediately following your quote: “Pretraining of LLMs on human data or weakly successful efforts at value alignment might plausibly seed a level of value alignment that’s comparable to how humans likely wouldn’t hypothetically want to let an already existing sapient octopus civilization go extinct”. If you disagree, you should respond to those. Most people on LW are already aware that ASIs would need some positive motivation to preserve human existence.
I think that AI will also preserve humans for utilitarian reasons, such as trade with possible aliens, simulation owners, or even its own future versions, in order to demonstrate trustworthiness.
Yes, and my reply to that (above) is that humanity has a bad track record at this, so why would AIs trained on human data be better? Think also of indigenous peoples, of extinct species humans didn’t care enough about, etc. The point of the Dyson sphere parable is also not merely wanting something; it’s wanting something enough that it actually happens.
OK I see, didn’t get the connection there.
People do devote some effort to things like preserving endangered species, things of historical significance that are no longer immediately useful, etc. If AIs devoted a similar fraction of their resources to humans, that would be enough to preserve our existence.
Agreed, but again, we don’t get to choose what existence means.