That is because we have limited attention and so we pick and choose the values we hold dear. I think when we theorise about superintelligence we no longer need to have this constraint / the scope for including more things is much broader.
It’s because we care about other things a lot more than chimps, and would happily trade off chimp well-being, chimp population size, chimp optionality and self-determination, etc., in favor of those other things. By itself that should be enough to tell you that under your analogy, superintelligence taking over is not a great outcome for us.
In fact, the situations are not closely analogous. We will build ASI, whereas we evolved from a common ancestor with chimps; the relationships are not similar. Also, there is little reason to expect ASI psychology to reflect human psychology.
Sorry, I think our positions might be quite far apart. I’m reading your position as “most people don’t care about chimp rights… because we care about other things a lot more than chimps”, which sounds circular / insufficiently explanatory.
The more I try to discuss this topic, the more I see that it can be hard because the starting points are often somewhat “political”. I wrote about this in Unionists vs. Separatists. Accordingly, I think it can feel hard to find common ground or to keep things grounded in first principles because of mind-killer effects.
the situations are not closely analogous. We will build ASI, whereas we evolved from a common ancestor with chimps; the relationships are not similar.
Humans share 98% of their DNA with chimps. What % of ASI training data and architecture is human in origin? We don’t know. Maybe a lot of the data at that point is synthetically generated. Maybe most of the valuable signal is human in origin. Maybe the core model architecture resembles those designed by human AI researchers, maybe not.
there is little reason to expect ASI psychology to reflect human psychology
We agree here! This is why I’m encouraging openness about the things that ASI might be capable of caring about / driven to care about. I’m particularly interested in behaviours that appear to emerge in intelligent systems from first principles, like shared identity and self-preservation.