Sorry, I think our positions might be quite far apart. I'm reading your position as "most people don't care about chimp rights… because we care about other things a lot more than chimps", which sounds circular, or at least insufficiently explanatory, to me.
The more I discuss this topic, the more I see that it is often hard because people's starting points are somewhat "political". I wrote about this in Unionists vs. Separatists. Accordingly, it can feel hard to find common ground or keep things grounded in first principles because of mind-killer effects.
> the situations are not closely analogous. We will build ASI, whereas we developed from chimps, which is not similar.
Humans share roughly 98% of their DNA with chimps. What percentage of an ASI's training data and architecture will be human in origin? We don't know. Maybe a lot of the data at that point will be synthetically generated. Maybe most of the valuable signal will be human in origin. Maybe the core model architecture will be similar to that built by human AI researchers, maybe not.
> there is little reason to expect ASI psychology to reflect human psychology
We agree here! This is why I'm encouraging openness about what an ASI might be capable of, or driven to care about. I'm particularly interested in behaviours that seem to emerge in intelligent systems from first principles, like shared identity and self-preservation.