Koen—thanks for the link to ACM FAccT; looks interesting. I’ll see what their people have to say about the ‘aligned with whom’ question.
I agree that AI X-risk folks should probably pay more attention to the algorithmic fairness and self-driving car communities, to see what general lessons about alignment can be drawn from those specific domains.