A good heuristic is one that tells you what to do. “Friends don’t let friends drive drunk” is a heuristic that tells you what you should do. If you are in a situation where a friend might engage in drunk driving, you do something to stop them.
“We should …” is not a heuristic that tells you what to do. It’s not embodied in that sense. It’s largely a statement about what you think other people should do.
If I ask you whether you applied the points Anna listed in the YIMBY or the Mothers Against Drunk Driving sections in the last week, you can tell me “yes” or “no”. Applying those is something you have the personal agency to do.
Am I understanding you correctly that you are pointing out that people have spheres of influence, with areas they seemingly have full control over and other areas where they seemingly have no control? That makes sense and seems important. An ethical heuristic aimed at places where people have full control will obviously work better, but unfortunately it is also important for people to try to influence things they don’t seem to have any control over.
I suppose you could prescribe self-referential heuristics, for example “have you spent 5 uninterrupted minutes thinking about how you can influence AI policy in the last week?” It isn’t clear whether any given person can influence these companies, but it is clear that any given person can consider the question for 5 minutes. That’s not a bad idea, but there may be better ways to take the “We should...” statement out of intractability and make it embodied. Can you think of any?
My longer comment on ethical design patterns explores a bit about how I’m thinking about influence through my “OIS” lens in a way tangentially related to this.
If you look at the YIMBY example that Anna laid out, city policies are not under the direct control of citizens, yet Anna found some points that relate to what people can actually do.
If it seems like you don’t have any control over something you want to change, it makes sense to look for a theory of change according to which you do have control.
Right now, one issue seems to be that most people don’t really have it as part of their worldview that there’s a good chance of human extinction via AI. You could build a heuristic around being open with everyone you meet about the fact that there’s a good chance of human extinction via AI.
There are probably also many other heuristics you could think of about what people should do.