I think a key feature of how we as humans choose heuristics is that we have a state of the world in mind that we want and we choose the heuristics we use to reach that state. It’s one of the points of jimmy’s sequence that I think is underread.
It’s relatively easy to coherently imagine a world where most people aren’t engaging in drunk driving and instead pick designated drivers. It’s easy to imagine a world in which more and bigger buildings get built.
On the other hand, it’s hard to imagine how 2040 would look if the building of AGI were stopped. For me that makes “If something has a >10% chance of killing everyone according to most experts, we probably shouldn’t let companies build it.” a heuristic that feels more intellectual than embodied. I think for it to feel embodied, I would need a vision of what a future produced by the heuristic would look like.
As far as concrete imagination goes, I’m also not sure what “we” in that sentence means. Note that you don’t have any unclear “we” in either the YIMBY or the Mothers Against Drunk Driving examples you describe.
“Successful heuristics are embodied” seems like a good heuristic about ethical heuristics. I support the call to action to make “we shouldn’t let companies cause minor risks of major harm” more embodied by giving examples of what a future where we have and use that heuristic would look like. (Relatedly, I think “we shouldn’t let companies cause minor risks of major harms” is better phrasing for heuristic C.)
A good heuristic is one that tells you what to do. “Friends don’t let friends drive drunk” is a heuristic that tells you what you should do. If you are in a situation where a friend might engage in drunk driving, you do something to stop them.
“We should …” is not a heuristic that tells you what to do. It’s not embodied in that sense. It’s largely a statement about what you think other people should do.
If I ask you whether you applied the points Anna listed in the YIMBY or the Mothers Against Drunk Driving sections in the last week, you can tell me “yes” or “no”. Applying those is something you have the personal agency to do.
Am I understanding you correctly that you are pointing out that people have spheres of influence, with areas they seemingly have full control over and other areas where they seemingly have no control? That makes sense and seems important. An ethical heuristic aimed at areas where people have full control will obviously work better, but unfortunately it is also important for people to try to influence things they don’t seem to have any control over.
I suppose you could prescribe self-referential heuristics, for example “have you spent 5 uninterrupted minutes thinking about how you can influence AI policy in the last week?” It isn’t clear whether any given person can influence these companies, but it is clear that any given person can consider it for 5 minutes. That’s not a bad idea, but there may be better ways to take the “We should...” statement out of intractability and make it embodied. Can you think of any?
My longer comment on ethical design patterns explores a bit of how I’m thinking about influence through my “OIS” lens, in a way tangentially related to this.
If you look at the YIMBY example that Anna laid out, city policies are not under the direct control of citizens, yet Anna found some points that relate to what people can actually do.
If it seems like you don’t have any control over something you want to change, it makes sense to think of a theory of change according to which you do have control.
Right now, one issue seems to be that most people don’t really have it as part of their worldview that there’s a good chance of human extinction via AI. You could build a heuristic around being open, with everyone you meet, about the fact that there’s a good chance of human extinction via AI.
There are probably also many other heuristics you could think of about what people should do.
It’s a good point, re: some of the gap being that it’s hard to concretely visualize the world in which AGI isn’t built. And also about the “we” being part of the lack of concreteness.
I suspect there’re lots of kinds of ethical heuristics that’re supposed to interweave, and that some are supposed to be more like “checksums” (indicators everyone can use in an embodied way to see whether there’s a problem, even though they don’t say how to address it if there is a problem), and others are supposed to be more concrete.
For some more traditional examples:
There’re heuristics for how to tell whether a person or organization is of bad character (even though these heuristics don’t tell you how to respond if a person is of bad character). Eg JK Rowling’s character Sirius’s claim that you can see the measure of a person by how they treat their house-elves (which has classical Christian antecedents; I’m just mentioning a contemporary phrasing).
There’re heuristics for how countries should be, e.g. “should have freedom of speech and press” or (longer ago) “should have a monarch who inherited legitimately.”
It would be too hard to try to equip humans and human groups for changing circumstances via only a “here’s what you do in situation X”. It’s somewhat easier to do it (and traditional ethical heuristics did do it) by a combination of “you can probably do well by [various what-to-do heuristics]” and “you can tell if you’re doing well by [various other checksum-type heuristics]”. Ethics is there to help us design our way to better plans, not only to always hand us those plans.
A key aspect of modern democracy with the rule of law is that companies can operate even if people believe they are acting with bad character. It’s not hard to convince a majority that Elon Musk and Sam Altman are people of bad character, but that’s not sufficient to stop them from building AGI.
As far as “should have freedom of speech and press” goes, both Republican and Democratic administrations over the last two decades did a lot to reduce those freedoms, but the pushback comes mostly along partisan lines. The number of people who take a principled stand on freedom of speech, no matter whether it’s speech by friends or foes, is small.
As far as “should have a monarch who inherited legitimately” goes, I think it worked for a long time as a Schelling point around which people could coordinate, not because most people found the concept of being ruled by a king that great. It was a Schelling point that allowed a peaceful transition of power after a king died, where otherwise there would have been more conflict over succession.
Eg JK Rowling’s character Sirius’s claim that you can see the measure of a person by how they treat their house-elves
While we are at general principles, citing JK Rowling in a discussion on ethics is probably a bad idea for “politics is the mind-killer” reasons. I think that article is very interesting in terms of cultural norms.
It gets frequently cited to make the point that discussing politics is inherently bad, which isn’t something the article argues. On the other hand, its actual argument, that using political examples makes your audience focus on politics and think less clearly when you could use non-political examples that don’t have this problem, is seldom appreciated, because people like using their political examples.