To extend the metaphor: it is a well-known observation among linguists and amateur grammarians that the more you think about edge cases, the worse your intuition becomes.
I wonder if there is a similar hazard from spending too much time thinking about weird hypothetical situations.
“Hard cases make for bad law.”
I’m not /sure/ that there is such an effect, but I would be willing to bet in favor of it: not standing next to someone who obsesses over the Trolley Problem while I’m waiting for a train, that sort of thing.
Actually, this is a point that I’ve wondered about for a while. Trolley problems rely on the Least Convenient Possible World constraint to force your decision. While this is great for helping you to investigate your intuitions, it’s terrible for saving people from trolleys.
Maybe sometimes you need the guy who can make the cold, utilitarian call and choose who lives and who dies. But more often than not you want the guy who will actually try to think of a way to save everybody. I’d hate for someone to die on the tracks because the guy who could have saved him/her thought that derailing the trolley was cheating.
TLDR: Spending too much time on artificially constrained problems seems like it could optimize you away from being able to think in real-world situations.
“If the real world was maximally inconvenient, we would all be dead by now.”
This idea has practical applications for anyone who designs, tests, or releases complex systems for use: vehicles, computing systems (hardware or software), buildings, and so on. The first time you’re part of a team that does this, you spend much of the last days before release obsessing over the worst cases that could assail your system, and then you test for them. This is all well and good the first time.
It’s when you work on designing your next system that you have to watch out, because you can be so focused on edge cases that you fail to design a system that just basically works well without costing too much.
Hence, the counterbalancing principle of design: KISS (Keep It Simple, Stupid!)
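To make the edge-case/KISS tension concrete, here is a minimal sketch (the function and the numbers are hypothetical, not from the discussion above): a deliberately simple implementation that handles the ordinary case well, alongside the kind of worst-case test a pre-release pass might turn up.

```python
import math

def average(values):
    """Arithmetic mean, kept deliberately simple (KISS)."""
    if not values:
        raise ValueError("average() of an empty sequence")
    return sum(values) / len(values)

# The ordinary case: what the system mostly has to do well.
assert average([1, 2, 3]) == 2.0

# An edge case found by worst-case thinking: summing first can
# overflow to infinity even though the true mean (1e308) is finite.
assert math.isinf(average([1e308, 1e308]))
```

The point of the sketch is the trade-off: the simple version is correct, readable, and cheap for almost every real input, and whether the overflow case is worth redesigning around depends on the system, not on the worst case alone.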
I don’t think spending time on artificially constrained problems will optimize you away from thinking the right way in real-world situations; it just develops only one of the skills that is useful.
If you’re preparing to row across the Atlantic Ocean, it’s probably a bad idea to spend all your energy improving your upper-body strength; you should also learn a lot about weather, about nutrition and physiology, etc.
“This is an example of what I call ‘lifeboat questions’—ethical formulations such as ‘What should a man do if he and another man are in a lifeboat that can hold only one?’ First, every code of ethics must be based on a metaphysics—on a view of the world in which man lives. But man does not live in a lifeboat—in a world in which he must kill innocent men to survive.”
-Ayn Rand
There’s some debate about that.
This is my impression of UDT.
Which implies that you treat decision theory like grammar and morality, rather than as a reducible abstract theory.