I’ve also found this useful in my software engineering career. However, there’s a flipside: when something is both easy to do and easy to undo, the overhead of questioning the requirements sometimes leads to not doing enough experiments. For me it depends, per task, on the ratio (how hard to implement + how hard to undo) / (how hard to check why I want to implement it). If it’s motivated by vague intuition, or by a user request, AND it’s low overhead to (implement | maintain | undo), then I often just do it and undo it later if I don’t like it. (I’ve mostly been on teams where I was granted the autonomy to just do things based on my own taste.)
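To be concrete about the ratio, here’s a toy sketch of that heuristic. All names and the threshold are my own illustrative choices, not anything specified above:

```python
# Toy sketch of the "just do it vs. question the requirement" heuristic.
# Costs are in arbitrary effort units; the threshold of 1.0 is illustrative.

def should_just_do_it(implement_cost: float,
                      undo_cost: float,
                      cost_to_check_motivation: float,
                      threshold: float = 1.0) -> bool:
    """Return True when trying the change and reverting it later is
    cheaper than investigating why you wanted it in the first place."""
    ratio = (implement_cost + undo_cost) / cost_to_check_motivation
    return ratio < threshold

# A vague-intuition feature that's trivial to add and revert:
print(should_just_do_it(implement_cost=0.5, undo_cost=0.2,
                        cost_to_check_motivation=3.0))  # True

# A change that's expensive to implement and hard to roll back:
print(should_just_do_it(implement_cost=5.0, undo_cost=5.0,
                        cost_to_check_motivation=1.0))  # False
```

The point of the sketch is just that the decision is a ratio, not an absolute: when the denominator (understanding the motivation) is the expensive part, experimenting first can be the cheaper way to learn.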
Also, sometimes the answer you get from the requirement-giver is “this is my area of expertise; you can reply with your own requirements and we can debate”, which is face-saving, partially-guess-culture speak for “you don’t get to question this requirement in full depth, because if you did you’d be optimizing in a domain that is (according to me, the person you’re asking) too hard to get you up to speed on right now; it’s my job to optimize in that domain, and the domain has significant complexity I can’t explain to you efficiently”. E.g., one often gets this from lawyers, hardware engineers, etc. These are the people whom the OOP would tell you to push through anyway; I think he’s specifically asking people to throw away the safety margin of “someone else has a reason to ask for this” and try to know everything themselves. Sometimes that’s reasonable (having that ethos has quite often been extremely helpful for me), but I think a lot is lost by having to download a full copy of a requirement’s motivation into one’s head in order to act on it.
(Both of my comments are trying to get at a reason I think LessWrong is systematically underperforming what it could be, due to what looks to me like a culture problem on the team. I’m motivated by irritation at a category of community feedback going unapplied. That the issue I’m describing also seems to apply to the companies run by the OOP backs up my concern: a tendency to underuse data from sources that can’t explain themselves in the ontology the team uses internally. This probably explains less than half of the variance in the lack of responsiveness; the top item on the list is probably just tech debt, same as at any org that takes a while to apply user feedback.)
I’m assuming you’re not talking about object-oriented programming, but I can’t figure out what this acronym refers to.
Original original poster. In this context, the OP is habryka.