Especially with vastly abstract topics (economics, philosophy, etc.), I find nothing substitutes for working through concrete examples. My brain’s ability to just gloss over abstractions is hard to overestimate. So I’ve sort of trained myself to sound an alarm whenever my feet “don’t touch bottom”… that is, when I can’t think of concrete examples of the thing I’m talking about.
For example: I remember a few years ago suddenly realizing, in the course of a conversation about currency exchange rates, that I had no idea how such rates are set. I had been working with them, but had never asked myself where they come from… I hadn’t really been thinking about them as information that moves from one place to another, but just vaguely taking them for granted as aspects of my environment. That was embarrassing.
Another useful approach in particular domains is to come up with a checklist of questions to ask about every new feature of that domain, and then ask those questions every time.
This is something I started doing a while ago as part of requirements analysis, and it works pretty well in stable domains, though I sheepishly admit that I dropped the discipline once I had internalized the questions. (This is bad practice and I don’t endorse it; checklists are useful.)
It’s not quite so useful as a general-analysis technique, admittedly, because the scale differences start to kill you. Still, it’s better than nothing.
Also, at the risk of repeating myself, I find that restating the thing-about-which-the-question-is-or-might-be is a good way to make myself notice gaps.
Could you post your checklist, or, if it is domain-specific, something more general based on it?
Yeah, I knew someone was going to ask. Sadly, I can’t, for reasons of proprietariness. But here’s a general sense:
For each high-level action to be taken: Is this a choicepoint (if so, what alternatives are there, who chooses, when is that choice made, and can it be changed later)? Is this a potential endpoint, intentional or otherwise (if so, do we have to clean up, and how? what happens next)? Is it optional (see choicepoint)? Should we log the action? Should we journal the action?
For each decision to be made: On what data structure does that decision depend? How does that data structure get populated, by what process, and is that process reliable (and if not, how do we validate the data structure)? What happens if that data structure is changed later? Where does that decision get logged? Where does the data structure get exposed, what processes care about it, and what do they need to do with it?
For each data structure to be instantiated and/or marshalled: What latency/throughput requirements are there? Are they aggregate or individual, and do they need to be monitored? Must they be guaranteed? How long does the data need to persist, and what happens then (e.g., archiving)? What’s the estimated size and volume?
Etc., etc., etc.
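One way to keep that kind of discipline from eroding (the failure mode I admitted to above) is to encode the questions as data rather than memory. Here’s a hypothetical sketch in Python; the category names, questions, and `review` helper are all illustrative, not the actual proprietary checklist:

```python
# Illustrative sketch: a per-category checklist encoded as data,
# so the questions get asked every time instead of recalled from memory.
# All names and question wordings here are assumptions, not the real list.

CHECKLISTS = {
    "action": [
        "Is this a choicepoint? (alternatives, who chooses, when, revocable?)",
        "Is this a potential endpoint? (cleanup needed? what happens next?)",
        "Is it optional?",
        "Should we log it?",
        "Should we journal it?",
    ],
    "decision": [
        "On what data structure does it depend?",
        "How is that structure populated, by what process, and is it reliable?",
        "What happens if the structure changes later?",
        "Where does the decision get logged?",
        "Where is the structure exposed, and which processes care about it?",
    ],
    "data_structure": [
        "What latency/throughput requirements apply?",
        "Are they aggregate or individual? Monitored? Guaranteed?",
        "How long must it persist, and what happens then (e.g., archiving)?",
        "What are the estimated size and volume?",
    ],
}


def review(kind: str, feature: str) -> list[str]:
    """Return the checklist questions to ask about a new feature of this kind."""
    return [f"{feature}: {q}" for q in CHECKLISTS[kind]]


# Example: reviewing a (hypothetical) new "cancel-order" action.
for line in review("action", "cancel-order"):
    print(line)
```

The point of the data-driven shape is just that adding a feature means running `review`, not trusting yourself to remember the questions.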