Majorly with you on this one. Piping in shell changed my life, and then piping in R (via magrittr, by way of dplyr) changed my life again.
AABoyles
Statistics. I took it in High school, but it was so poorly taught that I learned almost nothing from it. Now I use statistics every day.
Scripting Languages. I learned Java in High School and rode that knowledge all the way through my CS degree. But when I got a job as a software engineer, Java was among the worst languages available for solving the type of problems with which I was dealing. Looking back at my old code, I could have saved hundreds of hours if I had learned Python instead of Java.
Related, curves I haven’t climbed but wish I could/would/intended to:
A European foreign language. I dabbled in a variety of languages in school and settled on Chinese, wasting many years with little to show for it. If it had been an alphabetic language (better yet, a Latin-alphabet one), I'd have a much higher level of proficiency.
A Martial Art. I love the martial arts, but I’ve never been able to devote myself to any one of them.
The universe we perceive is probably a simulation of a more complex Universe. Departing from the standard simulation hypothesis, however, the simulation was not originated by humans. Instead, our existence is simply an emergent property of the physics (and stochasticity) of the simulation.
As a social scientist (who spends a LOT of time and effort developing rigorous methodology in keeping with the scientific method), I find your dismissal of my entire academic superfield disgraceful. Perhaps you’ve confused social science with punditry?
In the strictest sense, yes I am. I design, build and test social models for a living (so this may simply be a case of me holding Maslow’s Hammer). The universe exhibits a number of physical properties which resemble modeling assumptions. For example, speed is absolutely bounded at c. If I were designing an actual universe (not a model), I wouldn’t enforce upper bounds—what purpose would they serve? If I were designing a model, however, boundaries of this sort would be critical to reducing the complexity of the model universe to the realm of tractable computability.
On any given day, I’ll instantiate thousands of models. Having many models running in parallel is useful! We observe one universe, but if there’s a non-zero probability that the universe is a model of something else (a possibility which Ockham’s Razor certainly doesn’t refute), the fact that I generate so many models is indicative of the possibility that a super-universal process or entity may be doing the same thing, of which our universe is one instance.
I am also trying to start implementing a self-model of the type Brienne describes.
But I also have mental models of people with whom I interact every day (instead of myself). Unfortunately, I don't construct them consciously; they appear whenever I have issues with the real person in question. I'll argue with them before confronting the actual person (if a confrontation is called for). When I do enter a real discussion with the real person, I'm almost always struck by how wrong my models were, to the point of being actively damaging to my psychological well-being.
This trick is pretty powerful. I’m channeling a model of the more confident version of myself to post this comment, rather than just lurking like I normally do.
Thanks a lot!
Computational Social Science (which is extremely methodology-oriented). I was trained in Political Science, but the lines between the social sciences are pretty fuzzy. I do substantive work which could be called Political Science, Sociology, or Economics.
Oof. You just trampled one of my pet peeves: Social Science is a subset of the Sciences, not the Humanities.
There’s still a persistent anti-positivist streak in the Humanities in the US, but mostly positivism has just been irrelevant to the work of Humanities scholars (though this is changing in some interesting and exciting ways).
More importantly, the social sciences in the US are overwhelmingly positivist, even amongst researchers whose work is not strictly empirical. I wish I could take credit for those good influences, but I think you're probably the one deserving of kudos for managing to become a rationalist in such a hostile environment.
I doubt you can find a widely-acceptable definition of Data Science which is any less fuzzy. Computational Social Science (CSS) is a subset of Data Science. Take Drew Conway’s Data Science Venn Diagram: If your Substantive Expertise is a Social Science, you’re doing Computational Social Science.
Statistics is an important tool in CSS, but it doesn’t cover the other types of modeling we do: Agent-Based, System Dynamic, and Algorithmic Game Theoretic to name a few.
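To make the agent-based flavor of modeling concrete, here is a toy sketch (the agents, dynamics, and parameters are invented for illustration, not drawn from any real CSS model): agents repeatedly nudge their opinions toward a randomly encountered partner's opinion.

```python
import random

class Agent:
    """A minimal agent holding a single continuous 'opinion' in [0, 1]."""
    def __init__(self, opinion):
        self.opinion = opinion

    def interact(self, other, rate=0.1):
        # Move a fraction of the way toward the other agent's opinion.
        self.opinion += rate * (other.opinion - self.opinion)

def run(num_agents=50, steps=1000, seed=0):
    rng = random.Random(seed)
    agents = [Agent(rng.random()) for _ in range(num_agents)]
    for _ in range(steps):
        a, b = rng.sample(agents, 2)  # pick two distinct agents
        a.interact(b)
    return agents

agents = run()
opinions = [a.opinion for a in agents]
# Opinions stay in [0, 1] because each update is a convex combination;
# the spread tends to shrink as agents average toward one another.
print(min(opinions), max(opinions))
```

Swapping out `interact` for richer behavioral rules (thresholds, networks, heterogeneous rates) is where the substantive social science enters.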
I’m in the business of modeling. I do all three of those tasks, but the emphasis is definitely on the last.
Nope! Not to say that an intervention proposed by a computational social model has never influenced policy in real life—I just don’t know of any examples. That said, I’m workin’ on it.
Huh. It never occurred to me that imposing finite bounds might increase the complexity of a simulation, but I can see how that could be true for physical models. Is the assumption you’re making in the Low Mach/incompressible fluid models that the speed of sound is explicitly infinite, or is it that the speed of sound lacks an upper bound? (i.e., is there a point in the code where you have to declare something like “sound.speed = infinity”?)
Anyway, I’ve certainly never encountered any such situation in models of social systems. I’ll keep an eye out for it now. Thanks for sharing!
That makes a lot of sense. I asked about explicit declaration versus implicit assumption because assumptions of this sort do exist in social models. They’re just treated as unmodeled characteristics either of agents or of reality. We can make these assumptions because they either don’t inform the phenomenon we’re investigating (e.g. infinite ammunition can be implicitly assumed in an agent-based model of battlefield medic behavior because we’re not interested in the draw-down or conclusion of the battle in the absence of a decisive victory) or the model’s purpose is to investigate relationships within a plausible range (which sounds like your use case). That said, I’m very curious about the existence of models for which explicitly setting a boundary of infinity can reduce computational complexity. It seems like such a thing is either provably possible or (more likely) provably impossible. Know of anything like that?
Not true: it means you shouldn't use a normal distribution, and when you do you should say so up front. I see no reason not to apply normal distributions if your limit is high (say, greater than 4 sigmas; social science is much fuzzier than physical science). Better yet, make your limit a function of the number of observations you have: as the number of observations grows, so does the probability of landing in the long tail, so extend the limit accordingly.
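One way to make the limit a function of sample size: the expected maximum of n independent standard normal draws grows like sqrt(2 ln n) (a standard extreme-value result), so a limit a little above that keeps the truncation from biting. The "+1 sigma" slack below is an arbitrary choice for illustration.

```python
import math

def limit_in_sigmas(n):
    """Truncation limit (in sigmas) that grows with the number of observations n.

    sqrt(2 * ln(n)) approximates the expected maximum of n standard
    normal draws; the +1.0 is arbitrary headroom, not a derived constant.
    """
    return math.sqrt(2 * math.log(n)) + 1.0

for n in (100, 10_000, 1_000_000):
    print(n, round(limit_in_sigmas(n), 2))
```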
Sentence 1: True, fair point. Sentence 2: This isn’t obvious to me. Selecting random values from a truncated normal distribution is (slightly) more complex than, say, a uniform distribution over the same range, but it is demonstrably (slightly) less complex than selecting random values from an unbounded normal distribution. Without finite boundaries, you’d need infinite precision arithmetic just to draw a value.
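For the curious, drawing from a truncated normal can be as simple as rejection sampling (a sketch; for truncation far out in the tails this becomes inefficient and a specialized sampler such as `scipy.stats.truncnorm` is the better tool):

```python
import random

def truncated_normal(mu, sigma, low, high, rng=random):
    """Rejection-sample a Normal(mu, sigma) draw restricted to [low, high].

    Efficient when [low, high] covers most of the probability mass,
    since nearly every candidate draw is accepted.
    """
    while True:
        x = rng.gauss(mu, sigma)
        if low <= x <= high:
            return x

# Truncating at +/- 4 sigmas rejects only ~0.006% of candidate draws.
draws = [truncated_normal(0.0, 1.0, -4.0, 4.0) for _ in range(1000)]
print(min(draws), max(draws))
```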
It seems like "sim" is the strictly dominant action for X and all X*. Thus we should always press "sim". The more interesting question would be what would happen if the incentives for pressing "sim" were reversed for the agents (i.e., if the payoff for choosing "not sim" exceeded the payoff for "sim"). Then we'd have a cool mixed-strategy problem.
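Strict dominance can be checked mechanically. The payoff numbers below are made up purely to illustrate the check (the actual payoffs from the thought experiment aren't reproduced here):

```python
# Hypothetical payoff table: payoffs[action][opponent_action] -> payoff
# to the row player. Values are invented for illustration only.
payoffs = {
    "sim":     {"sim": 3, "not sim": 2},
    "not sim": {"sim": 1, "not sim": 0},
}

def strictly_dominates(a, b, payoffs):
    """True if action a pays strictly more than b against every opponent action."""
    return all(payoffs[a][opp] > payoffs[b][opp] for opp in payoffs[a])

print(strictly_dominates("sim", "not sim", payoffs))
```

When one action strictly dominates, no mixing is ever optimal; the mixed-strategy problem only appears once neither action dominates the other.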
Oh! I see what you’re saying. Definitely can’t argue with that.
I experienced a far less conscious and intentional version of noticing reflexively throughout my childhood. Specifically, I became highly attuned to the act of stepping on cracks in the pavement, in response to the schoolyard rhyme "Don't step on the crack or you'll break your Momma's back." I never labored under the delusion that some mystical force would cause gross harm to my mother if I did (or didn't) step on a crack; it was more of a game, one which lasted from early in elementary school through puberty. I have other gamified (if immature) examples of passive noticing: the Game comes to mind. (Apologies if anyone still cares about the Game, by the way.) Now, these parallels are shallow in that I wasn't meta-noticing as Brienne was. But they do suggest a concept I'll find useful in applying the principles of noticing and meta-noticing: namely, gamification. I question, however, whether gamification lends itself to moving the intention from conscious searching to subconscious noticing.