I am a scientist by vocation (and also by profession), specifically a biologist. Entangled with this calling is an equally deep and long-standing interest in epistemology. The kinds of scientific explanations I find satisfying are quantitative (often statistical) models or theories that parsimoniously account for empirical biological observations in normative (functional, teleological) terms.
My degrees (BS, PhD) are in biology, but my background is interdisciplinary, including philosophy, psychology, mathematics/statistics, and computer science/machine learning. For the last few decades I have been researching the neurobiology of sensory perception, decision-making, and value-based choice in animals (including humans) and in models.
I also work in philosophy of science (focusing on the warrant of inductive inference and methodological rigor in exploratory research) and philosophy of biology (focusing on teleology and the biological basis of goal-directed behavior).
I consider myself a visitor to your forum, in that my intellectual context comes mainly from outside it. A few months ago I had never heard of this group and had never considered the possibility of catastrophic risk from AGI.
Update: I have updated on AI risk to the point that I now consider it a good reason to prioritize research that might mitigate that risk over other projects, and to refrain from lines of research that could advance progress toward AGI. I’m considering working directly on AI safety, but it remains to be seen whether this is a good fit.
I have several ideas that fall within the broad scope of LW’s interests, but which I have been developing independently for a long time (decades) outside of this conversation. Many of these ideas seem similar to ones others express here (which is exciting!). But in the interest of preserving potentially important subtle differences, I will articulate them in my own terms rather than recasting them in LW-ese, at least as a starting point. If it turns out I have arrived entirely independently at exactly the same ideas others have reached, that convergence lends more epistemic support than if my line of reasoning had been influenced by theirs, or vice versa.