
foodforthought

Karma: 250

I am a scientist by vocation (and also by profession), in particular a biologist. Entangled with this calling is an equally deep and long-standing interest in epistemology. The kinds of scientific explanations I find satisfying are quantitative (often statistical) models or theories that parsimoniously account for empirical biological observations in normative (functional, teleological) terms.

My degrees (BS, PhD) are in biology, but my background is interdisciplinary, spanning philosophy, psychology, mathematics/statistics, and computer science/machine learning. For the last few decades I have been researching the neurobiology of sensory perception, decision-making, and value-based choice in animals (including humans) and in models.

I also do work in philosophy of science (focusing on the warrant of inductive inference and methodological rigor in exploratory research) and philosophy of biology (focusing on teleology and the biological basis of goal-directed behavior).

I consider myself a visitor to your forum, in that my intellectual context comes mainly from outside it. A few months ago I had never heard of this group and had never considered the possibility of catastrophic risk from AGI.

Update: I have since updated on AI risk, such that I now consider it a good reason to prioritize research that might mitigate that risk over other projects, and to refrain from lines of research that could accelerate progress toward AGI. I’m considering working directly on AI safety, but it remains to be seen whether this is a good fit.

I have several ideas that fall within the broad scope of LW’s interests, but which I have been developing independently for a long time (decades) outside of this conversation. Many of these ideas seem similar to ones others express here (which is exciting!). But in the interest of preserving potentially important subtle differences, I will be articulating them in my own terms rather than recasting them in LW-ese, at least as a starting point. If it turns out that I have independently arrived at exactly the same ideas others have reached, this provides more epistemic support than if my line of reasoning had been overly influenced by theirs, or vice versa.

Can AI learn human societal norms from social feedback (without recapitulating all the ways this has failed in human history?)

foodforthought · 2 Jan 2026 22:11 UTC
7 points
3 comments · 4 min read · LW link

Does developmental cognitive psychology provide any hints for making model alignment more robust?

foodforthought · 2 Jan 2026 20:31 UTC
7 points
0 comments · 3 min read · LW link

Does evolution provide any hints for making model alignment more robust?

foodforthought · 2 Jan 2026 19:06 UTC
5 points
0 comments · 4 min read · LW link

Thoughts on epistemic virtue in science

foodforthought · 27 Dec 2025 1:06 UTC
12 points
1 comment · 4 min read · LW link

Alignment Fine-Tuning: Lessons from Operant Conditioning

foodforthought · 17 Dec 2025 16:57 UTC
5 points
4 comments · 10 min read · LW link

A Primer on Operant Conditioning

foodforthought · 16 Dec 2025 21:26 UTC
5 points
0 comments · 4 min read · LW link

Viewing animals as economic agents

foodforthought · 15 Dec 2025 18:13 UTC
10 points
2 comments · 5 min read · LW link

When is it Worth Working?

foodforthought · 13 Dec 2025 21:40 UTC
23 points
1 comment · 6 min read · LW link

Existential despair, with hope

foodforthought · 6 Dec 2025 20:48 UTC
10 points
0 comments · 1 min read · LW link

A Thanksgiving Memory

foodforthought · 27 Nov 2025 23:37 UTC
43 points
1 comment · 1 min read · LW link

Try seeing art

foodforthought · 20 Nov 2025 19:25 UTC
10 points
1 comment · 5 min read · LW link

a quick thought about AI alignment

foodforthought · 5 Oct 2025 0:51 UTC
10 points
4 comments · 1 min read · LW link

foodforthought’s Shortform

foodforthought · 7 Sep 2025 19:12 UTC
2 points
4 comments · 1 min read · LW link

HRT in Menopause: A candidate for a case study of epistemology in epidemiology, statistics & medicine

foodforthought · 21 Jul 2025 16:18 UTC
40 points
2 comments · 4 min read · LW link