I am a scientist by vocation (and also by profession), in particular a biologist. Entangled with this calling is an equally deep interest in epistemology (specifically the warrant of inductive inference). The kinds of scientific explanations I find satisfying are quantitative (often statistical) models or theories that parsimoniously account for empirical biological observations in normative (functional, teleological) terms.
My degrees (BS, PhD) are in biology, but my background is interdisciplinary, extending into philosophy, psychology, mathematics/statistics, and computer science/machine learning. For the last few decades I have been researching the neurobiology of sensory perception, decision-making, and value-based choice in animals (including humans) and in models.
Now and then I get fascinated by something that isn’t obviously related to my existing domain, sometimes leading to semi-expertise and/or semi-professional activity in seemingly random other fields, and occasionally to an outright career change. At the moment, that topic is AI alignment, which brings me here.
I consider myself a visitor to your forum, in that my perspective comes mainly from outside it.
Social media are anathema to me, but this forum seems to be an outlier.
I get excited about the possibility of contributing to AI alignment research whenever people talk about the problem being hard in this particular way. The problems are still illegible. The field needs relentless Original Seeing. Every assumption needs to be Truly Doubted. The approaches that will turn out to be fruitful probably haven’t even been imagined yet. It will be important to be able to learn and defer while simultaneously questioning, to come up with bold ideas while not being attached to them. It will require knowing your stuff, yet it might be optimal not to know too much about what other people have thought so far (“I would suggest not reading very much more.”). It’ll be important for people attempting this to be exceptionally good at rigorous, original conceptual thinking, thinking about thinking, grappling with minds, zooming all the way in and all the way out, constantly. It’ll probably require making ego and career sacrifices. Bring it on!
However, here’s an observation: the genuine potential to make such a contribution is itself highly illegible, not only to the field but perhaps to the potential contributor as well.
Apparently a lot of people fancy themselves to have Big Ideas or to be great at big-picture thinking, and most of them aren’t nearly as useful as they think they are. I feel like I’ve seen that sentiment many times on LW, and I’m guessing that’s behind statements like:
or the Talent Needs post, which said:
This presents a bit of a paradox. Suppose there exist a few rare, high-potential contributors not already working in AI who would be willing and able to take up the challenge you describe. It seems like the only way they could make themselves sufficiently legibly useful would be to work their way up through the ranks of much less abstract technical AI research until they distinguish themselves. That’s likely to deter horizontal movement of mature talent from other disciplines. I’m curious whether you think this is true; whether you think starting out in object-level technical research is necessary preparation for the kind of big-picture work you have in mind; or whether there are other routes of entry.