A not-very-coherent response to #3. Roughly:
Caring about visible power is a very human motivation, and I’d expect it will draw many people to care about “who are the AI principals”, “what are the AIs actually doing”, and a few other topics which have significant technical components.
Somewhat wild datapoints in this space: nuclear weapons, the space race. In each case, salient motivations such as “war” led some of the best technical people to work on hard technical problems. In my view, the problems the technical people ended up working on were often “vs. nature” and distant from the original social motivations.
Another take on this: some people want to work on technically interesting and important problems, but some of them want to work on “legibly important” or “legibly high-status” problems.
I do believe there are opportunities to steer some fraction of this attention toward some of the core technical problems (not toward all of them, at this moment).
This can often depend on framing; while my guess is that, e.g., you probably shouldn’t work on this, my guess is that some people who understand the technical alignment problems should.
This can also depend on social dynamics; your “naive guess” seems a good starting point.
Also: it seems there are many low-hanging fruits among low-difficulty problems which someone should work on. For example, at this moment many humans should be spending a lot of time trying to get an empirical understanding of what types of generalization LLMs are capable of.
On prioritization, I think it would be good if someone made some sort of curated list of “who is working on which problems, and why”. My concern with part of the “EAs figuring out what to do” is that many people are doing some sort of expert aggregation at the wrong level. (For instance, if someone basically averages your and Eliezer Yudkowsky’s conclusions, giving each 50% weight, I don’t think the result is a useful or coherent model.)