Nice post. It prompts two questions, which you may or may not be the right person to answer:
How do you find good obsessions? Is it “just” a matter of being curious and widely read? What is the combination of life practice and psychological orientation that leads a person to become obsessed with one or more ideas in the way that you became obsessed with progress studies and with Fieldbook?
On your path to world-class status, how do you avoid the “middle-competence trap” (by analogy to the middle-income trap)? How do you handle having something you love, something you’ve gotten damn good at, better than most people will ever get, but where you can’t seem to break through to the level of the achievers who really make their mark on the field? Maybe this is more of an issue for me than for others. Perhaps, for example, it is “just” a matter of being willing to burrow deep into something to the exclusion of your other interests in life, and I’m too much of a generalist to do that. But it has been a problem for me twice now, and I really wonder whether it might be a common failure mode of this kind of questing process.
As a fellow Unionist, I would add that this leaves out another important Unionist/successionist argument: namely, that if x-risk is really a big problem, then developing powerful AI is likely the best way to reduce the risk that all intelligence (biological or not) vanishes from the solar system.
The premises of this argument are pretty simple. Namely:
If there are many effective “recipes for ruin,” to use Nielsen’s phrase, humans will find them before too long, with or without powerful AI. So if you believe there is a large x-risk arising from recipes for ruin, you should believe that risk remains large even if powerful AI is never developed. Maybe it takes a little longer to manifest without AI helping to find those recipes, but it’s unlikely to take, say, centuries longer.
And an AI much more powerful than (baseline, unaugmented, biological) humans is likely to be much more capable of at least defending itself against extinction than we are or are ever likely to become. It may or may not want to defend us, and it may or may not want to kill us all, but it will likely both want to preserve itself and be good at doing so.
So if x-risk is real and large, then the choice between developing powerful AI and stopping that development is a choice between a future in which at least AI survives (and maybe, as a bonus, it is nice enough to preserve us too) and a future in which we kill ourselves off anyway, without AI “help,” and leave nothing intelligent orbiting the Sun. The claimed third possibility, in which humanity preserves a worthwhile existence unaided, is much less probable than either of these, even if AI development is stoppable.
Fwiw I do not work in AI and so do not have the memetic temptations the OP theorizes as a driver of successionist views.