I want to pull out one particular benefit that I think swamps the rest, and that in particular explains why I tend to gravitate toward moderation over extremism/radicalism:
Because I’m trying to make changes on the margin, details of the current situation are much more interesting to me. In contrast, radicals don’t really care about e.g. the different ways that corporate politics affects AI safety interventions at different AI companies.
Caring about the real-world details of a problem is often quite important in devising a good solution, and is arguably the reason why moderates in politics generally achieve their personal goals more often than radicals do.
Rationalists/EAs are generally better at this than most people, due to decoupling norms being more accepted, but there is a real problem when people forget that the real-life details of AI actually matter in designing solutions to the AI alignment problem.
Richard Ngo has talked before about how Eliezer's intuitions on this topic are similar to a mathematician's intuitions about a theorem, but his choice to stay abstract and avoid details pretty much blocks solutions to the problem. While this attitude is less prevalent than it was in 2018, it does still linger (AI control is probably the paradigmatic example of an indirect solution to the alignment problem that depends on the details of what AIs are actually capable of).
Similarly, Eliezer's That Alien Message and Einstein's Speed definitely have a vibe that you can reasonably expect to ignore empirical details and still get things right by pure algorithmic intelligence.
(Though at least for That Alien Message, the premise grants substantially more computation than humans actually have; here's a back-of-the-envelope calculation):
A BOTEC on the “that alien message” premise: You have a world with 10⁹ scientists, running at 10⁸× real-time (the internet is 1bps), each of whom has 15×10⁹ cortical neurons, each of which has 10³ connections, each of which computes 2 FLOP/spike, at avg 10Hz = 3×10³¹ FLOP/s
(source: https://x.com/davidad/status/1841959485365223606)
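To make the arithmetic explicit, here is a minimal sketch in Python that just multiplies out the quantities from the quoted estimate; the variable names are my own labels for the tweet's numbers, not anything from the source.

```python
# Sanity check of the back-of-the-envelope calculation quoted above.
# All quantities come from the tweet; variable names are illustrative.

scientists     = 1e9    # simulated scientists in the alien-message world
speedup        = 1e8    # subjective speedup over real time
neurons        = 15e9   # cortical neurons per scientist
connections    = 1e3    # connections per neuron
flop_per_spike = 2      # FLOP computed per connection per spike
spike_rate_hz  = 10     # average firing rate in Hz

total_flops = (scientists * speedup * neurons
               * connections * flop_per_spike * spike_rate_hz)

print(f"{total_flops:.1e} FLOP/s")  # -> 3.0e+31 FLOP/s
```

Multiplying out by hand: 10⁹ × 10⁸ × 1.5×10¹⁰ × 10³ × 2 × 10 = 3×10³¹ FLOP/s, matching the figure in the tweet.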
That doesn't mean abstraction is useless, but it does mean we have to engage with real-world details if we want to solve problems.