In the counterfactual world where Eliezer was totally happy continuing to write articles like this and being seen as the “voice of AI Safety”, would you still agree that it’s important to have a dozen other people also writing similar articles?
I’m genuinely lost on the value of having a dozen similar papers—I don’t know of a dozen different versions of fivethirtyeight.com or GiveWell, and it never occurred to me to think that the world is worse for only having one of those.
I don’t think making this list in 1980 would have been meaningful. How do you offer any sort of coherent, detailed plan for dealing with something when all you have is toy examples like ELIZA?
Machine learning wasn’t a practical reality back then—everything computers did in 1980 was relatively easy for humans to understand, in a very basic step-by-step way. Making a 1980s computer “safe” would have been a trivial task, because we hadn’t yet developed any technology that could do something “unsafe” (i.e. beyond our understanding). A computer in the 1980s couldn’t lie to you, because you could just inspect the code and memory and find out what was actually going on.
What makes you think this would have been useful?
Do we have any historical examples to guide us in what this might look like?