What would you have done if you had meant to give him a bad name but nonetheless had to refrain from simply lying?
That’s fairly easy. There are many ways to do that, although he is already pretty good at it himself.

First I would start acting like Otto E. Rössler with respect to risks from AI. Then I would write to as many AI researchers, computer scientists, popular bloggers, politicians, etc. as possible about how THIS IS CRUNCH TIME: “it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us.” And to back up my claims I would frequently cite posts and papers written by Yudkowsky and talk about how he is probably the most rational person alive and how most AI researchers are just biased.
> Then I would write to as many AI researchers, computer scientists, popular bloggers, politicians, etc. as possible about how THIS IS CRUNCH TIME,
Not to nitpick or anything, but since you don’t actually seem to believe it’s “crunch time,” the strategy you outlined would indeed be a series of lies, regardless of whether Eliezer believes it to be true.
> That’s fairly easy. There are many ways to do that, although he is already pretty good at it himself.
>
> First I would start acting like Otto E. Rössler with respect to risks from AI. Then I would write to as many AI researchers, computer scientists, popular bloggers, politicians, etc. as possible about how THIS IS CRUNCH TIME: “it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us.” And to back up my claims I would frequently cite posts and papers written by Yudkowsky and talk about how he is probably the most rational person alive and how most AI researchers are just biased.
No lies. Hands-free.