If you’re curious about how I select what goes in the newsletter: I almost put in this critical review of the book, in the spirit of presenting both sides of the argument. I didn’t put it in because I couldn’t understand it.
My best guess is that the author is arguing that “we’ll never get superintelligence”, possibly because intelligence isn’t a coherent concept, but there’s probably something more that I’m not getting. If the review were only saying “we’ll never get superintelligence”, without any new supporting arguments, I wouldn’t include it in the newsletter, since we’ve seen and heard that counterargument more than enough.
They also erred in implicitly arguing that because unaligned behavior doesn’t seem intelligent to them, we have nothing to worry about from such an AI, since it wouldn’t be “intelligent”. I think leaving the review out was a good choice.