I would, for instance, predict that if Superintelligence had been published during the era of GOFAI, then, all else equal, it would have made a bigger splash, because AI researchers at the time were more receptive to abstract theorizing.
And then it would probably have been seen as outmoded and thrown away completely when AI capabilities research progressed into realms that vastly surpassed GOFAI. I don’t know that there’s an easy way to get capabilities researchers to think seriously about safety concerns that haven’t manifested on a sufficient scale yet.
Good comment. I disagree with this bit:
And then it would probably have been seen as outmoded and thrown away completely when AI capabilities research progressed into realms that vastly surpassed GOFAI. I don’t know that there’s an easy way to get capabilities researchers to think seriously about safety concerns that haven’t manifested on a sufficient scale yet.