But how do they plan to stop an AI apocalypse, or is that one of those things they haven’t figured out yet?
I recommend you read the “Brief Introduction” mentioned in the posting you’re commenting on:
http://singinst.org/riskintro/index.html