Thanks for the reply, I think we do mostly agree here. Some points of disagreement might be that I’m not at all confident that we get a truly large scale warning shot before AI gets powerful enough to just go and kill everyone. Like I think the threshold for what would really get people paying attention is above “there is a financial disaster”, I’m guessing it would actually take AI killing multiple people (outside of a self-driving context). That could totally happen before doom, but it could also totally fail to happen. We probably get a few warning shots that are at least bigger than all the ones we’ve had before, but I can’t even predict that with much confidence.
Yes, I think we understand each other. One thing to keep in mind is that the different stakeholders in AI are NOT utilitarians; they have local incentives they individually care about. Given that COVID didn't stop gain-of-function research, getting EVERYONE to care would require a death toll larger than COVID's. However, getting someone like the CEO of Google to care would "only" require something like a half-a-trillion-dollar lawsuit against Microsoft over some issue relating to their AIs.
And I generally expect those types of warning shots to be pretty likely given how gung-ho the current approach is.