There can only be one X-risk. I wonder whether anyone here believes that our first X-risk is not coming from AGI, and that AGI is in fact our fastest counter to that first X-risk.
In that case you are resisting what you should embrace, and I don't blame you for that. Our age is driven by entertainment. As Elon Musk said on Twitter, "The most entertaining outcome is the most likely." The next breakthrough in AGI will only happen when someone presents an X-risk scenario "entertaining" enough for you to feast on. At that point all AI safety protocols will be reworked, and you will open your mind to a new, out-of-the-box picture of the situation you are in now.